Film Impact Rating

Our paper “Australian films at large: expanding the evidence about Australian cinema performance” was explicitly intended as a discussion starter – to open a debate about how we (the industry, the public, commentators and scholars) identify and measure the value of Australian films. To this end our draft model for the Film Impact Rating (FIR) is readily available as a public tool (http://www.reelmeasures.com) to encourage members of the industry and the public to actively “think through” their own strategies and criteria for evaluating the impact of Australian films.

In preparing the FIR we were particularly motivated to address the question of an expanded impact measure for Australian films in the wake of a broad public discussion about the “failure” of Australian cinema based on a single criterion – domestic box office performance. Films that might otherwise be described as success stories, such as The Babadook, have been written off in industry commentaries because they ‘failed’ to generate sufficient local audience interest. The Babadook is an interesting case in point, since it has performed relatively well on other aspects of the FIR, such as critical evaluation, coverage and international box office.

We are especially pleased that Bruce and Geoff have waded in and made a really useful contribution to this discussion. Their Response makes a number of excellent points that we would like to take up in detail.

1. Is the FIR too skewed to cultural factors?

Our first draft of the FIR seeks to strike a balance across fourteen units of measurement for which reliable data are readily available, grouped into three categories: Commercial factors (incorporating the traditional box-office return measure and box office relative to production budget size, 24%), Commentary factors (covering critic and user ratings as well as award nominations and wins, 37%) and Coverage factors (including data concerning the location, volume, and saturation of film screenings, 39%). These broadly align with three industrial aspects of cinema diffusion (distribution, criticism and exhibition). Bruce and Geoff suggest our weightings are too skewed to cultural rather than commercial factors.

A detailed breakdown of all the factors we took into consideration demonstrates that we were primarily interested in identifying balance across all 14 factors used to produce the FIR rather than thinking about how these ‘added up’ in terms of the three categories (a short illustrative sketch of how such a weighted composite can be computed follows the breakdown):

Film coverage (Total 39%)

  • Number of countries visited 9%
  • Number of domestic screenings 8%
  • Number of international screenings 8%
  • Number of venues the film screened in 7%
  • Venue saturation 7%

Financial performance (Total 24%)

  • Domestic box office receipts 7%
  • International box office receipts 7%
  • Production budget as a percentage of worldwide box office 10%

Critical Acclaim (Total 37%)

  • Average user rating on IMDb 7.5%
  • Number of IMDb users polled 4%
  • Average critics rating on Rotten Tomatoes 7.5%
  • Number of critics polled on Rotten Tomatoes 4%
  • Number of award nominations received 6%
  • Number of awards won 8%
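
To make the mechanics of this weighting scheme concrete, the following sketch (in Python) shows how a weighted composite of this kind could be computed. The variable names, the min-max normalization and the sample handling are simplifying assumptions for illustration only, not a restatement of the published methodology; in particular, a variable such as production budget as a percentage of worldwide box office, where a lower raw value indicates stronger performance, would need to be inverted before weighting.

```python
# Illustrative sketch of a weighted composite score in the spirit of the FIR.
# Variable names and normalization choices are assumptions for this example.

WEIGHTS = {
    # Film coverage (total 39%)
    "countries_visited": 0.09,
    "domestic_screenings": 0.08,
    "international_screenings": 0.08,
    "venues_screened_in": 0.07,
    "venue_saturation": 0.07,
    # Financial performance (total 24%)
    "domestic_box_office": 0.07,
    "international_box_office": 0.07,
    "budget_as_pct_of_worldwide_box_office": 0.10,  # lower is better; invert before weighting
    # Critical acclaim (total 37%)
    "imdb_user_rating": 0.075,
    "imdb_users_polled": 0.04,
    "rt_critic_rating": 0.075,
    "rt_critics_polled": 0.04,
    "award_nominations": 0.06,
    "awards_won": 0.08,
}


def min_max(values):
    """Scale a list of raw values to the 0-1 range across the sample."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]


def composite_scores(films, weights=WEIGHTS):
    """Return a weighted composite score per film, normalizing each variable across the sample."""
    titles = list(films)
    scaled = {var: min_max([films[t][var] for t in titles]) for var in weights}
    return {
        title: sum(weights[var] * scaled[var][i] for var in weights)
        for i, title in enumerate(titles)
    }
```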

We thoroughly welcome a debate about whether we are “too focused” on cultural factors, reflected in the weightings assigned to the variables within Critical Acclaim (37%), or whether Bruce and Geoff are “too focused” on commercial factors. Indeed it is our fervent hope that this higher-level discussion of how we measure the success or failure of Australian films is taken up more widely.

Bruce and Geoff suggest that “FIR seems more in the nature of an abstraction than an empirical comparative measurement of each film's commercial and cultural performance” and contend that we propose that a film such as Mystery Road has a third the impact of The Great Gatsby because its FIR is numerically one third of Gatsby’s. If this were the case the FIR would certainly be counter-intuitive, but applying a benchmarking exercise such as Bruce and Geoff propose to films produced with such vastly different budgets misses the point. A size ten shoe isn’t twice as big as a size five shoe. Mystery Road and The Great Gatsby are apples and oranges for the purpose of exact statistical comparison. There are simply not enough similarly budgeted films in the Australian case study over the period considered to create a series of indices based on the comparison of similar films. Instead, Mystery Road’s FIR is better grasped by understanding “impact” as a measure of success relative to opportunity rather than an exact point on a fixed scale. So whilst the FIR is a fledgling attempt to make sense of the impact of highly differentiated titles relative to each other, it is not intended to be read as a ratio scale.

At its heart the FIR is intended to open up alternative ways of measuring or conceptualizing film success that extend beyond box office, not to superimpose expectations of impact based on what is already valued (such as budget size or box-office performance). In this sense we are happy to let the data speak for themselves without determining whether an individual film’s rating is somehow right or wrong. The weightings we assigned are based on our informed judgments and a desire to produce more balance in the evaluation of film success, but equally we recognize that the weightings are open to debate. The benefit of the online tool at the FIR website (http://reelmeasures.com) is that users can manipulate the weightings to generate their own ranking algorithm and see how each of the components affects the results.
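
As a brief illustration of what re-weighting can do, the invented example below shows two hypothetical films whose relative ranking flips when the three category weights are shifted from our draft values towards a more commercial emphasis; the films and their normalized category scores are made up for the example.

```python
# Invented example: two hypothetical films ranked under the draft category
# weights and under a more commercially weighted alternative a user might choose.

films = {
    "Film A": {"coverage": 0.80, "financial": 0.20, "acclaim": 0.70},
    "Film B": {"coverage": 0.30, "financial": 0.90, "acclaim": 0.40},
}

draft_weights = {"coverage": 0.39, "financial": 0.24, "acclaim": 0.37}
commercial_weights = {"coverage": 0.20, "financial": 0.60, "acclaim": 0.20}


def composite(scores, weights):
    return sum(weights[category] * scores[category] for category in weights)


for label, weights in (("draft", draft_weights), ("commercial", commercial_weights)):
    ranking = sorted(films, key=lambda title: composite(films[title], weights), reverse=True)
    print(label, ranking)  # draft weights rank Film A first; commercial weights rank Film B first
```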

2. The three categories of the FIR should be expanded to five.

This is definitely an interesting approach. Given the apparent relationship between production budget and FIR rank, this suggestion offers an even more transparent way to present our data. The key issue here, however, is transparency: the film industry provides very little open access to performance data at the sectoral level. Some of the production budget figures used to calculate the FIR were provided to us in confidence, and our methodology was designed to ensure that these figures were appropriately masked. Since precise figures were not available in all instances, we instead used incrementally scaled production-budget bands to distinguish specific titles.
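
As a rough sketch of what banding looks like in practice, the example below maps a precise budget figure to a band label so that the figure itself never appears in the published data; the band boundaries shown are hypothetical, not the boundaries used in the study.

```python
# Hypothetical illustration of masking precise budget figures with scaled bands.
# The band boundaries below are invented for the example.

BANDS = [
    (1_000_000, "under $1m"),
    (5_000_000, "$1m-$5m"),
    (20_000_000, "$5m-$20m"),
    (50_000_000, "$20m-$50m"),
]


def budget_band(budget):
    """Return the band label for a precise budget so the figure itself is never exposed."""
    for upper_bound, label in BANDS:
        if budget < upper_bound:
            return label
    return "over $50m"
```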

3. Incorporating ancillary access data

As it stands the FIR is a measure of a film’s cinematic impact. To support this study, a range of available data were deployed. We were not in a position to add television and other online measures (in large part due to a lack of accessible data), but they would certainly make a useful supplement to understanding the impact of films beyond the walls of the cinema. Consequently, the FIR is a measure of impact restricted to formal theatrical and festival contexts only, and should be understood accordingly.

4. Questioning the intention of the authors

Bruce and Geoff’s Response imputes an intent to the article that does not bear scrutiny. It suggests, for example, that the FIR statistical model is “overdetermined” because we mean to use the FIR to support a covert case for the public funding of feature films in Australia based on cultural rather than commercial factors. We are an interdisciplinary team (film scholar, cultural economist, geo-spatial scientist) with varying degrees of academic investment in the film industry per se. We are, however, genuinely excited by the possibilities for rethinking the conventions by which the industry is understood and held up for evaluation. And whilst in the current political environment argumentum ad hominem seems almost mandatory, as a matter of principle we are not interested in second-guessing the personal motivations of our respondents. If this discussion of value in the film industry suggests a different or better way to measure success, this would still be a wonderful outcome for our endeavors.

To incorporate the cultural is not in any way to “overdetermine” the results but rather to account for factors that contribute to value “beyond price” and that are not measured in purely monetary terms. Cultural economics is littered with studies that use a variety of methods, such as contingent valuation, to account for cultural value that is not easily or directly quantifiable via normal price signals but still certainly exists. It would be remiss to ignore this.

If it becomes apparent that we have given cultural factors more weight than policymakers or industry observers are inclined to apply, then those interested can use our online FIR tool and assign their own weightings to produce an alternative impact rating. We are collecting these alternative weightings and will be incorporating this feedback into our modeling in the future.   

The initial feedback from the website indicates that users of the tool place the greatest importance on the variable Production Budget As A Percentage of Worldwide Box Office, closely followed by Average Critics Rating on Rotten Tomatoes. If we break this feedback into categories, users placed the most emphasis (based on average weighting assigned) on Critical Acclaim, followed by Financial Performance, with the least importance placed on Film Coverage (users on average gave the variables Venue Saturation and Number of Venues the Film Screened In particularly low weightings). This certainly paints a very different picture from our idea of measuring film impact, in which we placed a large emphasis on Film Coverage. It also differs from Bruce and Geoff’s preference for greater weighting on commercial factors. We will be able to use this information to reconsider our ideas of impact. The online tool was designed to engage the public, create discussion, and provide an opportunity for the public to enhance our own work and understanding.

The Response also suggests that the standard statistical process of normalization was applied specifically in order to “rectify a wrong”. Given that normalization was applied consistently to all variables for every film in the dataset, and was a necessary statistical step, there was no premeditated intent to statistically manipulate or tweak the results of the FIR to favor certain titles. The application of different weightings to the variables will obviously affect the resultant FIR and the rank order of films in the sample. In comparing the rank order and FIR of different film titles, it is primarily the weightings given to each variable, rather than the normalization techniques that scale the data, which matter.
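
The point can be verified with a quick check using invented figures: min-max normalization, applied in the same way to every film, rescales a variable without reordering the films on that variable.

```python
# Quick check with invented figures: min-max normalization rescales a variable
# without changing the order of the films on that variable.

def min_max(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]


raw_box_office = [250_000, 4_800_000, 27_000_000]
scaled = min_max(raw_box_office)

order_before = sorted(range(len(raw_box_office)), key=raw_box_office.__getitem__)
order_after = sorted(range(len(scaled)), key=scaled.__getitem__)
assert order_before == order_after  # normalization preserves the ranking on each variable
```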

To reiterate: our intention in preparing the FIR was to prompt debate and discussion about how to value and account for film impact in a more holistic way that moves beyond domestic box office. We believe that our underlying FIR estimation technique is sound, and that while assigned weightings necessarily involve judgments, this does not render the FIR biased or overdetermined when those judgments are transparently stated. We hope that those interested from the public, the industry and academia will visit http://www.reelmeasures.com/ and provide us with further insight into what specific factors contribute most to film impact. The Response from Geoff and Bruce is an incredibly useful addition to this intention.