How do you determine quantitative worth for a de facto subjective experience? Is there even any point? Can you make related “values” actually relatable if those “values” are arguably arbitrary?
I’ve toyed with somehow ‘rating’ the films, books etc. that I keep track of in my Month in Media posts since I started that series. It initially seems like a logical framework to include; after all, each piece of media is just a single node in a much larger and, crucially, definable category such as “Film” or “TV”. Each month I state my opinions on a selection of these data points, but why do I bother? Collectively, reviews alone don’t allow for any insight into viewing habits or present any meaningful conclusions. I can look over the lists that I’ve effectively generated and work out whether I watch more science fiction than animation, or vice versa, but that can only offer broad strokes without any real depth.
However, the moment you start attempting to quantise reviews, which are by their nature highly personal and often adapt over time, you hit some pretty big issues – see the opening enquiries. The first hurdle is that your own thoughts and opinions can vary depending on your emotions, your current location or the setting in which the media is consumed. A decent movie, when watched with a group of good friends, can become a brilliant movie simply due to the associations that are created.

The second hurdle is that, even if you can overcome external influences, you still have to apply the ratings consistently. In its most basic form, this implies a need for a checklist: a system of simple requirements for a piece of media to rank within a preset bounding range. But that checklist must, therefore, be utterly fair; it cannot weight one element above another, nor arbitrarily inhibit the progression of positive or negative elements. If you were to dissect a film rating, for example, would you expect the soundtrack to receive equal weighting to the direction? What happens, then, when a film is considered a ‘must-watch’ because of the direction but the score is utterly laughable?

This highlights the third and final issue with rating content: once something is quantised, it can be ranked. Every review no longer sits independently of the others; instead, they become utterly connected. It may be simple to decide that a film is a good film, but is it better than that other good film you saw last week? Should it get the same score, a higher one or a lower one? Once ranked, the collection has an intertwined meaning, a meaning that is (circularly) only as strong as the methodology behind it. That means that once you’ve put everything in place and gone through all that effort, if at any time you realise that some element is incorrectly weighted, missing or false, the entire dataset is corrupted.
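The weighting problem can be seen with a toy sketch. The categories, scores and weights below are entirely hypothetical, purely to show how changing a single weight silently reorders everything that has already been rated:

```python
def film_score(ratings, weights):
    """Weighted average of per-category ratings (each out of 10)."""
    total_weight = sum(weights.values())
    return sum(ratings[c] * weights[c] for c in ratings) / total_weight

# Two hypothetical films: one carried by its direction, one by its soundtrack.
film_a = {"direction": 9, "soundtrack": 3}
film_b = {"direction": 5, "soundtrack": 8}

equal = {"direction": 1, "soundtrack": 1}
skewed = {"direction": 3, "soundtrack": 1}  # direction weighted 3x

print(film_score(film_a, equal), film_score(film_b, equal))    # 6.0 6.5 -> B ranks higher
print(film_score(film_a, skewed), film_score(film_b, skewed))  # 7.5 5.75 -> A ranks higher
```

The films themselves haven’t changed, only the methodology has, yet the ranking flips entirely – which is exactly why a late correction to a weight corrupts the whole dataset.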
Years ago I “wrote” video game reviews for a friend’s website, with the aim of getting them pushed out to several big gaming forums at the time. The ambitions never paid off (I was consistently too young) but the experience was my first attempt at fitting subjective experiences within rating systems. Different websites ranked video games differently. Places like IGN used an x/10 system, Nintendo Official Magazine rated out of 100%, Ctrl+Alt+Del based worth on five stars. My reviews tried to fit all these systems (and more) by heavily compartmentalising my scoring system. The soundtrack was x/5, the animation x/10, the story x/10 and so on until I had a final score, hopefully weighted in a balanced manner, resulting in a total divided into parts, each of which I could then convert into one part of a star, percentage point etc. The whole system took me days to come up with and refine so, when it was ‘done’, I wanted to test it. I wrote a couple of reviews of popular games at the time (Twilight Princess is the only one I can remember) and used my checklist to score them. I converted the scores into the various rating systems and then compared my given rank with that of the actual website. Needless to say, most were quite different, but I was expecting that. My opinion would not necessarily gel with the other reviewers’. What I was surprised by was how differently my review scores ranked within each organisation’s charts. Sometimes a game was right up near the top of the pile on one website but in the middle somewhere else. Between services, a single review score could dramatically alter the perceived worth of a game from being a GOTY contender to an average, barely notable experience. Internally, my reviews were consistent (I made sure of that) but when placed in the context of another person’s ranking system… they fell apart.
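The conversion step itself is simple arithmetic. Here’s a rough sketch of the idea – the category names and maxima are hypothetical, not the actual checklist I used back then:

```python
# Per-category scores as (earned, maximum) pairs; values are made up.
categories = {"soundtrack": (4, 5), "animation": (7, 10), "story": (8, 10)}

earned = sum(score for score, maximum in categories.values())
possible = sum(maximum for score, maximum in categories.values())
fraction = earned / possible  # 19 / 25 = 0.76

# Re-express the same underlying fraction in each site's scale.
out_of_ten = round(fraction * 10, 1)
percent = round(fraction * 100)
stars = round(fraction * 5 * 2) / 2  # nearest half star

print(out_of_ten, percent, stars)  # 7.6 76 4.0
```

The conversion is lossless enough on its own; the problem, as I found, is that a 76% means something completely different depending on how every *other* score in that organisation’s chart is distributed.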
In other words, I’ve overthought this to an extreme level and been burnt in the past, so when it came to writing my MiMs I just didn’t bother. But now it’s the end of the year and I would like a way to do a “Year in Review” type set of articles. I need to be able to rank the films, books etc. I’ve consumed in 2016, but I don’t want to do it from memory; that adds a secondary level of subjectivity to proceedings. No, I want to see what I thought of them when I wrote their reviews, not what I think of them now that months have been and gone. So I’m going to give some thought to a simple, yet fair, set of criteria that I can use to quantise my enjoyment of a product. I’ll begin with movies and TV only, as books are too different a beast to be mixed in. I’m already keeping track of my initial gut reaction over on Trakt for the films I’ve watched so far in January. Hopefully I can use that to backfill once I’ve sussed out a system I’m content with. In the meantime, I guess it’s time to start trying out some systems!