Indie Games Are Less Democratic Than The Ivory Tower

The recent IGF kerfuffle has reinforced my view that the judging of submissions to games conferences is broken. (I tried to pick a fight on this topic with Ian Schreiber back at the GDC deadline, but he is a much nicer and more gracious person than I am, and I didn’t get anywhere.) I think there is a straightforward fix for this: Clone the review process of SIGCSE, the computer science education conference.

Is it really broken?

Let’s compare some statistics about IGF and SIGCSE:

                          IGF    SIGCSE
Submissions               570       289
Reviews per submission      8       6.6
Total reviews            4560      1907
Reviewers                 150       810
Reviews per judge        30.4      2.35

This analysis is based on data from the IGF 2012 press release and the SIGCSE 2012 program. My conclusion is that IGF judges are asked to do an impossible task. Many entrants claim that IGF judges don’t actually play the games, and this seems totally unsurprising to me.
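As a sanity check on the table above, the derived figures (total reviews and reviews per judge) follow directly from the published counts. A quick back-of-the-envelope computation, using only the submission, reviews-per-submission, and reviewer counts from the press release and program:

```python
# Derive the judging-load figures from the raw counts in the table above.
venues = {
    "IGF":    {"submissions": 570, "reviews_per_submission": 8.0, "reviewers": 150},
    "SIGCSE": {"submissions": 289, "reviews_per_submission": 6.6, "reviewers": 810},
}

for name, v in venues.items():
    total_reviews = v["submissions"] * v["reviews_per_submission"]
    reviews_per_judge = total_reviews / v["reviewers"]
    print(f"{name}: {total_reviews:.0f} total reviews, "
          f"{reviews_per_judge:.2f} reviews per judge")
# IGF: 4560 total reviews, 30.40 reviews per judge
# SIGCSE: 1907 total reviews, 2.35 reviews per judge
```

The roughly thirteen-fold difference in per-judge workload is the whole argument in one number.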

(One note before we move on to recommendations: here I’m only talking about the first round of judging, the nominating process. With IGF, the nominated games get evaluated by jurors. At computer science conferences, the best paper and best poster awards are selected by the program committee from the accepted papers. So the process of selecting the best entries, once the “good” entries are identified, is already similar.)

Not a shadowy elite

In his response to the Rotting Cartridge post, IGF organizer Brandon Boyer said that the IGF judging is not run by a shadowy elite who conspire to promote their own. I believe him. They are an accidental elite. That they are an elite is unarguable: If the ratio of submitters to deciders is worse in the IGF than in the Ivory Tower, what else can we call it?

Solution 1: All submitters are judges

If you are an IGF person who has never been to a computer science conference, you might wonder how SIGCSE manages to find 800 volunteer judges, year after year. The answer is pretty simple: Almost everyone who submits something gets asked to judge. And, year after year, they tend to get asked back to judge, even if they do not submit again. And we (prospective judges) generally say yes because the burden is low (2 reviews? Sure, no sweat!), and because we want to pay it forward. We remember how we felt when we saw the 7 reviews of our submission, and how much we appreciated the helpful feedback (and hated the 1 or 2 crank reviews). We feel motivated to do a good job.

I have two students who submitted to IGF (one to the student competition, and one via the Pirate Kart), and they would have been excellent judges. If they had been given 2 or 3 games to judge, they would have played the games thoroughly and thoughtfully. The submitters would probably even have received useful feedback, instead of snide remarks.

Solution 2: Rate the raters

SIGCSE is not immune from the problem of crank or slapdash reviews. We can all point to nonsensical reviews we have gotten (e.g. “This paper makes no sense, because 3-coloring a graph is easy”). This is why SIGCSE uses its core volunteers not as reviewers, but as meta-reviewers. After the reviews come in, the meta-reviewers summarize the reviews, throwing out reviews that are clearly nonsensical, or clearly biased one direction or the other. (Note, however, that the author still sees all the reviews, even those that the meta-reviewers discard. This keeps the process transparent.)

Reviewers that routinely produce useless results are not asked back. IGF judges that don’t play the games should not be asked back.

Conclusion, and bonus round

IGF should be run like a conference, not a juried art show. The purpose of this is not to take the best and brightest game designers and give them a ton of drudge work. The goal is to identify and recognize great indie games, and democratizing the review process would help this, not hinder it.

As an added bonus, following these procedures would give indie developers what they crave most: Players and feedback. The fight I attempted to pick with Ian Schreiber was about the lack of feedback on rejected GDC session proposals. If you provide no feedback, you will continue to see worthless submissions year after year, because there is no mechanism for submitters to improve their work. A SIGCSE submission gets about 1500 words of reviewer feedback. How much useful feedback does a rejected GDC or IGF submission get?


Thanks to Ben Sironko and Denver Coulson for their helpful discussions on this post.

4 Responses to “Indie Games Are Less Democratic Than The Ivory Tower”

  1. One note, to be fair, when comparing IGF to the Ivory Tower: Most academic journals and conferences have more paperwork in their review process. Filling out lengthy comments that score a submission on a dozen criteria, and then writing a few pages to justify your scores – as an absolute requirement – is not uncommon. When spending so much time with a single submission, one would expect each judge to be given relatively few items.

    In this context, the original Rotting Cartridge post has it wrong. The author of that piece claims that “not having enough reviewers” is not the problem, pointing to the low perceived quality of reviews received. But I think Bo makes a great point here: By comparing reviews-per-reviewer, it’s clear that not having enough reviewers IS in fact the root issue. As Rotting Cartridge noted, feedback from reviewers is cursory in most cases, and written feedback is in fact optional. When you’ve got this many games to judge, you’re going to be encouraged to make snap judgments. (It’s much the same issue as professional game reviewers who are given a large stack of games and told to write them up in a few days. You can bet most of those games won’t get much attention, because there are deadlines to meet.)

    Mathematically, then, there are several solutions here:

    1. Reduce the number of games to be judged. This could mean some kind of up-front pre-selection process (such as the democratized judging you mention here) or raising the barrier to apply (higher fees, longer forms, immediate elimination if you leave out a little detail like “oh, our game is actually iOS but we put PC by mistake”, or other draconian measures). I’m not saying this is optimal, mind you, just putting all options on the table.

    2. Increase the number of judges, which reduces number of games-per-judge. If I had 3 games to play instead of 16, you can bet I’d give them a bit more thorough of an analysis. This could mean the democratized judging, or just accepting a lot more judges into the current fold. (It’s harder to call this “elite” if the ranks grow by 4x next year, no?) I know I’ve sent in several recommendations for people I know who would make good judges… if others did the same (i.e. if Brandon put out a loud call for judges to PLEASE send him names, or even put out a public call) the numbers could grow considerably.

    Also, two factual things of note you should factor into your analysis:

    First, the ratings are in fact already democratized by one layer. Standard judges (like me) just send in feedback for games and give them nominations. A separate group of meta-judges reviews the nominated games and selects finalists from there. So really what you are proposing is a second tier of the same thing, just wider. So you have a large pool of raters that selects a majority of games to eliminate on the first cut, judges get to play a few of the remaining ones more thoroughly and nominate, and then meta-judges pick the finalists…

    Second, your figure of 30 games reviewed per judge is a little misleading out of context. Speaking for myself, I only received 16 games on my list of “must judge”, so I’ll go out on a limb and assume the same was true of other judges. The fact that the ACTUAL number of reviews was nearly twice that suggests the average judge rates almost 2x as many games as they’re given, with the rest being purely voluntary! That tells quite a different story from the Rotting Cartridge article…

  2. Bo Brinkman says:

    Thanks for your response, Ian; it has made my day.

    I agree that the IGF method (judges + jurors) is already the two-tier format that I recommend, except that the number of judges is too small for the number of submissions. I see your point that the jurors take the place of the meta-reviewers, so there isn’t a need for meta-reviewers in the IGF system.

    And I quite agree that if the number of judges went up by a factor of 4 or 6 next year it would defeat the elitism complaints.

  3. Ben Sironko says:

    I think increasing the number of judges via a democratized system would do wonders for the system. Not only would it handle the attention problem but I think it would also handle the relatively ignored problem of homogenized taste in the crowd that judges the IGF (“I love Super Crate Box/Fez/Other Hip, Colorful, Chiptune-Filled Indie Game and if this game is not like Super Crate Box/Fez/Other Hip, Colorful, Chiptune-Filled Indie Game it’s probably bad.”) by allowing other interests to at least have a say in the nomination process.

  4. Ben Sironko says:

    I also love Super Crate Box so I’m not saying that those games don’t deserve their acclaim, I just think other games get overlooked because of some internalized understanding of what constitutes “quality” in indie games.
