The recent IGF kerfuffle has reinforced my view that the judging of submissions to games conferences is broken. (I tried to pick a fight on this topic with Ian Schreiber back at the GDC deadline, but he is a much nicer and more gracious person than I am, and I didn’t get anywhere.) I think there is a straightforward fix for this: Clone the review process of SIGCSE, the computer science education conference.
Is it really broken?
Let’s compare some statistics about IGF and SIGCSE:
|                        | IGF  | SIGCSE |
|------------------------|------|--------|
| Reviews per submission | 8    | 6.6    |
| Reviews per judge      | 30.4 | 2.35   |
This analysis is based on data from the IGF 2012 press release and the SIGCSE 2012 program. My conclusion is that IGF judges are asked to do an impossible task. Many entrants claim that IGF judges don’t actually play the games, and this seems totally unsurprising to me.
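The arithmetic behind these ratios is a simple conservation argument: every review is written by exactly one judge. A minimal sketch (the function name is mine, and the worked numbers are derived from the ratios in the table above, not from raw entrant counts):

```python
# Every review is written by exactly one judge, so total reviews can be
# counted either way:
#   reviews_per_judge = reviews_per_submission * (submissions / judges)
def reviews_per_judge(reviews_per_submission, submissions, judges):
    return reviews_per_submission * submissions / judges

# Working backward from the table's ratios: each IGF judge covers
# 30.4 / 8 = 3.8 submissions, while SIGCSE fields roughly
# 6.6 / 2.35 = 2.8 judges per submission.
igf_submissions_per_judge = 30.4 / 8
sigcse_judges_per_submission = 6.6 / 2.35
```

The per-judge load blows up whenever submissions outnumber judges, which is exactly the asymmetry the table shows.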
(One note before we move on to recommendations: I’m only talking about the first round of judging, the nominating process. With the IGF, the nominated games are evaluated by jurors. At computer science conferences, the best paper and best poster awards are selected by the program committee from the accepted papers. So the process of selecting the best entries, once the “good” entries are identified, is already similar.)
Not a shadowy elite
In his response to the Rotting Cartridge post, IGF organizer Brandon Boyer said that the IGF judging is not run by a shadowy elite who conspire to promote their own. I believe him. They are an accidental elite. That they are an elite is unarguable: If the ratio of submitters to deciders is worse in the IGF than in the Ivory Tower, what else can we call it?
Solution 1: All submitters are judges
If you are an IGF person who has never been to a computer science conference, you might wonder how SIGCSE manages to find 800 volunteer judges, year after year. The answer is pretty simple: Almost everyone who submits something gets asked to judge. And, year after year, they tend to get asked back to judge, even if they do not submit again. And we (prospective judges) generally say yes because the burden is low (2 reviews? Sure, no sweat!), and because we want to pay it forward. We remember how we felt when we saw the 7 reviews of our submission, and how much we appreciated the helpful feedback (and hated the 1 or 2 crank reviews). We feel motivated to do a good job.
I have two students who submitted to IGF (one to the student competition, and one via the Pirate Kart), and they would have been excellent judges. If they had been given 2 or 3 games to judge, they would have played the games thoroughly and thoughtfully. The submitters would probably even have received useful feedback, instead of snide remarks.
Solution 2: Rate the raters
SIGCSE is not immune from the problem of crank or slapdash reviews. We can all point to nonsensical reviews we have gotten (e.g. “This paper makes no sense, because 3-coloring a graph is easy”). This is why SIGCSE uses its core volunteers not as reviewers, but as meta-reviewers. After the reviews come in, the meta-reviewers summarize the reviews, throwing out reviews that are clearly nonsensical, or clearly biased one direction or the other. (Note, however, that the author still sees all the reviews, even those that the meta-reviewers discard. This keeps the process transparent.)
Reviewers who routinely produce useless reviews are not asked back. IGF judges who don’t play the games should not be asked back.
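Put together, the meta-review pass and the rate-the-raters rule could look something like this sketch (a hypothetical model, not SIGCSE’s actual tooling; all names are mine): meta-reviewers flag useless reviews, the summary is built from the rest, the author still sees everything, and habitually flagged reviewers are not invited back.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Review:
    reviewer: str
    text: str
    flagged: bool = False  # set by a meta-reviewer for nonsensical/biased reviews

def summarize(reviews):
    """Base the summary on unflagged reviews, but keep every review visible
    to the author -- transparency is what keeps the process honest."""
    kept = [r for r in reviews if not r.flagged]
    return kept, reviews  # (basis for the meta-review, what the author sees)

def invite_back(all_reviews, max_flags=1):
    """Reviewers flagged more than max_flags times are not asked back."""
    flags = Counter(r.reviewer for r in all_reviews if r.flagged)
    reviewers = {r.reviewer for r in all_reviews}
    return {name for name in reviewers if flags[name] <= max_flags}
```

The key design choice is that discarding a review for summary purposes never hides it from the author.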
Conclusion, and bonus round
IGF should be run like a conference, not a juried art show. The purpose of this is not to take the best and brightest game designers and give them a ton of drudge work. The goal is to identify and recognize great indie games, and democratizing the review process would help this, not hinder it.
As an added bonus, following these procedures would give indie developers what they crave most: Players and feedback. The fight I attempted to pick with Ian Schreiber was about the lack of feedback on rejected GDC session proposals. If you provide no feedback, you will continue to see worthless submissions year after year, because there is no mechanism for submitters to improve their work. A SIGCSE submission gets about 1500 words of reviewer feedback. How much useful feedback does a rejected GDC or IGF submission get?
Thanks to Ben Sironko and Denver Coulson for their helpful discussions on this post.