Around this time of year (starting with the rebuttal phase and then again when notifications are out) we have heated discussions about the review process and the quality of our main ACM SIGCHI conference. Much of the discussion can be summarized as: ACM SIGCHI is a lottery, I got the wrong reviewers, they did not carefully read my paper, this should have been accepted, and the committee is too dumb to see it. I strongly disagree with these statements (and I am probably one of the people with the most CHI rejects in recent years), but if we as a community feel this way, we have to change the process.
Why increase the acceptance rate?
If we compare our prime venue with a lottery, it would mean that placing value on these publications, e.g. in hiring, is absolutely wrong. As people in the community are really proud of their publications in CHI (and I have not seen much talk about a lottery regarding the accepted ones), it seems that we simply accept too few.

Looking through my Facebook comments and at the tweets during the rebuttal phase, many well-established colleagues were surprised that their best papers did not get good reviews. So people with the highest esteem and highest peer recognition (CHI Academy, former chairs, …) are not able to tell whether a paper they consider perfect will get in or not. They cannot judge the quality of their own work with regard to the CHI review process. If this is the case, we are probably too selective and reject good work, hence my proposal to increase the acceptance rate significantly, e.g. to 50%. The same people complaining about their paper not getting in rant at the conference about the low quality of papers and argue that we need to be more selective. Is this another filter bubble effect?
By increasing the acceptance rate to 50% we would make the conference predictable. It would no longer depend on the SC, ACs, 2nd ACs, and reviewers whether you get in or not; reasonable papers would be very likely to get in. It would also decrease the value of the publication as such and put the focus back on its content. Another positive effect I would expect is that people would stop gambling by salami-slicing their work, as they could be fairly sure that a solid paper gets in, and the value of a paper would be defined by its content rather than by the mere fact that it was accepted.
Why enforce long discussions?
I am pretty sure what the answer to the following question to ACs would be: if you could make only one trip, would you go to the program committee meeting or to the actual conference? We did not ask this question in 2014, but I asked many people what value they see in the physical PC meeting. The answers were very clear: an overview of the many contributions made, expert discussion of the contributions, especially the controversial ones, and getting a feel for what the community values.

In contrast, going to CHI 2016, UBICOMP 2016, and UIST 2016, I felt that the discussions were most often non-existent or even embarrassing. At our prime venues we do not discuss the results that are presented. We do not engage with why people took a certain approach, we do not discuss the implications of the work, we do not discuss alternative approaches, we do not discuss limitations, and we do not discuss what value a contribution has for the community. Such discussions happen in other fields (I recently presented at a social science event and the discussion was very challenging but insightful). We have lost the discussion culture at our conferences.
By enforcing discussions (e.g. having the AC and 2AC present at the presentation) we would learn much more from the work people present, and I would expect that some people would even be more careful about what they publish. If we do not challenge the research in discussions, we keep everyone in their bubble, with no real idea of how the wider community receives and sees their work. My suggestion would be to have talks of 15 minutes followed by 15 minutes of discussion. If no discussion happens, the paper cannot be published. If the discussion reveals that the paper reports major findings or even delivers a breakthrough, the authors should be encouraged to extend it for a journal. We could even imagine that people present at the discussion can non-anonymously endorse or oppose the paper in the ACM DL.
Perception of Human-Computer Interaction by others
I have included a few prototypical comments below that illustrate how others view the way we publish. I have not received exactly these, but I have heard similar remarks from colleagues in other fields:
- “Interesting, you just present the work and there is no discussion? Maybe that is useful in your field, as you can present more in the same time.”
- “It is exciting that you follow an agile publication strategy in your field. Each web questionnaire, prototype, pre-study, and study is presented in a separate publication in your top venue. How do you keep track of this?”
CHI as a benchmark is not enough
Whenever the discussion comes up that we need to reform the CHI paper process, people argue: this is essential in the US tenure-track process and hence we cannot change it. And the proposed solution is to create another (minor) venue that removes some of the competition for the “serious” researchers. It seems CHI has managed to become a great benchmark for hiring and tenure, but its function as a publication venue may be at risk, especially if people talk about the “CHI lottery”. As I said, I disagree with this observation, but I think we should make an effort to:
- Make the conference a predictable publication venue (e.g. senior people in the field would be able to place their own submission reliably into one of three categories: clear reject, borderline, clear accept)
- Enforce discussion at the conference to prevent researchers from living in their bubble (where they and their friends believe the work is good, but the community at large thinks it is not).