This quarter, our Community column is dedicated to the review process of ACM Multimedia (MM). We summarize discussions that arose at various points in time after the first round of reviews was returned to authors.
The core of the discussion focused on how to improve review quality for ACM MM. Some participants pointed out that there have been complaints about the level and usefulness of some reviews in recent editions of ACM Multimedia. The members of our discussion forums (Facebook and LinkedIn) proposed several solutions.
Semi-automated paper assignment. Participants debated the best way of assigning papers to reviewers. Some suggested that automated assignment, e.g. using the Toronto Paper Matching System (TPMS), helps reduce bias at scale: this year MM followed the review model of CVPR, which handled 1,000+ submissions and their peer reviews. Other participants observed that automated assignment systems often fail to match papers with the right reviewers. This is mainly due to the diversity of the Multimedia field: even within a single area, there is great diversity in expertise and methodologies. Some participants advocated a two-step solution: (1) a bidding period in which reviewers choose their preferred papers based on their areas of expertise, or, alternatively, an automated assignment step; (2) an “expert assignment” period in which, based on the previous choices, Area Chairs select the right people for each paper: a reviewer pool with relevant, complementary expertise.
The authors’ advocate. Most participants agreed that the role of the authors’ advocate is crucial for a fair reviewing process, especially for a community as diverse as Multimedia, and that an authors’ advocate should be provided in all tracks.
Non-anonymity among reviewers. It was observed that revealing the identity of reviewers to the other members of the program committee (e.g. Area Chairs and other reviewers) could encourage responsiveness and commitment during the review and discussion periods.
Quality over quantity. It was pointed out that increasing the number of reviews per paper is not always the right solution: it adds to reviewers’ workload, potentially decreasing the quality of their reviews.
Less frequent changes in the review process. A few participants discussed the frequency of changes in the ACM MM review process. In recent years, the conference organizers have tried different review formats, often inspired by other communities. It was observed that this lack of continuity might not allow enough time to evaluate the success of a format, or to measure the quality of the conference overall. Moreover, changes should be communicated to authors and reviewers well before being implemented, and repeatedly, because people tend to overlook such announcements.
This debate led to a higher-level discussion about the identity of the MM community. Some participants interpreted these frequent changes in the review process as a kind of identity crisis. It was proposed to use empirical evidence (e.g. a community survey) to analyse what the MM community actually is and how it should evaluate itself. The risk of becoming a second-tier conference to CVPR was brought up: not only do authors submit papers rejected from CVPR to MM, but reviewers at times assume that MM papers must be reviewed as if they were CVPR papers, potentially losing many interesting papers for the conference.
We would like to thank all participants for their time and precious thoughts. As a next step for this column, we might consider running short surveys on specific topics, including the ones discussed in this issue of the SIGMM Records opinion column.
We hope this column will foster fruitful discussions during the conference, which will be held in Seoul, Korea, on 22-26 October 2018.