Review Process

Our team has identified two important issues with the review process, mainly concerning the review of other competitors' submissions.

From the mailing list:

== Review Process ==
For the first time, teams who are applying to participate in the Humanoid Soccer Competition will be asked to peer-review the submission of two other teams from within their own sub-league. Every team will thus receive five reviews in total this year: Two reviews from other teams, two from Technical Committee members and one meta-review.

Participation in the peer-review process is obligatory, and failure to provide the required reviews on time will be considered negatively in the team's own application. You will receive more information about this during the application process.

The first, lesser issue is simply that teams come to RoboCup to compete; organisation-related duties, such as being involved in qualifying other teams, should be outside our scope of responsibilities.

The second, and much bigger, issue is a conflict of interest. Who is to say which teams are qualified to judge other teams' submissions, and will do so objectively? In such a situation it is not hard to end up with conflicting outcomes, where a team's experience leads it to under- or over-estimate another team's capabilities, which might result in that team wrongly qualifying or failing to qualify. We understand that the TC still has the final say - and for that very reason, it is not hard to imagine the TC completely overriding a unanimous decision of the reviewers due to “other” reasons, which also defeats the purpose of this distributed review process.

We can imagine that this rule proposal is meant to mimic the peer-review process of scientific papers; there is, however, quite a large difference, as this is a competition. We would like to propose removing this proposal entirely, as even voluntary reviews can be biased.

Best Regards,
Team NimbRo

Dear Team NimbRo,

First, regarding the lesser issue: RoboCup is a competition where the involvement of the participants at every stage is crucial. One of the aims of this process is to share the burden of reviewing. With the current implementation, each team will need to carry out two reviews. Reviewing two TDPs should definitely be manageable for a whole team.

Regarding the conflict of interest, this is part of the reason why we split the reviewers: half from the TC and half from the teams. Of course, reviewing is always subjective, and we are all aware that very different opinions can come up for the same application (even in the peer review of scientific papers).

First, teams will be able to declare conflicts of interest to avoid reviewing certain other teams.
Second, if we feel that a team is intentionally under- or over-estimating the results, we will definitely discuss this with that team.
Third, contentious cases will be examined by the TC (e.g. high variance among reviews, or borderline cases between qualified and non-qualified).

If the TC overrides a decision from the teams, that does not remove the point of the distributed review process: it will still bring more heterogeneity into the comments, and all the unanimous cases will have been settled by members of the league.

Finally, if some teams distrust the TC/EC, I would welcome them to express their concerns to the trustees and to request a major change in the composition of this team. However, in that case, I do sincerely hope that the teams who complain about the actions of the TC/EC will put forward candidates for the next election.

Best regards,

Ludovic Hofer

I would like to highlight that RoboCup, like many other organizations, conferences and journals, ultimately relies on the involvement of the scientific community. For example, I don’t think many journals or conferences exist that rely solely on elected committee members to handle the entire review process. In the past years, we have seen a couple of problems with this approach. First and foremost, due to the sheer number of submissions and a TC of currently only 7 members, it is not possible to provide high-quality, detailed reviews for all teams. This makes it difficult for teams to improve their qualification material. By sharing the work between the TC and the community, we hope to increase the overall quality of the application process, and also to make it more transparent to the teams. In the end, we hope that everyone will benefit from this.

I think RoboCup and ordinary conferences are quite comparable. At ordinary conferences, many of the reviewers submit papers themselves, so their own papers directly compete with the ones being reviewed. And as at ordinary conferences, in case of doubt or concern the TC (or program committee) will make the final decision on whether a team is qualified. If factors other than the qualification material would override a decision (such as a forfeit in the previous tournament), those submissions are desk-rejected anyway and will not be sent out to reviewers. Thus, the reviews by each team will matter as much as the reviews by the TC members.