Our team has identified two important issues with the proposed review process, specifically the requirement to review the submissions of other competing teams.
From the mailing list:
== Review Process ==
For the first time, teams who are applying to participate in the Humanoid Soccer Competition will be asked to peer-review the submission of two other teams from within their own sub-league. Every team will thus receive five reviews in total this year: Two reviews from other teams, two from Technical Committee members and one meta-review.
Participation in the peer-review process is obligatory and the failure to provide the required reviews on time will be negatively considered in the team's own application. You will receive more information about this process during the application process.
The first, lesser issue is that teams come to RoboCup to compete; organisation-related duties, such as being involved in qualifying other teams, should fall outside our scope of responsibility.
The second, and far more serious, issue is a conflict of interest. Who is to say that a reviewing team will be objective when judging the submissions of its direct competitors? It is easy to imagine situations in which a team's own experience leads it to under- or over-estimate another team's capabilities, resulting in that team being wrongly qualified or wrongly rejected. We understand that the TC still has the final say - but precisely because of this, it is not hard to imagine the TC completely overriding a unanimous decision of the reviewers for "other" reasons, which would defeat the purpose of a distributed review process in the first place.
We can imagine that this rule proposal is intended to mimic the peer-review process of scientific papers. There is, however, a large difference: this is a competition, and reviewers have a direct stake in the outcome. We would therefore like to propose removing this requirement entirely, as even voluntary reviews can be biased.