Concerns Regarding the Effectiveness of Peer Review in Crunch 3

Hi,

I would appreciate it if you could provide a rationale for why the peer review evaluation in Crunch 3 should be considered reliable and effective. I have two primary concerns:

  1. Requirement for Expert-Level Knowledge: The peer review process appears to assume that participants possess expert-level knowledge, particularly in the development of gene panels and the related algorithmic processes. Given that Crunch 3 is open to a wide range of participants, many of whom may not have specialized expertise in these areas, it seems unrealistic to expect that all reviewers will have the necessary background to evaluate the work accurately. How can we ensure that non-experts are able to provide meaningful feedback, especially when the tasks demand a deep understanding of technical and scientific concepts?
  2. Conflict of Interest: There seems to be a built-in conflict of interest in the peer review scoring system. If participants are incentivized to rate others highly to receive positive feedback themselves, there may be little motivation to provide honest or critical assessments. This creates a potential for inflated scores that do not accurately reflect the quality of the work. Additionally, if ranking in the system is directly affected by peer review scores, what prevents participants from strategically downgrading others’ work to improve their own ranking?

Very well put, @many-kalin. We have similar doubts about the peer review component and are wondering whether there is a quantitative way to rank submissions.

I think those are good points, but most participants in the latest milestone already have some background or familiarity with this field. Incorporating peer review alongside the evaluation of top discriminative genes would complement the methodology by adding qualitative insight rather than undermining it.

@raghvendramall My understanding is that they are going to use a quantitative, objective approach to rank submissions.

Quoting from the Broad 3 page:

“Classification Accuracy: We’ll use your top 50 genes to train a model that distinguishes between dysplasia and noncancerous mucosa. The better your genes help the model correctly identify these regions, the higher your accuracy score will be. This is the main factor in determining your ranking.”
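To make that quoted scheme concrete, here is a minimal sketch of what "score a gene panel by classification accuracy" could look like. Everything in it is an assumption for illustration: the data is synthetic, and a simple nearest-centroid classifier stands in for whatever model the organizers actually train; the official pipeline, data, and model are not specified in this thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic expression matrix: 200 spots x 500 genes, with binary labels
# (1 = dysplasia, 0 = noncancerous mucosa). Purely illustrative data.
n_spots, n_genes = 200, 500
X = rng.normal(size=(n_spots, n_genes))
y = rng.integers(0, 2, size=n_spots)
# Make the first 50 genes weakly informative so panel choice matters.
X[y == 1, :50] += 1.0

def accuracy_with_panel(X, y, panel, train_frac=0.7):
    """Train a nearest-centroid classifier on the selected gene panel
    and return held-out accuracy (a stand-in for the official model)."""
    n_train = int(len(y) * train_frac)
    Xp = X[:, panel]
    Xtr, ytr = Xp[:n_train], y[:n_train]
    Xte, yte = Xp[n_train:], y[n_train:]
    centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(Xte[:, None, :] - centroids[None, :, :], axis=2)
    preds = dists.argmin(axis=1)
    return (preds == yte).mean()

# A panel of truly informative genes vs. a panel of uninformative ones.
informative_panel = np.arange(50)
random_panel = rng.choice(np.arange(50, n_genes), size=50, replace=False)
print(accuracy_with_panel(X, y, informative_panel))
print(accuracy_with_panel(X, y, random_panel))
```

The point of the sketch is simply that an accuracy-based ranking like the one quoted is objective and reproducible: a panel of genuinely discriminative genes yields higher held-out accuracy than an arbitrary panel, independent of any reviewer's opinion.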