Reviewing Guidelines

Reviewers are asked to help assess submissions in a way that both maximizes the quality of the ICONS conference and provides clear insight into the review process.

Any subreviewers must be formally assigned in EasyChair.  The principal reviewer is responsible for follow-up questions and/or re-reviews that may be requested by the program committee chairs.

Unpublished manuscripts under review should not be submitted to generative AI systems during the review process.

To ensure fair review assignments, all members of the ICONS Program Committee are required to register conflicts of interest in EasyChair before review assignments are finalized. Potential conflicts of interest involve professional or personal relationships (e.g., self-authorship, recent collaborations, advisor/advisee connections, shared institutional affiliations) that could compromise a reviewer's impartiality toward a particular paper.

Authors will be invited to provide a rebuttal of their reviews to the program committee chairs, but there will be no opportunity to submit a substantially modified revision. As such, please review the manuscript as it stands, not as it could be after revision.

A review will consist of the following responses:

  1. Summary — Please recap the paper as you understand it.  Please include the main idea and the authors’ primary claims.
  2. Strengths — Please list the positive aspects of the submission.
  3. Weaknesses — Please list aspects of the submission that could be improved.
  4. Questions — Please list, as separate items, any questions you have that do not carry a positive or negative connotation.

Additionally, reviewers will rate each paper on the following criteria, compared to all papers in the field regardless of format:

  1. Significance (one of the least significant, less significant than average, average significance, more significant than average, one of the most significant)
  2. Novelty (one of the least novel, less novel than average, average novelty, more novel than average, one of the most novel)
  3. Scientific Validity (major concerns on scientific validity, some concerns on scientific validity, average scientific validity, robust scientific validity, extremely robust scientific validity)

Lastly, reviewers will provide answers to the following:

  1. Is the submission relevant and interesting to the audience of ICONS? (yes/no)
  2. Does the submission uphold manuscript standards in English, presentation, and length? (yes/no)
  3. Review Confidence (not confident, confident, exceptionally confident)
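
The rating scales above are ordinal and the remaining answers are categorical, but the scoring step in the Review Process below combines them arithmetically, which implies some numeric encoding. These guidelines do not specify one; the sketch below assumes a 1-5 mapping for the ranked criteria and a 0/1 mapping for the yes/no answers, purely for illustration.

```python
# Hypothetical numeric encoding of the review scales. These guidelines do not
# specify values; 1-5 and 0/1 are assumptions chosen purely for illustration.

SIGNIFICANCE = {
    "one of the least significant": 1,
    "less significant than average": 2,
    "average significance": 3,
    "more significant than average": 4,
    "one of the most significant": 5,
}
# Novelty and Scientific Validity follow the same 1-5 pattern over their own labels.

# Yes/no answers encoded as 1/0 so they can act as multiplicative gates in the
# PaperScore formula given in the Review Process section below.
RELEVANCE = {"yes": 1, "no": 0}
MANUSCRIPT_STANDARDS = {"yes": 1, "no": 0}
```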

Papers submitted to ICONS should be reviewed based on the following three criteria:

  • Significance: This refers to the importance and relevance of the research within the field of neuromorphic computing. Key questions include: Does the paper address a significant or challenging problem in neuromorphic computing? Will this work influence or inspire new research directions? Does it have potential for real-world applications or technological advancements? A paper of high significance would address a critical problem, offer major contributions, or open new research directions. A paper of low significance would address a trivial problem, make minimal/incremental contributions, or have low relevance to neuromorphic computing.
  • Novelty: This refers to the originality of the work compared to prior research. Key questions include: Does the paper present genuinely new concepts, methods, or findings? Is its unique contribution clearly differentiated from existing work? Does it propose innovative solutions or significantly advance the state of the art? A paper having high novelty would introduce truly original ideas or breakthroughs. Low novelty papers would offer only minor, obvious, or incremental contributions.
  • Scientific Validity: This criterion covers the soundness of the scientific methodology, correctness of results, and clarity of presentation. Key questions include: Are the methods, experiments, or theories appropriate and correctly applied? Are the results accurate, well-supported, and consistently presented? Is the work sufficiently detailed for others to understand and potentially verify? Is the paper well-organized, clearly written, and easy to understand? Papers scoring high on this criterion would comprise a sound methodology, robust results, and clear, rigorous presentation. Papers scoring low on this criterion would show flaws in the scientific methodology, make unsupported claims, or have poor presentation.

Review Process

  1. Papers will be split by ‘track’ according to the best-fit high-level focus area (Systems, Algorithms, Applications, Software).  PC members will self-identify which areas they are interested in reviewing.
  2. Papers will be assigned PC members for review. (April 22, 2026)
  3. PC members deliver reviews. (May 13, 2026)
  4. An LLM provides a review of the reviews.
  5. The PC computes (see the sketch after this list):
    1. $\text{PaperScore}_i = (\text{Significance} + \text{Novelty} + \text{ScientificValidity}) \cdot \text{Relevance} \cdot \text{ManuscriptStandards}$
    2. $\text{OverallScore} = \frac{\sum_i \text{PaperScore}_i}{N}$
    3. Accept the top papers; reject the lowest papers.
    4. Flag papers for manual review under these criteria:
      1. $\operatorname{var}_i(\text{PaperScore}_i)$ (the variance of the reviewers' scores for the paper) is high: in this case, the PC chairs will compare the paper's scores against the other scores assigned by the same reviewers.
      2. The LLM review of the reviews indicates that the reviews are poor.
    5. Flagged papers are either assessed by the PC chairs or sent back for re-review.
  6. Initial decisions are sent to authors (May 20, 2026)
  7. Rebuttal period (1-2 weeks?)
  8. PC chairs assess rebuttals for papers on the decision bubble and for flagged papers (June 3, 2026)
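
To make step 5 concrete, here is a minimal sketch of the aggregation and flagging arithmetic, assuming the hypothetical 1-5 and 0/1 encodings sketched above; the Review record, field names, helper functions, and the variance threshold are illustrative assumptions rather than part of these guidelines.

```python
from dataclasses import dataclass
from statistics import mean, pvariance


@dataclass
class Review:
    """One reviewer's scores for a single paper (numeric encodings assumed)."""
    significance: int         # 1-5
    novelty: int              # 1-5
    scientific_validity: int  # 1-5
    relevant: int             # 1 = yes, 0 = no
    meets_standards: int      # 1 = yes, 0 = no


def paper_score(r: Review) -> int:
    # PaperScore_i = (Significance + Novelty + ScientificValidity)
    #                * Relevance * ManuscriptStandards
    return (
        (r.significance + r.novelty + r.scientific_validity)
        * r.relevant
        * r.meets_standards
    )


def overall_score(reviews: list[Review]) -> float:
    # OverallScore = (sum over reviewers i of PaperScore_i) / N
    return mean(paper_score(r) for r in reviews)


def high_variance(reviews: list[Review], threshold: float = 9.0) -> bool:
    # Flag criterion 5.4.1: the reviewers' scores for the paper disagree strongly.
    # The threshold is a placeholder, not a value from the guidelines.
    return pvariance([paper_score(r) for r in reviews]) > threshold


# Example: three reviews of one paper.
reviews = [
    Review(significance=4, novelty=3, scientific_validity=4, relevant=1, meets_standards=1),
    Review(significance=5, novelty=4, scientific_validity=5, relevant=1, meets_standards=1),
    Review(significance=2, novelty=2, scientific_validity=3, relevant=1, meets_standards=1),
]
print(overall_score(reviews))  # (11 + 14 + 7) / 3 ≈ 10.67
print(high_variance(reviews))  # False: variance ≈ 8.2, below the placeholder threshold
```

The accept/reject cutoffs and the LLM-based check on review quality are left out of the sketch, since the guidelines do not define them quantitatively.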