Also, please note that the content of any submission to ACL 2018, the participants in any discussion of submissions, and the content of those discussions are all confidential.
Consent to Use Reviews for Research
In coordination with the NAACL HLT 2018 program co-chairs, we plan to perform analytics on anonymized reviews and rebuttal statements, with the consent of the reviewers and authors. Our purpose is to improve the quality of the review process. The data will be compiled into a unique corpus for NLP and will be made available to the research community after appropriate anonymization checks, at the earliest two years after ACL 2018. We hope to provide data on "how to review" to younger researchers, and to improve the transparency of the reviewing process in general. By default, you agree that your anonymized review may be freely used for research purposes and published under an appropriate open-source license no earlier than two years after ACL 2018. Check "No" if you would like to opt out of the data collection.
Options: Yes / No
Appropriateness
Is this submission appropriate as a research paper for ACL 2018? ACL 2018 aims for a broad technical program. Relevant and appropriate submissions describe substantial, original, and unpublished research in any area of computational linguistics. Both empirical and theoretical results are welcome; see the Call for Papers.
Options: Appropriate as a research paper for ACL 2018 (most submissions) / Marginally appropriate: belongs in another track (long or short paper submissions, demos) or another conference / Inappropriate

Adherence to ACL 2018 Guidelines
Does the submission adhere to the ACL 2018 format and style guidelines? In particular, does it use the official style template? Is it anonymized properly? Does the main content fit within the page limit (8 pages for long papers, 4 pages for short papers)?
Options: Yes / No

Adherence to ACL Author Guidelines
Does the submission adhere to the ACL author guidelines (https://www.aclweb.org/adminwiki/index.php?title=ACL_Author_Guidelines)? Select "Yes" if you found no problem. Select "No" if you identified a problem (e.g., an online non-anonymized version uploaded or updated during the anonymity period, a suspected case of plagiarism, etc.).
Options: Yes / No

Handling of Data / Resources
Does the submission document the appropriate handling of data, software, and other resources, including licensing and citation, data protection, and research review, as appropriate? Select "N/A" if this question does not apply.
Options: N/A / Yes / No

Handling of Human Participants
If the work involves (data from) human participants, are methods for documenting informed consent and protecting participant anonymity described in the submission? Select "N/A" if this question does not apply.
Options: N/A / Yes / No
If you determined that the submission is marginally appropriate or inappropriate, or if your response to any of the above questions is "No", provide a brief justification here. If the submission is clearly inappropriate, please notify the area chair via email and skip to the end of the review form.
Summarize the paper and list the main contributions claimed for the work in this submission, ordered by strength (primary contributions first).
Describe the contributions of this work as you see them, not as the authors see them. For example, the authors may consider their method a key contribution, whereas you may find the method lacking in novelty but consider the data and evaluation significant contributions. We recommend giving between one and three contributions.
Summary:
Contribution 1:
Contribution 2:
Contribution 3:
What are your strongest arguments supporting the acceptance of this submission?
For each argument you give, please provide detailed explanations and/or evidence supporting it, so that the area chairs can evaluate the significance of the submission. To balance thoroughness and compactness, we recommend giving between three and five arguments, ordered by importance (primary arguments first).
Strength argument 1:
Strength argument 2:
Strength argument 3:
Strength argument 4:
Strength argument 5:
What are your strongest arguments against the acceptance of this submission?
Note that the authors are expected to reply to your weakness arguments during the author response period. For each argument you give, please provide, where applicable, detailed explanations and/or evidence supporting it, so that the authors can respond. To balance thoroughness and compactness, we recommend giving between three and five arguments, ordered by importance (primary arguments first).
Weakness argument 1:
Weakness argument 2:
Weakness argument 3:
Weakness argument 4:
Weakness argument 5:
Ask any questions that must be addressed during the author response period to clarify the contributions of this submission. Minor questions that need not be answered in the author response should go in "Additional Comments" below. List as many questions as you think necessary.
Question 1:
Question 2:
Question 3:
NLP Tasks / Applications
Do the contributions of the submission include NLP tasks or applications? If so, how interesting or impactful might these tasks/applications be to the ACL community? A new NLP task might be a computational approach to a previously unstudied linguistic phenomenon. A new NLP application is a previously undescribed way to use NLP.
Options: N/A / Marginal contribution / Moderate contribution / Strong contribution

Methods / Algorithms
Do the contributions of the submission include original methods or algorithms for an existing or new task or application? If so, how useful might the methods/algorithms be to the ACL community?
Options: N/A / Marginal contribution / Moderate contribution / Strong contribution

Theoretical / Algorithmic Results
Do the contributions of the submission include theoretical or algorithmic results? If so, how interesting or impactful might these theoretical/algorithmic results be to the ACL community? A theoretical result may be fundamentally linguistic (e.g., a description or critique of an approach to syntax) or fundamentally computational (e.g., bounds on the performance of a method). An algorithmic result may be a formalization of a machine learning or other algorithm pertinent to NLP.
Options: N/A / Marginal contribution / Moderate contribution / Strong contribution

Empirical Results
Do the contributions of the submission include empirical results? If so, how interesting or impactful might these empirical results be to the ACL community? An empirical result may include a corpus study, an evaluation, or a controlled experiment done to test a hypothesis (including negative results). Presenting state-of-the-art performance is not necessarily a sufficient contribution as an empirical result. For submissions presenting empirical results, the authors should clearly describe the hypothesis or hypotheses being tested and provide proper methods and analyses for testing them.
Options: N/A / Marginal contribution / Moderate contribution / Strong contribution

Data / Resources
Do the contributions of the submission include data sets or resources? If so, how useful might the data/resources be to the ACL community? A data set or resource may include a new corpus, new annotations on an existing corpus, a new knowledge base, a new language resource, etc. The data set or resource need not necessarily be one provided to the research community (for example, if it is proprietary or contains private data), although to the extent possible, researchers are encouraged to share data and resources in the interests of reproducible science.
Options: N/A / Marginal contribution / Moderate contribution / Strong contribution

Software / Systems
Do the contributions of this work include software or systems? If so, how useful might the software/systems be to the ACL community? The software or system need not necessarily be provided with the submission (for example, if it is proprietary), although to the extent possible, researchers are encouraged to share software and systems in the interests of reproducible science.
Options: N/A / Marginal contribution / Moderate contribution / Strong contribution

Evaluation Methods / Metrics
Do the contributions of this work include evaluation methods or metrics? If so, how useful might the evaluation methods/metrics be to the ACL community? If this is a contribution of the submission, it should be thoroughly motivated and described in the submission, and if possible a reference implementation should be provided.
Options: N/A / Marginal contribution / Moderate contribution / Strong contribution

Other Contributions
Do the contributions of this work include anything that cannot be classified into the above types? Examples of this type include, but are not limited to, survey papers, replication studies, and opinion pieces. If so, how interesting or impactful might this contribution be to the ACL community?
Options: N/A / Marginal contribution / Moderate contribution / Strong contribution
Originality (1-5)
Considering your responses to the questions above, rate the originality of the work described in the submission.
5 = Innovative: Highly original and significant new research topic, technique, methodology, or insight.
4 = Creative: An intriguing problem, technique, or approach that is substantially different from previous research.
3 = Respectable: A nice research contribution that represents a notable extension of prior approaches or methodologies.
2 = Uninspiring: Obvious, or a minor improvement on familiar techniques.
1 = Significant portions have actually been done before or done better.

Soundness/Correctness (1-5)
Considering your responses to the questions above, rate the soundness of the work described in the submission.
5 = The approach is sound, and the claims are convincingly supported.
4 = Generally solid, but there are some aspects of the approach or evaluation I am not sure about.
3 = Fairly reasonable, but the main claims cannot be accepted based on the material provided.
2 = Troublesome. Some interesting ideas, but the work needs better justification or evaluation.
1 = Fatally flawed.

Substance (1-5)
Considering your responses to the questions above, rate the completeness and substance of the work described in the submission.
5 = Contains more ideas or results than most publications of this length at ACL.
4 = Represents an appropriate amount of content for an ACL paper of this length (most submissions).
3 = Leaves open one or two natural questions that could have been pursued within the paper.
2 = Work in progress. There are enough good ideas, but perhaps not enough results yet.
1 = Seems thin. Not enough ideas here.

Replicability (1-5)
Considering your responses to the questions above, rate the reproducibility of the work described in the submission. Members of the ACL community...
5 = could easily reproduce the results and verify their correctness. Useful supporting data sets and/or software were provided.
4 = could mostly reproduce the results described here, perhaps by substituting public data for proprietary data.
3 = could possibly reproduce the results described here with some difficulty. The parameter settings are underspecified or very subjectively determined.
2 = could not reproduce the results described here no matter how hard they tried.
1 = not applicable (please use this very sparingly, such as for opinion pieces or applications).

Meaningful Comparison (1-5)
Does the discussion of related and prior work motivate and support the main claims of the submission in an appropriate scholarly manner? Is it complete? Does the discussion of related and prior work adhere to the ACL author guidelines on citation (https://www.aclweb.org/adminwiki/index.php?title=ACL_Author_Guidelines)? If you feel references are incomplete, be sure to include the relevant references in your comments.
5 = Comparison to prior work is superbly carried out given the space constraints.
4 = Comparisons are mostly solid, but there are some missing references.
3 = Comparisons are weak; it is very hard to determine how the work compares to previous work.
2 = Only partial awareness or understanding of related work, or a flawed empirical comparison.
1 = Little awareness of related work, or lacks the necessary empirical comparison.

Readability (1-5)
For a reasonably well-prepared reader, is it clear what was done and why? Is the paper well written and well structured?
5 = Very clear.
4 = Understandable by most readers.
3 = Mostly understandable with some effort.
2 = Important questions were hard to resolve even with effort.
1 = Much of the paper is confusing.
Overall Score (1-6)
Based on your review of this submission, should it be accepted to ACL 2018? In deciding on your ultimate recommendation, please consider all of your responses above. We want a conference full of creative, original, sound, and timely work. Prefer work that is inventive and will stimulate new approaches over work that is solid but incremental. Remember also that the authors have about a month to address reviewer comments before the camera-ready deadline.
6 = Transformative: This paper is likely to change our field. Give this score only exceptionally, for papers worth best paper consideration.
5 = Exciting: The work presented in this submission includes original, creative contributions, the methods are solid, and the paper is well written.
4 = Interesting: The work described in this submission is original and basically sound, but there are a few problems with the method or the paper.
3 = Uninspiring: The work in this submission lacks creativity, originality, or insight. I'm ambivalent about this one.
2 = Borderline: This submission has some merits, but there are significant issues with respect to originality, soundness, replicability, substance, readability, etc.
1 = Poor: I cannot find any reason for this submission to be accepted.

Reviewer Confidence (1-5)
How confident are you about your review?
5 = Positive that my evaluation is correct. I read the paper very carefully and I am very familiar with related work.
4 = Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings.
3 = Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty.
2 = Willing to defend my evaluation, but it is fairly likely that I missed some details, didn't understand some central points, or can't be sure about the novelty of the work.
1 = Not my area, or the paper was hard for me to understand. My evaluation is just an educated guess.

Request a Meta-review
We aim for a balance at ACL between papers that meet our high standards (but are sometimes incremental) and potentially influential work, even if it has flaws. If you consider this work potentially influential and would like this paper to be presented at ACL despite its flaws, please check "Yes". A meta-reviewer will then evaluate it for possible inclusion in the program even if it does not meet all of the standard reviewing criteria.
Options: No / Yes

Presentation Format
Papers at ACL can be presented either as posters or as oral presentations, depending on what is most likely to convey their ideas to the audience. If this paper were selected for presentation, which form of presentation would you find more appropriate? Note that decisions as to which papers will be presented orally and which as posters will be based on the nature rather than the quality of the work. There will be no distinction in the proceedings between papers presented orally and those presented as posters.
Options: Unsure / Poster / Oral

Recommendation for Best Paper Consideration
Choose "Yes" to indicate that this paper is likely to be a "top 5%" paper at ACL 2018 (i.e., in the top 1.25% of all submissions).
Options: No / Yes

Reviewer Guidelines
By checking "Yes", you certify that you have followed the ACL reviewer guidelines (https://www.aclweb.org/adminwiki/index.php?title=ACL_Reviewer_Guidelines).
Options: No / Yes
Provide any additional comments to augment your review, e.g., minor questions, constructive feedback, suggestions for extending the work, or editorial comments.
Provide any confidential comments to the program committee here (these will not be shown to the authors).
Authors' Identity
Do you think you could identify the authors of this paper? If so, how? How did it affect your reviewing?
Options: I have no idea who the authors are / I could guess, but the paper itself does not communicate who the authors are / I know who the authors are (because I have seen this work as a preprint, etc., or the paper reveals it)
If you think you could identify the authors, how were you able to do so? How did it affect your review?
The reason why you could identify the authors:
How your knowledge of the authors affected your review:
Author names you identified: