Eval4NLP 2023 : 4th Workshop on Evaluation and Comparison for NLP systems

Bali, Indonesia
Event Date: November 01, 2023 - November 01, 2023
Submission Deadline: August 25, 2023
Notification of Acceptance: October 02, 2023
Camera Ready Version Due: October 15, 2023




Call for Papers

The 4th Workshop on Evaluation and Comparison for NLP systems (Eval4NLP), co-located with the 2023 Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (AACL 2023), invites the submission of long and short papers, of a theoretical or experimental nature, describing recent advances in system evaluation and comparison in NLP.

** Important Dates **

All deadlines are 11:59 pm UTC-12 (“Anywhere on Earth”).

- Direct submission to Eval4NLP deadline: August 25
- Submission of pre-reviewed papers to Eval4NLP (see below for details): September 25
- Notification of acceptance: October 2
- Camera-ready papers due: October 15
- Workshop day: November 1

Please see the Call for Papers for more details [1].

** Shared Task **

This year’s edition features a shared task on explainable evaluation of generated language (MT and summarization) with a focus on LLM prompts. Please find more information on the shared task page: [2].

** Topics **

Designing evaluation metrics:
- Proposing and/or analyzing metrics with desirable properties, e.g., high correlation with human judgments (see the illustrative sketch below), strong discrimination between high-quality and mediocre or low-quality outputs, robustness across input and output lengths, efficiency, etc.
- Reference-free evaluation metrics, which only require source text(s) and system predictions
- Cross-domain metrics, which can reliably and robustly measure the quality of system outputs from heterogeneous modalities (e.g., image and speech), different genres (e.g., newspapers, Wikipedia articles, and scientific papers), and different languages
- Cost-effective methods for eliciting high-quality manual annotations
- Methods and metrics for evaluating interpretability and explanations of NLP models
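For illustration only (this example is not part of the call, and all scores in it are invented): correlation with human judgments is a common way to meta-evaluate a metric, e.g., at the segment level with Pearson, Spearman, or Kendall coefficients.

# Hypothetical sketch: correlating automatic metric scores with human
# judgments at the segment level (all numbers are made up).
from scipy.stats import pearsonr, spearmanr, kendalltau

human_scores  = [0.90, 0.75, 0.40, 0.85, 0.20, 0.60]   # e.g., direct assessment ratings
metric_scores = [0.88, 0.70, 0.55, 0.80, 0.30, 0.58]   # scores from a candidate metric

print("Pearson:  %.3f" % pearsonr(human_scores, metric_scores)[0])
print("Spearman: %.3f" % spearmanr(human_scores, metric_scores)[0])
print("Kendall:  %.3f" % kendalltau(human_scores, metric_scores)[0])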

Creating adequate evaluation data:
- Proposing new datasets or analyzing existing ones by studying their coverage and diversity, e.g., corpus size, covered phenomena, representativeness of samples, distribution of sample types, and variability among data sources, eras, and genres
- Quality of annotations, e.g., consistency of annotations, inter-rater agreement (see the illustrative sketch below), and bias checks
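Again purely illustrative (not part of the call; the labels are invented): inter-rater agreement on categorical annotations is often summarized with Cohen's kappa, for example via scikit-learn.

# Hypothetical sketch: Cohen's kappa between two annotators labeling
# the same six items (labels are made up).
from sklearn.metrics import cohen_kappa_score

annotator_a = ["good", "bad", "good", "neutral", "bad", "good"]
annotator_b = ["good", "bad", "neutral", "neutral", "bad", "good"]

print("Cohen's kappa: %.3f" % cohen_kappa_score(annotator_a, annotator_b))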

Reporting correct results:
- Ensuring and reporting statistics for the trustworthiness of results, e.g., via appropriate significance tests and by reporting score distributions rather than single-point estimates, to avoid chance findings (see the illustrative sketch below)
- Reproducibility of experiments, e.g., quantifying the reproducibility of papers and issuing reproducibility guidelines
- Comprehensive and unbiased error analyses and case studies, avoiding cherry-picking and sampling bias
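As a final illustration (not part of the call; all numbers are hypothetical): a paired bootstrap over segment-level scores reports a confidence interval for the difference between two systems rather than a single-point estimate.

# Hypothetical sketch: paired bootstrap over segment-level scores of two
# systems, reporting a 95% confidence interval for the mean difference.
import numpy as np

rng = np.random.default_rng(0)
scores_a = np.array([0.61, 0.72, 0.55, 0.80, 0.67, 0.74, 0.58, 0.69])  # system A
scores_b = np.array([0.59, 0.70, 0.57, 0.76, 0.66, 0.71, 0.60, 0.65])  # system B

diffs = []
for _ in range(10000):
    idx = rng.integers(0, len(scores_a), len(scores_a))  # resample segments with replacement
    diffs.append(np.mean(scores_a[idx] - scores_b[idx]))

low, high = np.percentile(diffs, [2.5, 97.5])
print("Mean difference: %.3f, 95%% CI: [%.3f, %.3f]"
      % (float(np.mean(scores_a - scores_b)), low, high))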

** Submission Guidelines **

The workshop welcomes two types of submission -- long and short papers. Long papers may consist of up to 8 pages of content, plus unlimited pages of references. Short papers may consist of up to 4 pages of content, plus unlimited pages of references. Please follow the ACL ARR formatting requirements, using the official templates [3]. Final versions of both submission types will be given one additional page of content for addressing reviewers’ comments. The accepted papers will appear in the workshop proceedings. The review process is double-blind. Therefore, no author information should be included in the papers or the (optional) supplementary materials, and self-references that reveal the authors’ identity must be avoided. Papers that do not conform to these requirements will be rejected without review.

** Submission Sites on OpenReview **

Standard submissions: [4]
Pre-reviewed submissions: [5]

See below for more information on the two submission modes.

** Two submission modes: standard and pre-reviewed **

Eval4NLP features two modes of submission.

Standard submissions: We invite the submission of papers that will receive up to three double-blind reviews from the Eval4NLP committee and a final verdict from the workshop chairs.

Pre-reviewed submissions: By a later deadline, we invite unpublished papers that have already been reviewed, either through ACL ARR or at recent AACL/EACL/ACL/EMNLP/COLING venues. These papers will not receive new reviews but will be judged, together with their existing reviews, via a meta-review; authors are invited to attach a note commenting on the reviews and describing possible revisions.

Final verdicts will be either accept, reject, or conditional accept, i.e., the paper is only accepted provided that specific (meta-)reviewer requirements have been met. Please also note the multiple submission policy.

** Optional Supplementary Materials **

Authors are allowed to submit optional supplementary materials (e.g., appendices, software, and data) to improve the reproducibility of results and/or to provide additional information that does not fit in the paper. All supplementary materials must be zipped into a single file (.tgz or .zip) and submitted via OpenReview together with the paper. However, because supplementary materials are completely optional, reviewers may or may not review or even download them, so the submitted paper should be fully self-contained.

** Preprints **

Papers uploaded to preprint servers (e.g., arXiv) can be submitted to the workshop, and there is no restriction on when they were made publicly available. However, the version submitted to Eval4NLP must be anonymized, and we ask authors not to update the preprints or advertise them on social media while they are under review at Eval4NLP.

** Multiple Submission Policy **

Eval4NLP allows authors to submit a paper that is under review at another venue (journal, conference, or workshop) or that will be submitted elsewhere during the Eval4NLP review period. However, authors need to withdraw the paper from all other venues if it is accepted and they want to publish it at Eval4NLP. Note that AACL and ARR do not allow double submissions. Hence, papers submitted both to the main conference and to AACL workshops (including Eval4NLP) violate the multiple submission policy of the main conference. If authors would like to submit a paper under review by AACL to the Eval4NLP workshop, they need to withdraw it from AACL and submit it to our workshop before the workshop submission deadline.

** Best Paper Awards **

We may award prizes to the best paper submissions (subject to availability; more details to come). Both long and short submissions will be eligible for prizes.

** Presenting Published Papers **

If you want to present a paper that has recently been published elsewhere (such as at other top-tier AI conferences) at our workshop, you may send the details of your paper (title, authors, publication venue, abstract, and a link to download the paper) directly to [email protected]. We will select a few high-quality, relevant papers to be presented at Eval4NLP. This gives such papers more visibility with the workshop audience and increases the variety of the workshop program. Note that the chosen papers are considered non-archival and will not be included in the workshop proceedings.

-------------------------------------------------

Best wishes,

Eval4NLP organizers

Website: https://eval4nlp.github.io/2023/index.html
Email: [email protected]

[1] https://eval4nlp.github.io/2023/index.html
[2] https://eval4nlp.github.io/2023/shared-task.html
[3] https://github.com/acl-org/acl-style-files
[4] https://openreview.net/group?id=aclweb.org/AACL-IJCNLP/2023/Workshop/Eval4NLP&referrer=%5BHomepage%5D(%2F)
[5] https://openreview.net/group?id=aclweb.org/AACL-IJCNLP/2023/Workshop/Eval4NLP_Previously_Reviewed&referrer=%5BHomepage%5D(%2F)


Summary

Eval4NLP 2023 : 4th Workshop on Evaluation and Comparison for NLP systems will take place in Bali, Indonesia. It is a one-day event held on Nov 1, 2023 (Wednesday).

Eval4NLP 2023 falls under the following areas: NLP, ARTIFICIAL INTELLIGENCE, etc. Submissions for this Workshop can be made by Aug 25, 2023. Authors can expect notification of acceptance by Oct 2, 2023. Upon acceptance, authors should submit the camera-ready version of the manuscript on or before Oct 15, 2023 via the official website of the Workshop.

Please check the official event website for possible changes before you make any travelling arrangements. Generally, events are strict with their deadlines. It is advisable to check the official website for all the deadlines.

Other Details of Eval4NLP 2023

  • Short Name: Eval4NLP 2023
  • Full Name: 4th Workshop on Evaluation and Comparison for NLP systems
  • Timing: 09:00 AM-06:00 PM (expected)
  • Fees: Check the official website of Eval4NLP 2023
  • Event Type: Workshop
  • Website Link: https://eval4nlp.github.io/2023/index.html
  • Location/Address: Bali, Indonesia


