Bench 2025: The 17th BenchCouncil International Symposium on Evaluation Science and Engineering

Chengdu, China
Event Date: December 03, 2025 - December 04, 2025
Abstract Submission Deadline: July 24, 2025
Submission Deadline: July 31, 2025
Notification of Acceptance: August 30, 2025
Camera Ready Version Due: October 31, 2025




CALL FOR PAPERS

==========================================================

The 17th BenchCouncil International Symposium on Evaluation Science and Engineering (Bench 2025)

https://www.benchcouncil.org/bench2025/

Abstract Deadline: July 24, 2025, 08:00 PM AoE
Submission Deadline: July 31, 2025, 08:00 PM AoE
Notification: August 30, 2025, 11:59 PM AoE
Final Papers Due: October 31, 2025, 11:59 PM AoE
Conference Date: December 3 - 4, 2025

Venue: Chengdu, China
Chengdu is the homeland of giant pandas, embraced by enchanting landscapes, rich artistic heritage, and irresistible cuisine. It is a city where timeless tradition harmonizes with modern rhythm.

Submission website: https://bench2025.hotcrp.com

==========================================================


Introduction
----------------

The Bench conference has held 16 successful editions as the BenchCouncil Symposium on Benchmarking, Measuring and Optimizing. This year, we have rebranded it as a cutting-edge conference on Evaluation Science and Engineering, fully aligned with the mission of the International Open Benchmark Council (BenchCouncil). Evaluation is an essential and universal human activity. The discipline of Evaluation Science and Engineering (Evaluatology), pioneered by BenchCouncil, aims to develop comprehensive, rigorous, and scientific evaluation methodologies that transcend ad hoc or purely empirical approaches.

Bench 2025 upholds this mission. It is an interdisciplinary international symposium seeking contributions from a wide range of domains, including computer science, AI, medicine, education, finance, psychology, business, and more. We particularly welcome state-of-the-practice work, which is crucial for bridging research and real-world impact.


Highlights
-----------------

- Official release of the monograph Evaluatology: The Science and Engineering of Evaluation
- Award ceremony for the prestigious BenchCouncil Achievement Award. Past recipients include Turing Award laureates.
- Updates from six International Evaluatology Research Centers and three standardization working groups focusing on open source, LLMs, and the low-altitude economy.


Organization
-----------------

General Co-Chairs
Jianfeng Zhan, ICT, Chinese Academy of Sciences, China
Weiping Li, Oklahoma State University, USA, and Civil Aviation Flight University of China, China

Program Co-Chairs
Lin Zou, Civil Aviation Flight University of China, China
Wei Wang, East China Normal University, China

Program Vice-Chairs
Fanda Fan, University of Chinese Academy of Sciences, China
Yushan Su, Waymo LLC, USA

Bench Steering Committee
Jack J. Dongarra, University of Tennessee, USA
Geoffrey Fox, Indiana University, USA
D. K. Panda, The Ohio State University, USA
Felix Wolf, TU Darmstadt, Germany
Xiaoyi Lu, University of California, Merced, USA
Resit Sendag, University of Rhode Island, USA
Wanling Gao, ICT, Chinese Academy of Sciences, China
Jianfeng Zhan, BenchCouncil, China

Award Committee
2025 BenchCouncil Achievement Award Committee:
D. K. Panda, The Ohio State University, USA
Lizy Kurian John, The University of Texas at Austin, USA
Geoffrey Fox, Indiana University, USA
Jianfeng Zhan, University of Chinese Academy of Sciences, China
Tony Hey, Rutherford Appleton Laboratory STFC, UK
David J. Lilja, University of Minnesota, Minneapolis, USA
Jack J. Dongarra, University of Tennessee, USA
John L. Henning, Oracle, USA
Lieven Eeckhout, Universiteit Gent, Belgium

Web Co-chairs
Jiahui Dai, BenchCouncil


Call for Papers
------------------------

The Bench conference encompasses a wide range of topics in benchmarks, datasets, metrics, indexes, measurement, evaluation, optimization, supporting methods and tools, and other best practices in computer science, medicine, finance, education, management, etc. Bench's multidisciplinary and interdisciplinary emphasis provides an ideal environment for developers and researchers from different areas and communities to discuss practical and theoretical work. The topics of interest include, but are not limited to, the following:

-- Evaluation theory and methodology
** Formal specification of evaluation requirements
** Development of evaluation models
** Design and implementation of evaluation systems
** Analysis of evaluation risk
** Cost modeling for evaluations
** Accuracy modeling for evaluations
** Evaluation traceability
** Identification and establishment of evaluation conditions
** Equivalent evaluation conditions
** Design of experiments
** Statistical analysis techniques for evaluations
** Methodologies and techniques for eliminating confounding factors in evaluations
** Analytical modeling techniques and validation of models
** Simulation and emulation-based modeling techniques and validation of models
** Development of methodologies, metrics, abstractions, and algorithms specifically tailored for evaluations

-- The engineering of evaluation
** Benchmark design and implementation
** Benchmark traceability
** Establishing least equivalent evaluation conditions
** Index design and implementation
** Scale design and implementation
** Evaluation standard design and implementation
** Evaluation and benchmark practice
** Tools for evaluations
** Real-world evaluation systems
** Testbeds

-- Datasets
** Explicit or implicit problem definitions deduced from the dataset
** Detailed descriptions of research or industry datasets, including the methods used to collect the data and technical analyses supporting the quality of the measurements
** Analyses or meta-analyses of existing data
** Systems, technologies, and techniques that advance data sharing and reuse to support reproducible research
** Tools that generate large-scale data while preserving their original characteristics
** Evaluating the rigor and quality of the experiments used to generate the data and the completeness of the data description

-- Benchmarking
** Summary and review of state-of-the-art and state-of-the-practice
** Searching and summarizing industry best practices
** Evaluation and optimization of industry practice
** Retrospectives of industry practice
** Characterizing and optimizing real-world applications and systems
** Evaluations of state-of-the-art solutions in real-world settings

-- Measurement and testing
** Workload characterization
** Instrumentation, sampling, tracing, and profiling of large-scale, real-world applications and systems
** Collection and analysis of measurement and testing data that yield new insights
** Measurement and testing-based modeling (e.g., workloads, scaling behavior, and assessment of performance bottlenecks)
** Methods and tools to monitor and visualize measurement and testing data
** Systems and algorithms that build on measurement and testing-based findings
** Reappraisal of previous empirical measurements and measurement-based conclusions
** Reappraisal of previous empirical testing and testing-based conclusions


Paper Submission
------------------------

Papers must be submitted in PDF. For a full paper, the page limit is 15 pages in the LNCS format, not including references. For a short paper, the page limit is 12 pages in the LNCS format, not including references. The review process follows a strict double-blind policy, per the established Bench conference norms. Submissions will be judged on the merit of their ideas rather than on their length. After the conference, the proceedings will be published by Springer LNCS (pending; indexed by EI). Extended versions of selected outstanding papers will be invited to the BenchCouncil Transactions on Benchmarks, Standards and Evaluations (May 2025 CiteScore: 16).

Please note that the LNCS format is the final format for publication. At least one author must pre-register for the symposium, and at least one author must attend the symposium to present the paper. Papers for which no author is pre-registered will be removed from the proceedings.

Formatting Instructions
Please make sure your submission satisfies ALL of the following requirements:
- All authors and affiliation information must be anonymized.
- Papers must be submitted in printable PDF format.
- Please number the pages of your submission.
- The submission must be formatted for black-and-white printers. Please make sure your figures are readable when printed in black and white.
- The submission must describe unpublished work that is not currently under review at any other conference or journal.

Submission site: https://bench2025.hotcrp.com/
LNCS latex template: https://www.benchcouncil.org/file/llncs2e.zip


Technical Program Committees
------------------------

Ana Gainaru, Oak Ridge National Laboratory, USA
Bin Hu, Institute of Computing Technology, Chinese Academy of Sciences, China
Biwei Xie, Institute of Computing Technology, Chinese Academy of Sciences, China
Ce Zhang, ETH Zurich, Switzerland
Chen Zheng, Institute of Software, Chinese Academy of Sciences, China
Chunjie Luo, Institute of Computing Technology, Chinese Academy of Sciences, China
Emmanuel Jeannot, Inria, France
Fei Sun, Meta, USA
Gang Lu, Tencent, China
Gregory Diamos, Baidu, China
Guangli Li, Institute of Computing Technology, Chinese Academy of Sciences, China
Gwangsun Kim, Pohang University of Science and Technology, Korea
Khaled Ibrahim, Lawrence Berkeley National Laboratory, USA
Krishnakumar Nair, Facebook, USA
Lei Wang, Institute of Computing Technology, Chinese Academy of Sciences, China
Mario Marino, Leeds Beckett University, UK
Miaoqing Huang, University of Arkansas, USA
Murali Emani, Argonne National Laboratory, USA
Nana Wang, Henan University, China
Narayanan Sundaram, Meta, USA
Nicolas Rougier, Inria, France
Peter Mattson, Google, USA
Piotr Luszczek, University of Tennessee, USA
Rui Ren, Beijing Open Source IC Academy, China
Sascha Hunold, TU Wien, Austria
Shengen Yan, SenseTime, China
Shin-ying Lee, AMD, USA
Steven Farrell, Lawrence Berkeley National Laboratory, USA
Vladimir Getov, University of Westminster, UK
Wanling Gao, Institute of Computing Technology, Chinese Academy of Sciences, China
Woongki Baek, Ulsan National Institute of Science and Technology, Korea
Xiaoyi Lu, University of California, Merced, USA
Yunyou Huang, Guangxi Normal University, China
Zhen Jia, Amazon, China


Summary

Bench 2025: The 17th BenchCouncil International Symposium on Evaluation Science and Engineering will take place in Chengdu, China. It is a two-day event starting on Dec 3, 2025 (Wednesday) and concluding on Dec 4, 2025 (Thursday).

Bench 2025 falls under the following areas: EVALUATION, BENCHMARK, DATASET, MEASUREMENT AND TESTING, etc. Submissions for this symposium can be made by Jul 31, 2025. Authors can expect notification of acceptance by Aug 30, 2025. Upon acceptance, authors should submit the final version of the manuscript on or before Oct 31, 2025 through the official website of the symposium.

Please check the official event website for possible changes before making any travel arrangements. Events are generally strict with their deadlines, so it is advisable to verify all deadlines on the official website.

Other Details of Bench 2025

  • Short Name: Bench 2025
  • Full Name: The 17th BenchCouncil International Symposium on Evaluation Science and Engineering
  • Timing: 09:00 AM-06:00 PM (expected)
  • Fees: Check the official website of Bench 2025
  • Event Type: Symposium
  • Website Link: https://www.benchcouncil.org/bench2025/
  • Location/Address: Chengdu, China


