Categories: MACHINE LEARNING, DATA MINING, DESIGN
About
A vital part of proposing new machine learning and data mining approaches is evaluating them empirically to assess their capabilities. Numerous choices go into setting up such experiments: how to choose the data, how to preprocess them (or not), potential problems associated with the selection of datasets, which other techniques to compare against (if any), which metrics to evaluate, and, last but not least, how to present and interpret the results. Learning how to make those choices on the job, often by copying the evaluation protocols used in the existing literature, can easily lead to the development of problematic habits. Numerous, albeit scattered, publications have called attention to those questions [1-5] and have occasionally called into question published results, or the usability of published methods. At a time of intense discussion about a reproducibility crisis in the natural, social, and life sciences, and with conferences such as SIGMOD, KDD, and ECML/PKDD encouraging researchers to make their work as reproducible as possible, we feel it is important to bring researchers together to discuss these issues on a fundamental level.

An issue directly related to the first choice mentioned above is the following: even the best-designed experiment carries only limited information if the underlying data are lacking. We therefore also want to discuss questions related to the availability of data: whether they are reliable and diverse, and whether they correspond to realistic and/or challenging problem settings.
Call for Papers
In this workshop, we mainly solicit contributions that discuss those questions on a fundamental level, take stock of the state of the art, offer theoretical arguments, or take well-argued positions, as well as actual evaluation papers that offer new insights, e.g. by questioning published results or shining a spotlight on the characteristics of existing benchmark datasets. As such, topics include, but are not limited to:
The workshop will feature a mix of invited speakers and a number of accepted presentations with ample time for questions, since those contributions will be less technical and more philosophical in nature, as well as a panel discussion on the current state of the field, the areas most urgently in need of improvement, and recommendations for achieving those improvements. An important objective of this workshop is a document synthesizing these discussions, which we intend to publish at a prominent venue.
Summary
EDML 2020, the 2nd Workshop on Evaluation and Experimental Design in Data Mining and Machine Learning, will take place in Ghent, Belgium. It is a five-day event starting on Sep 14, 2020 (Monday) and concluding on Sep 18, 2020 (Friday). EDML 2020 falls under the following areas: machine learning, data mining, and design. Submissions for this workshop are due by Jun 09, 2020, and authors can expect notification of the results by Jul 07, 2020. Upon acceptance, authors should submit the final version of the manuscript to the official website of the workshop on or before Jul 21, 2020. Please check the official event website for possible changes before making any travel arrangements; events are generally strict with their deadlines, so it is advisable to verify all deadlines there.
Credits and Sources
[1] EDML 2020: 2nd Workshop on Evaluation and Experimental Design in Data Mining and Machine Learning