Call for Papers
Optimization is a cornerstone of nearly all modern machine learning (ML) and deep learning (DL). Simple first-order gradient-based methods dominate the field for convincing reasons: low computational cost, simplicity of implementation, and strong empirical results.
Yet second- and higher-order methods are rarely used in DL, despite also having many strengths: convergence in fewer iterations, step sizes that are often determined explicitly by the method itself (e.g., via line searches or trust regions) rather than tuned by hand, and better parallelization than SGD. Additionally, many other scientific fields use second-order optimization with great success.
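To make the contrast concrete, here is a minimal NumPy sketch comparing a single gradient step with a damped Newton step on a toy quadratic; the objective, learning rate, and damping value are illustrative choices only, not any specific method from the workshop.

```python
import numpy as np

# Toy strongly convex quadratic: f(x) = 0.5 x^T A x - b^T x
A = np.array([[10.0, 0.0], [0.0, 1.0]])  # ill-conditioned Hessian
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b

def hess(x):
    return A

x = np.zeros(2)

# First-order step: cheap per iteration, but the step size must be tuned by hand.
lr = 0.05
x_grad = x - lr * grad(x)

# Second-order (damped Newton) step: curvature rescales the gradient, so the
# effective step size is set by the method; damping is illustrative.
damping = 1e-4
x_newton = x - np.linalg.solve(hess(x) + damping * np.eye(2), grad(x))

print("gradient step:", x_grad)    # small progress, skewed by conditioning
print("Newton step:  ", x_newton)  # lands (almost) on the minimizer A^{-1} b
```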
A driving factor for this gap is the large difference in development effort: by the time higher-order methods became tractable for DL, first-order methods such as SGD and its main variants (SGD with momentum, Adam, …) already had many years of maturity and mass adoption.
The purpose of this workshop is to address this gap, to create an environment where higher-order methods are fairly considered and compared against one another, and to foster healthy discussion, with the end goal of mainstream acceptance of higher-order methods in ML and DL.
Invited Speakers
- Amir Gholami (UC Berkeley)
- Coralia Cartis (University of Oxford)
- Frank E. Curtis (Lehigh University)
- Donald Goldfarb (Columbia University)
- Madeleine Udell (Stanford University)
Call for Papers
We welcome submissions to the workshop under the general theme of “Order up! The Benefits of Higher-Order Optimization in Machine Learning”. Examples of acceptable topics include, but are not limited to:
- Higher-order methods
- Adaptive gradient methods
- Novel higher-order-friendly models
- Higher-order theory papers
For submission details, please see https://order-up-ml.github.io/CFP/. Please submit through our CMT portal: https://cmt3.research.microsoft.com/HOOML2022.
Submission deadline: September 22, 2022 (AOE)
Acceptance notification: October 20, 2022 (AOE)
Final version due: TBD
Organizers
- Albert S. Berahas (University of Michigan)
- Jelena Diakonikolas (University of Wisconsin-Madison)
- Jarad Forristal (University of Texas at Austin)
- Brandon Reese (SAS Institute Inc.)
- Martin Takáč (MBZUAI)
- Yan Xu (SAS Institute Inc.)
Credits and Sources
HOO 2022: Order up! The Benefits of Higher-Order Optimization in Machine Learning (NeurIPS 2022)