ScaDL 2020 : Scalable Deep Learning over Parallel And Distributed Infrastructures

New Orleans
Event Date: May 22, 2020
Submission Deadline: February 01, 2020
Notification of Acceptance: February 28, 2020
Camera Ready Version Due: March 15, 2020




About

Deep Learning (DL) has recently received tremendous attention in the research community because of the impressive results obtained on a large number of machine learning problems. The success of state-of-the-art deep learning systems relies on training deep neural networks over massive amounts of training data, which typically requires large-scale distributed computing infrastructure. Running these jobs in a scalable and efficient manner, on cloud infrastructure or dedicated HPC systems, has given rise to several research topics specific to DL. The sheer size and complexity of deep learning models trained over large amounts of data make it hard for them to converge in a reasonable amount of time, demanding advances along multiple research directions such as model/data parallelism, model/data compression, distributed optimization algorithms for DL convergence, synchronization strategies, efficient communication, and hardware acceleration.

ScaDL seeks to advance the following research directions:

  • Asynchronous and Communication-Efficient SGD: Stochastic gradient descent is at the core of large-scale machine learning. Parallelizing SGD gradient computation across multiple nodes increases the data processed per iteration, but exposes the computation to communication and synchronization delays and to unpredictable node failures. There is therefore a critical need to design robust and scalable distributed SGD methods that achieve fast error convergence in spite of such system variabilities (a minimal sketch follows this list).
  • High performance computing aspects: Deep learning is highly compute intensive. Algorithms for kernel computations on commonly used accelerators (e.g., GPUs), efficient techniques for communicating gradients, and fast data loading from storage are critical for training performance.
  • Model and Gradient Compression Techniques: Techniques such as pruning weights and shrinking weight tensors reduce compute complexity, while lower-bit representations make more efficient use of memory and communication bandwidth (a quantization sketch also follows this list).
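
To make the data-parallel setting above concrete, the following is a minimal sketch of synchronous data-parallel SGD in Python/NumPy. It is an illustration only, not code from the workshop or from any submission: K simulated workers each compute a gradient on their own shard of a linear-regression problem, and the gradients are averaged (mimicking an all-reduce step) before a single model update. The problem sizes, learning rate, and worker count are arbitrary choices for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n, K = 5, 1000, 4                       # features, samples, simulated workers
    w_true = rng.normal(size=d)                # ground-truth parameters
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.01 * rng.normal(size=n)

    shards = np.array_split(np.arange(n), K)   # static data partition across workers
    w = np.zeros(d)
    lr = 0.1

    def worker_gradient(w, idx):
        # Mean-squared-error gradient on one worker's shard.
        Xi, yi = X[idx], y[idx]
        return 2.0 * Xi.T @ (Xi @ w - yi) / len(idx)

    for step in range(200):
        grads = [worker_gradient(w, idx) for idx in shards]  # parallel in practice
        g = np.mean(grads, axis=0)             # "all-reduce": average the K gradients
        w -= lr * g                            # single synchronized model update

    print("parameter error:", np.linalg.norm(w - w_true))

In a real deployment the averaging step is an all-reduce over the network (e.g., via MPI or NCCL collectives), which is exactly where the communication, synchronization, and straggler costs discussed above arise.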
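
Similarly, the following is a hedged sketch of one common gradient-compression scheme, uniform 8-bit quantization; the workshop does not prescribe this particular method, and the per-tensor scaling rule here is just one simple choice. Each gradient tensor is mapped into the int8 range before "transmission" and rescaled on receipt, cutting communication volume roughly 4x versus float32 at the cost of some quantization error.

    import numpy as np

    def quantize_int8(g):
        # Per-tensor scale so the largest entry maps to +/-127.
        scale = max(np.max(np.abs(g)) / 127.0, 1e-12)
        q = np.clip(np.round(g / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize_int8(q, scale):
        # Recover an approximate float32 gradient on the receiving side.
        return q.astype(np.float32) * scale

    g = np.random.default_rng(1).normal(size=1024).astype(np.float32)
    q, s = quantize_int8(g)
    g_hat = dequantize_int8(q, s)
    print("bytes sent: %d -> %d" % (g.nbytes, q.nbytes))
    print("max abs quantization error: %.4g" % np.max(np.abs(g - g_hat)))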

This intersection of distributed/parallel computing and deep learning is becoming critical and demands the specific attention that broader forums may not be able to provide. The aim of this workshop is to foster collaboration among researchers from the distributed/parallel computing and deep learning communities in order to share current approaches and results at the intersection of these areas.


Call for Papers

Areas of Interest

In this workshop, we solicit research papers on distributed deep learning that aim to achieve efficiency and scalability for deep learning jobs over distributed and parallel systems. Papers focusing on algorithms as well as on systems are welcome. We invite authors to submit papers on topics including but not limited to:

  • Deep learning on HPC systems
  • Deep learning for edge devices
  • Model-parallel and data-parallel techniques
  • Asynchronous SGD for training DNNs
  • Communication-efficient training of DNNs
  • Model/data/gradient compression
  • Learning in resource-constrained environments
  • Elastic training of machine learning and deep learning jobs
  • Hyper-parameter tuning for deep learning jobs
  • Hardware acceleration for deep learning
  • Scalability of deep learning jobs on large numbers of nodes
  • Deep learning on heterogeneous infrastructure
  • Efficient and scalable inference
  • Data storage/access in shared networks for deep learning jobs

Author Instructions

ScaDL 2020 accepts submissions in three categories:

  • Regular papers: 8-10 pages
  • Short papers: 4 pages
  • Extended abstracts: 1 page

The aforementioned page limits include all technical content, references, and appendices.

Papers should be formatted using the IEEE conference style, including figures, tables, and references. The IEEE conference style templates for MS Word and LaTeX provided by IEEE eXpress Conference Publishing are available for download; see the latest versions at https://www.ieee.org/conferences/publishing/templates.html

Submission Link

https://easychair.org/conferences/?conf=scadl2020

Deadlines

Submission deadline: Feb 1, 2020

Notifications: Feb 28, 2020

Camera Ready deadline: March 15, 2020

General Chairs

Christopher Carothers, RPI, USA

Ashish Verma, IBM Research AI, USA

Program Committee Chairs

K. R. Jayaram, IBM Research AI, USA

Parijat Dube, IBM Research AI, USA

Program Committee

Kangwook Lee, KAIST, Korea

Li Zhang, IBM Research, USA

Xiangru Lian, U Rochester, USA

Eduardo Rocha Rodrigues, IBM, Brazil

Wagner Meira Jr., UFMG, Brazil

Stacy Patterson, RPI, USA

Alex Gittens, RPI, USA

Catherine Schuman, ORNL, USA

Ignacio Blanquer, UPV, Spain

Leandro Balby Marinho, UFCG, Brazil

Chen Wang, IBM Research, USA

Publicity Chair

Danilo Ardagna, Politecnico di Milano, Italy

Steering Committee

Vijay K. Garg, University of Texas at Austin

Vinod Muthusamy, IBM Research AI

Yogish Sabharwal, IBM Research AI

Danilo Ardagna, Politecnico di Milano



Summary

ScaDL 2020 : Scalable Deep Learning over Parallel And Distributed Infrastructures will take place in New Orleans. It is a one-day event held on Friday, May 22, 2020.

ScaDL 2020 falls under the following areas: parallel computing and deep learning. Submissions for this Workshop can be made by Feb 01, 2020, and authors can expect notification of the results by Feb 28, 2020. Upon acceptance, authors should submit the final version of the manuscript on or before Mar 15, 2020 via the official website of the Workshop.

Please check the official event website for possible changes before making any travel arrangements; deadlines are generally strict.

Other Details of the ScaDL 2020

  • Short Name: ScaDL 2020
  • Full Name: Scalable Deep Learning over Parallel And Distributed Infrastructures
  • Timing: 09:00 AM-06:00 PM (expected)
  • Fees: Check the official website of ScaDL 2020
  • Event Type: Workshop
  • Website Link: https://2020.scadl.org
  • Location/Address: New Orleans





OTHER PARALLEL COMPUTING EVENTS

  • SPC 2024: 10th Workshop on Scheduling for Parallel Computing. Ostrava, Czech Republic. Sep 9, 2024
  • MAMHYP 2024: Seventh Workshop on Models, Algorithms and Methodologies for Hybrid Parallelism in new HPC Systems (MAMHYP-24). Ostrava, Czech Republic. Sep 9, 2024
  • PPAM 2024: 15th International Conference on Parallel Processing & Applied Mathematics. Ostrava, Czech Republic. Sep 8, 2024
  • PCDS 2024: The 1st International Symposium on Parallel Computing and Distributed Systems. Singapore. Sep 21, 2024
  • ISPDC 2024: 23rd International Symposium on Parallel and Distributed Computing. Chur, Switzerland. Jul 8, 2024

OTHER DEEP LEARNING EVENTS

  • DL for Neuro-heuristic Brain Analysis 2024: Workshop on Deep Learning for Neuro-heuristic Brain Analysis @ ICANN'24. Lugano, Switzerland. Sep 17, 2024
  • SS-GAIMHS 2024: Special Session on Generative AI for Medical and Healthcare System. Paris, France. Jun 26, 2024
  • FMLDS 2024: International Conference on Future Machine Learning and Data Science. Sydney. Nov 20, 2024
  • CVMI@CVPR 2024: 9th IEEE Workshop on Computer Vision for Microscopy Image Analysis (CVMI) @ CVPR 2024. Seattle WA, USA. Jun 18, 2024
  • MMCM@CVPR 2024: 2nd IEEE Workshop on Multimodal Content Moderation (MMCM) @ CVPR 2024. Seattle WA, USA. Jun 17, 2024