About
The workshop aims to highlight advancements in event-driven and data-driven models of computation for extreme-scale computing, as well as in parallel and distributed computing for high-performance computing. It also aims to foster exchanges between dataflow practitioners at both the theoretical and practical levels.

Workshop theme: With the advent of true many-core systems, it has become unreasonable to rely solely on control-based parallel models of computation to achieve high scalability. Dataflow-inspired models of computation, once discarded by the sequential programming crowd, are again considered serious contenders for improving programmability, performance, and scalability in highly parallel and extreme-scale systems, as well as power and energy efficiency: they (at least partially) relieve the parallel application programmer of tedious and perilous synchronization bookkeeping, and they provide clear scheduling points for the system software and hardware. However, to reach such high scalability levels, extreme-scale systems rely on heterogeneity, hierarchical memory subsystems, and other complex features. Meanwhile, legacy programming and execution models such as MPI and OpenMP are adding asynchronous and data-driven constructs, all while trying to account for the very complex hardware targeted by parallel applications. Consequently, programming and execution models that combine legacy control-flow and dataflow aspects of computing have become increasingly complex to handle. Developing new models and their implementations, from the application programmer level through the system level down to the hardware level, is key to providing better data- and event-driven systems that can efficiently exploit the diversity of current high-performance systems for extreme-scale parallel computing.

To this end, the whole stack, from the application programming interface down to the hardware, must be investigated for programmability, performance, scalability, energy and power efficiency, as well as resiliency and fault tolerance.
Call for Papers
Scope of the workshop: Researchers and practitioners from all over the world, from both academia and industry, working in the areas of language, system software, and hardware design, parallel computing, execution models, and resiliency modeling are invited to discuss state-of-the-art solutions, novel issues, recent developments, applications, methodologies, techniques, experience reports, and tools for the development and use of dataflow models of computation. Topics of interest include, but are not limited to, the following:
Likely participants: computer engineers and computer scientists, and parallel-computing and compiler researchers working in high-performance computing.
Summary
DFM 2020: International Workshop on Data-Flow Models of Computation of Extreme-Scale Computing will take place in Madrid, Spain. It is a five-day event, starting on Monday, Jul 13, 2020 and ending on Friday, Jul 17, 2020. DFM 2020 falls under the following areas: high-performance computing, dataflow, parallel computing, execution models, etc. Submissions for this workshop can be made by Apr 09, 2020, and authors can expect notification of the results by May 01, 2020. Upon acceptance, authors should submit the final version of the manuscript on or before May 15, 2020 through the official website of the workshop. Please check the official event website for possible changes before making any travel arrangements; events are generally strict about their deadlines, so it is advisable to verify all deadlines there.