Next-generation architectures and systems are characterized by high concurrency, low memory per core, and multiple levels of hierarchy and heterogeneity. These characteristics pose new challenges in energy efficiency, fault tolerance, and scalability. It is commonly believed that software bears the largest share of the responsibility for tackling these challenges; in other words, this responsibility is delegated to next-generation programming models and their associated middleware and runtimes. This workshop focuses on different aspects of programming models, including task-based parallelism (X10, OCR, Habanero, Legion, Charm++, HPX), PGAS (OpenSHMEM, UPC, CAF, Chapel, UPC++, etc.), directive-based languages (OpenMP, OpenACC), accelerator programming (CUDA, OpenCL), and hybrid MPI+X. It also focuses on the associated middleware (unified runtimes, interoperability for hybrid programming, tight integration of MPI+X, and support for accelerators) for next-generation systems and architectures. The objective of the ESPM2 workshop is to serve as a forum that brings together researchers from academia and industry working in the areas of programming models, runtime systems, compilation, and languages, together with application developers, to share knowledge and experience.

Topics of interest to the ESPM2 workshop include (but are not limited to):

  • New programming models, languages and constructs for exploiting high concurrency and heterogeneity
  • Experience with and improvements for existing parallel languages and run-time environments such as:
    • MPI
    • PGAS (OpenSHMEM, UPC, CAF, Chapel, UPC++, etc.)
    • Directive-based programming (OpenMP, OpenACC)
    • Asynchronous task-based models (Charm++, OCR, Habanero, Legion, X10, HPX, etc.)
    • Hybrid MPI+X models
    • Big Data (Hadoop, Spark, etc.), and
    • Deep Learning (Caffe, Microsoft CNTK, Google TensorFlow)
  • Parallel compilers, programming tools, and environments
  • Software and system support for extreme scalability including fault tolerance
  • Programming environments for heterogeneous multi-core systems and accelerators such as KNL, OpenPOWER, ARM, GPUs, FPGAs, MICs, and DSPs

Papers should present original research and should provide sufficient background material to make them accessible to the broader community.

Workshop Program

8:45 - 9:00

Opening Remarks

9:00 - 10:00


Speaker: Thomas Sterling, Professor of Electrical Engineering, Department of Intelligent Systems Engineering; Director, Center for Research in Extreme Scale Technologies; School of Informatics and Computing, Indiana University

Title: The Quantum Step in Parallel Execution through Dynamic Adaptive Runtime and Programming Strategies

Abstract: With the technology asymptote of the nano-scale and the approaching end of Moore's Law, continued performance gains must rely on more than incremental extensions of conventional practices and must exploit innovations in execution concepts, system (hardware and software) structures, and operational methods. An expanding community of researchers and developers is actively exploring the quantum step of dynamic adaptive computing techniques for significant gains in efficiency and scalability. These approaches have variously involved runtime control, programming models, compiler enhancements, and even architecture extensions to break the bottlenecks of blocking due to overheads, contention, and latencies, and of starvation due to static supervision. Instead, these new tools and environments enable dynamic resource management and task scheduling with varying degrees of introspection, including parallelism discovery extracted and exploited from meta-data. But this inchoate revolution is challenged, in an exciting way, by a diversity of open issues, wide variation in integrated semantics, and alternative implementation mechanisms. Perhaps most important is that the opportunity for advantage has yet to be proven or rigorously bounded. This presentation will discuss the domain of choices and questions that this pioneering discipline is investigating and the cross-cutting system solutions that have yet to be determined. Specific examples will be drawn from the speaker's experience with the HPX-5 runtime, but references to the broad range of endeavors will be included. Questions from the audience are welcome and encouraged throughout this address.

10:00 - 10:30

Coffee Break

10:30 - 12:10

Research Paper Session I: Full Papers

Best Paper Award: In-Staging Data Placement for Asynchronous Coupling of Task-Based Scientific Workflows, Qian Sun, Melissa Romanus, Tong Jin, Hongfeng Yu, Peer-Timo Bremer, Steve Petruzza, Scott Klasky and Manish Parashar

SWE-X10: Simulating shallow water waves with lazy activation of patches using ActorX10, Alexander Pöppl, Michael Bader, Tobias Schwarzer and Michael Glass

PGAS Communication Runtime for Extreme Large Data Computation, Ryo Matsumiya and Toshio Endo

A Scalable Task Parallelism Approach For LU Decomposition With Multicore CPUs, Verinder Rana, Meifeng Lin and Barbara Chapman

12:10 - 12:30

Research Paper Session II: Short Papers

Performance Portability in the Uintah Runtime System Through the Use of Kokkos, Daniel Sunderland, Brad Peterson, John Schmidt, Alan Humphrey, Jeremy Thornock and Martin Berzins

Runtime Coordinated Heterogeneous Tasks in Charm++, Michael Robson, Ronak Buch and Laxmikant Kale

12:30 - 12:40

Closing Remarks