The seventh edition of the ESPM2 workshop, to be held as a full-day meeting in conjunction with the Supercomputing (SC22) conference in Dallas, Texas, focuses on programming models and runtimes for extreme-scale systems. Next-generation architectures and systems being deployed are characterized by high concurrency, low memory per core, and multiple levels of hierarchy and heterogeneity. These characteristics pose new challenges in energy efficiency, fault tolerance, and scalability. It is commonly believed that software bears the biggest share of the responsibility for tackling these challenges; in other words, this responsibility is delegated to the next generation of programming models and their associated middleware/runtimes. This workshop focuses on different aspects of programming models, such as task-based parallelism (Charm++, Legion, X10, HPX, etc.), PGAS (OpenSHMEM, UPC, CAF, Chapel, UPC++, etc.), Machine Learning (NVIDIA RAPIDS, Scikit-learn, etc.), Deep Learning (Caffe, Microsoft CNTK, Google TensorFlow, Facebook PyTorch), directive-based languages (OpenMP, OpenACC), and hybrid MPI+X. It also focuses on their associated middleware (unified runtimes, interoperability for hybrid programming, tight integration of MPI+X, and support for accelerators) for next-generation systems and architectures.

The ultimate objective of the ESPM2 workshop is to serve as a forum that brings together researchers from academia and industry working in the areas of programming models, runtime systems, and compilation and languages, as well as application developers.

ESPM2 2022 will be held as a full-day workshop in conjunction with the Supercomputing Conference (SC22) in Dallas, Texas, USA.


ESPM2 2022 welcomes original submissions in a range of areas, including but not limited to:

  • New programming models, languages and constructs for exploiting high concurrency and heterogeneity
  • Experience with and improvements for existing parallel languages and run-time environments such as:
    • MPI
    • PGAS (OpenSHMEM, UPC, CAF, Chapel, UPC++, etc.)
    • Directive-based programming (OpenMP, OpenACC)
    • Asynchronous task-based models (Charm++, Legion, X10, HPX, etc.)
    • Hybrid MPI+X models
    • Machine Learning (NVIDIA RAPIDS, Scikit-learn etc.), and
    • Deep Learning (TensorFlow, PyTorch, LBANN)
  • Parallel compilers, programming tools, and environments
  • Programming environments for heterogeneous multi-core systems and accelerators such as KNL, OpenPOWER, ARM, GPUs, FPGAs, MICs, and DSPs

Featured Talk


Bronis R. de Supinski, LLNL

Programming DOE Systems: Exascale and Beyond


Programming exascale systems was seen as a major challenge at the start of the efforts to reach that level of performance. Perhaps not surprisingly, despite predictions of the likely dominance of new languages, users of DOE exascale systems still rely heavily on the MPI + OpenMP model that has dominated HPC for several years. Even emerging C++ abstraction layers such as Kokkos and RAJA often use the familiar MPI + OpenMP model in their backends. Thus, this talk will describe the implementation of the MPI + OpenMP model on the El Capitan and Frontier DOE exascale systems, as well as how OpenMP has evolved, and will continue to evolve, to remain a key part of the large-scale programming ecosystem.

Invited Speakers

Panel Discussion

Title: AI for HPC


Ali Jannesari, Iowa State University


Organizing Committee

Program Chairs

Web and Publicity Chair

Program Committee


The workshop does not have a separate registration site. All attendees need to use the registration system provided by SC22. Please remember to select the workshop option when registering. Details about registration can be found on the main conference website.

Travel and Stay