The eighth edition of the ESPM2 workshop, held as a full-day meeting in conjunction with the Supercomputing (SC23) conference in Denver, Colorado, focuses on programming models and runtimes for extreme-scale systems. Next-generation architectures and systems being deployed are characterized by high concurrency, low memory per core, and multiple levels of hierarchy and heterogeneity. These characteristics bring new challenges in energy efficiency, fault tolerance, and scalability. It is widely believed that software bears the largest share of the responsibility for tackling these challenges; in other words, this responsibility is delegated to next-generation programming models and their associated middleware and runtimes. This workshop focuses on different aspects of programming models, such as task-based parallelism (Charm++, Legion, X10, HPX, etc.), PGAS (OpenSHMEM, UPC, CAF, Chapel, UPC++, etc.), Machine Learning (NVIDIA RAPIDS, Scikit-learn, etc.), Deep Learning (Caffe, Microsoft CNTK, Google TensorFlow, Facebook PyTorch), directive-based languages (OpenMP, OpenACC), and hybrid MPI+X. It also covers their associated middleware (unified runtimes, interoperability for hybrid programming, tight integration of MPI+X, and support for accelerators) for next-generation systems and architectures.

The ultimate objective of the ESPM2 workshop is to serve as a forum that brings together researchers from academia and industry working on programming models, runtime systems, compilation and languages, along with application developers.

ESPM2 2023 will be held as a full-day workshop in conjunction with SC23 in Denver, Colorado, USA.

Topics

ESPM2 2023 welcomes original submissions in a range of areas, including but not limited to:

  • New programming models, languages and constructs for exploiting high concurrency and heterogeneity
  • Experience with and improvements for existing parallel languages and run-time environments such as:
    • MPI
    • PGAS (OpenSHMEM, UPC, CAF, Chapel, UPC++, etc.)
    • Directive-based programming (OpenMP, OpenACC)
    • Asynchronous Task-based models (Charm++, Legion, X10, HPX, etc.)
    • Hybrid MPI+X models
    • Machine Learning (NVIDIA RAPIDS, Scikit-learn, etc.), and
    • Deep Learning (TensorFlow, PyTorch, LBANN)
  • Parallel compilers, programming tools, and environments
  • Programming environments for heterogeneous multi-core systems and accelerators such as KNL, OpenPOWER, ARM, GPUs, FPGAs, MICs, and DSPs

Featured Talk

Speaker

Kalyan Kumaran, Argonne National Laboratory

Title: Aurora Exascale Architecture

Abstract:

Aurora is an exascale supercomputer in the final stages of assembly at the Argonne Leadership Computing Facility (ALCF) in the U.S. This talk will focus on the Aurora hardware and software architectures with emphasis on the interconnect and programming models, and their impact on application performance and scalability.

Invited Speakers

Panel Discussion

Title: Top 5 Challenges in Programming Models and Runtimes for Large Language Model Training/Inference

Moderator

Zhao Zhang, Rutgers University

Members

  • Torsten Hoefler, ETH Zurich
  • Leon Song, Microsoft
  • Rick Stevens, University of Chicago
  • Rio Yokota, Tokyo Institute of Technology, Japan
Organizing Committee

Program Chairs

Web and Publicity Chair

Program Committee

Registration

The workshop does not have a separate registration site. All attendees need to use the registration system provided by SC23. Please remember to select the workshop option when registering. Details about registration can be found on the main conference website.

Travel and Stay