Next-generation architectures and systems being deployed are characterized by high concurrency, low memory per core, and multiple levels of hierarchy and heterogeneity. These characteristics bring new challenges in energy efficiency, fault tolerance, and scalability. It is commonly believed that software bears the biggest share of the responsibility for tackling these challenges; in other words, this responsibility is delegated to next-generation programming models and their associated middleware and runtimes. This workshop focuses on different aspects of programming models, such as task-based parallelism (Charm++, OCR, Habanero, Legion, X10, HPX, etc.), PGAS (OpenSHMEM, UPC, CAF, Chapel, UPC++, etc.), Big Data (Hadoop, Spark, etc.), Deep Learning (Caffe, Microsoft CNTK, Google TensorFlow), directive-based languages (OpenMP, OpenACC), and hybrid MPI+X. It also focuses on their associated middleware (unified runtimes, interoperability for hybrid programming, tight integration of MPI+X, and support for accelerators) for next-generation systems and architectures.

The ultimate objective of the ESPM2 workshop is to serve as a forum that brings together researchers from academia and industry working in the areas of programming models, runtime systems, compilation, and languages, together with application developers.

Best Paper Award

Intel has generously offered to sponsor the Best Paper Award. This award will be given to the author(s) of the paper selected by the Technical Program Committee and the Program Chairs. The award will be determined based on the technical and scientific merit of the research, its impact on science and engineering, and the clarity with which the research is presented in the paper.

Keynote Address

We are happy to announce that Prof. William D. Gropp, Interim Director and Chief Scientist at the National Center for Supercomputing Applications and the Thomas M. Siebel Chair in Computer Science at the University of Illinois Urbana-Champaign, will deliver the keynote address at ESPM2'17.

Panel

Panel: Effective Programming Models for Deep Learning at Scale

    Artificial intelligence (AI) has been an interesting research topic for many decades but has struggled to enter mainstream use. Deep Learning (DL) is one form of AI that has recently become more practical and useful because of dramatic increases in computational power and in the amount of available training data. Research labs are already using Deep Learning to advance scientific investigations in numerous fields. Commercial enterprises are starting to make product development and marketing decisions based on machine learning models. However, there is a worrying skills gap between the hype and the reality of deriving business benefit from Deep Learning. To address this, we need to answer some urgent questions. What practical programming techniques (specifically, programming models and middleware options) should we be teaching new recruits to this area? What existing knowledge and experience (from HPC or elsewhere) should current practitioners be leveraging? Do traditional big-iron supercomputers and HPC software techniques (including MPI or PGAS) have a place in this vibrant new sphere, or is it all about high-level scripting, complex workflows, and elastic cloud resources?

Panel Moderator: Daniel Holmes, EPCC, The University of Edinburgh, UK.

Panel Members

  • Mike Houston, Senior Distinguished Engineer, NVIDIA
  • Prabhat, Data and Analytics Group Lead, NERSC, Lawrence Berkeley National Laboratory
  • Jeff Squyres, MPI Architect at Cisco Systems, Inc.
  • Rick Stevens, Associate Laboratory Director, Argonne National Laboratory
  • More details coming soon!

Organizing Committee

Program Chairs

Program Committee

Registration

The workshop does not have a separate registration site. All attendees need to use the registration system provided by SC'17. Please remember to select the workshop option when registering. Details about registration can be found on the main conference website.

Travel and Stay