The fourth edition of the ESPM2 workshop, held as a full-day meeting at the Supercomputing (SC18) conference in Dallas, Texas, focuses on programming models and runtimes for extreme-scale systems. Next-generation architectures and systems being deployed are characterized by high concurrency, low memory per core, and multiple levels of hierarchy and heterogeneity. These characteristics bring new challenges in energy efficiency, fault tolerance, and scalability. It is commonly believed that software bears the largest share of the responsibility for tackling these challenges; in other words, this responsibility is delegated to next-generation programming models and their associated middleware and runtimes. The workshop covers different aspects of programming models, including task-based parallelism (Charm++, OCR, X10, HPX, etc.), PGAS (OpenSHMEM, UPC, CAF, Chapel, UPC++, etc.), Big Data (Hadoop, Spark, etc.), Deep Learning (Caffe, Microsoft CNTK, Google TensorFlow), directive-based languages (OpenMP, OpenACC), and hybrid MPI+X. It also covers their associated middleware (unified runtimes, interoperability for hybrid programming, tight integration of MPI+X, and support for accelerators and FPGAs) for next-generation systems and architectures.

The ultimate objective of the ESPM2 workshop is to serve as a forum that brings together researchers from academia and industry working on programming models, runtime systems, compilers, and languages, along with application developers.

Best Paper Award

Mellanox has generously offered to sponsor the Best Paper Award. The award will be given to the author(s) of the paper selected by the Technical Program Committee and the Program Chairs, based on the technical and scientific merit of the work, its impact on science and engineering, and the clarity of presentation of the research in the paper.

Organizing Committee


Program Chairs

Program Committee

8:50 - 9:00

Opening Remarks

Hari Subramoni, Karl Schulz, and Dhabaleswar K. (DK) Panda
The Ohio State University and UT Austin

Abstract

Machines with peak performance exceeding one exaflop/s are just around the corner, and promises of sustained exaflop/s machines abound. Are there significant challenges in runtime frameworks and languages that need to be met to harness the power of these machines? We will examine this question and associated issues.

There are some architectural trends that are becoming clear and some that are only hazily appearing. Individual nodes are getting “fatter” computationally. Accelerators such as GPGPUs, and possibly FPGAs, are likely parts of the exascale landscape. High-bandwidth memory, non-coherent caches (such as the caches in GPGPUs typically used for constant memory), NVRAMs, and the resulting deeper and more complex memory hierarchies will also have to be dealt with.

There is an argument going around in the community that we have already figured out how to deal with tens of thousands of nodes (100,000 with BG/Q), and that, since the number of nodes is not likely to increase, we (the extreme-scale HPC community) should now focus research almost entirely on within-node issues. I believe this argument is not quite well founded. I will explain why power/energy/temperature, whole-machine (multi-job) optimizations, and cross-node issues such as communication optimization, load balancing, and fault tolerance still deserve significant attention from the exascale runtime and language community. At the same time, there exist issues in handling within-node parallelism that arise mainly or only in the context of large multi-node runs.

I will also address the question of how our community should approach research if a large segment of funding sources and application community have started thinking some of the above issues are irrelevant. What should the courage of our convictions lead us to?


Bio

Laxmikant Kale

Laxmikant (Sanjay) Kale is the director of the Parallel Programming Laboratory and the Paul and Cynthia Saylor Professor of Computer Science at the University of Illinois at Urbana-Champaign, where he has worked since 1985. His research spans parallel computing, with a focus on adaptive runtime systems, and is guided by the belief that only interdisciplinary research involving multiple CSE applications can lead to well-honed abstractions with the potential for long-term impact. His collaborations include the biomolecular simulation program NAMD (Gordon Bell Award, 2002) and applications in computational cosmology, quantum chemistry, and other areas. He takes pride in his group's success in distributing software embodying their research ideas, including Charm++ and AMPI.

His degrees include a B.Tech. in Electronics Engineering from BHU, India (1977), an M.E. from IISc Bangalore, India (1979), and a Ph.D. in Computer Science from SUNY Stony Brook (1985). Prof. Kale is a Fellow of the ACM and IEEE and a winner of the 2012 IEEE Sidney Fernbach Award.

10:00 - 10:30

Break

Best Paper: Distributed Memory Futures for Compile-Time, Deterministic-by-Default Concurrency in Distributed C++ Applications

Jeremiah Wilke, David Hollman, Cannada Lewis, Aram Markosyan, Nicolas Morales
Sandia National Laboratories
Abstract

Futures are a widely-used abstraction for enabling deferred execution in imperative programs. Deferred execution enqueues tasks rather than explicitly blocking and waiting for them to execute. Many task-based programming models with some form of deferred execution rely on explicit parallelism that is the responsibility of the programmer. Deterministic-by-default (implicitly parallel) models instead use data effects to derive concurrency automatically, alleviating the burden of concurrency management. Both implicitly and explicitly parallel models are particularly challenging for imperative object-oriented programming. Fine-granularity parallelism across member functions or amongst data members may exist, but is often ignored. In this work, we define a general permissions model that leverages the C++ type system and move semantics to define an asynchronous programming model embedded in the C++ type system. Although a default distributed memory semantic is provided, the concurrent semantics are entirely configurable through C++ constexpr integers. Correct use of the defined semantic is verified at compile-time, allowing deterministic-by-default concurrency to be safely added to applications. Here we demonstrate the use of these “extended futures” for distributed memory asynchronous communication and load balancing. An MPI particle-in-cell application is modified with the wrapper class using this task model, with results presented for a Haswell system up to 64 nodes.
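To make the central idea concrete, here is a small, hedged sketch of how access permissions on an asynchronous value can be checked at compile time using the C++ type system and move semantics. All names (async_handle, Access, to_read) are hypothetical and are not the paper's API; the paper's actual model is richer and configurable via constexpr integers.

```cpp
// Hypothetical sketch, not the paper's API: a move-only handle whose
// read/write permission is part of its type, so misuse fails to compile.
#include <utility>

enum class Access { Read, Modify };

template <class T, Access Perm>
class async_handle {
  T* data_;  // payload produced by some earlier task
public:
  explicit async_handle(T* d) : data_(d) {}
  async_handle(async_handle&&) = default;      // move-only: access rights
  async_handle(const async_handle&) = delete;  // are transferred, not copied

  const T& get() const { return *data_; }      // reads allowed for any Perm

  void set(const T& v) {                       // writes compile only with
    static_assert(Perm == Access::Modify,      // Modify permission
                  "write access requires a Modify handle");
    *data_ = v;
  }

  T* release() { return std::exchange(data_, nullptr); }
};

// Downgrading Modify -> Read consumes the stronger handle, so no task can
// retain a write capability once read-only access has been handed out.
template <class T>
async_handle<T, Access::Read> to_read(async_handle<T, Access::Modify>&& h) {
  return async_handle<T, Access::Read>(h.release());
}
```

A runtime built on such handles can schedule tasks holding only Read permissions on the same data concurrently, while a Modify handle serializes access, which is the essence of deriving deterministic-by-default concurrency from declared effects.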

Design of Data Management for Multi-SPMD Workflow Programming Model

Thomas Dufaud1, Miwako Tsuji2, Mitsuhisa Sato2
University of Versailles1, RIKEN Center for Computational Science2.
Abstract

As both the complexity of algorithms and of architectures increases, the development of scientific software becomes a challenge. In order to exploit future architectures, we consider a Multi-SPMD workflow programming model. Data transfer between tasks during computation then depends strongly on the architecture and middleware used. In this study, we design an adaptive system for data management in a parallel programming environment that can express two levels of parallelism. We show how considering multiple strategies, based on I/O and on direct message passing, can improve performance and fault tolerance in the YML-XMP environment. On a real application with a sufficiently large amount of local data, speedups ranging from 1.36 for a mixed strategy to 1.73 for a direct message-passing method are obtained compared to our original design.
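As a loose illustration of the adaptivity described above (the class and function names below are hypothetical and are not the YML-XMP interface), a data-management layer can expose interchangeable transfer strategies and pick one per transfer:

```cpp
// Illustrative sketch only; names are hypothetical.
#include <memory>
#include <string>
#include <vector>

struct TransferStrategy {
  virtual ~TransferStrategy() = default;
  virtual void send(const std::string& consumer_task,
                    const std::vector<double>& data) = 0;
};

// File-based transfer: the producer writes to shared storage and the
// consumer reads it back; slower, but either side can fail and recover.
struct FileIOTransfer : TransferStrategy {
  void send(const std::string& consumer_task,
            const std::vector<double>& data) override { /* write to disk */ }
};

// Direct transfer: the buffer is handed straight to the communication
// middleware (e.g. a message-passing send) addressed to the consumer.
struct DirectMessageTransfer : TransferStrategy {
  void send(const std::string& consumer_task,
            const std::vector<double>& data) override { /* send message */ }
};

// Per-transfer choice: direct messaging when latency dominates, file I/O
// when fault tolerance matters more, or a mix of the two.
std::unique_ptr<TransferStrategy> choose_strategy(bool need_fault_tolerance) {
  if (need_fault_tolerance) return std::make_unique<FileIOTransfer>();
  return std::make_unique<DirectMessageTransfer>();
}
```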

Integration of CUDA Processing within the C++ library for parallelism and concurrency (HPX)

Patrick Diehl1, Hartmut Kaiser1, Thomas Heller1, Madhavan Seshadri2
Louisiana State University1, Nanyang Technological University, Singapore2.
Abstract

Experience shows that on today's high-performance systems it is difficult to achieve high utilization of different accelerator cards in conjunction with high utilization of all other parts of the system. Future architectures, such as exascale clusters, are expected to aggravate this issue as the number of cores increases and memory hierarchies become deeper. One big challenge for distributed applications is to guarantee high utilization of all available resources, including local or remote accelerator cards on a cluster, while fully using all available CPU resources and integrating the GPU work into the overall programming model.

For the integration of CUDA code we extended HPX to enable asynchronous data transfers to and from the GPU device and the asynchronous invocation of CUDA kernels on this data. Both operations are well integrated into the general programming model of HPX, which allows any GPU operation to be seamlessly overlapped with work on the main cores. Any user-defined CUDA kernel can be launched.

We present asynchronous implementations of the data transfers and kernel launches for CUDA code as part of an HPX asynchronous execution graph. Using this approach we can combine all remotely and locally available accelerator cards in a cluster and exploit their full performance. Overhead measurements show that integrating the asynchronous operations into the HPX execution graph imposes no additional computational overhead and significantly eases the orchestration of coordinated and concurrent work on the main cores and the GPU devices used.
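For readers unfamiliar with the underlying mechanism, the sketch below shows the plain CUDA runtime calls that such an integration wraps: each operation is merely enqueued on a stream and returns immediately, so CPU work can overlap with the GPU. This is not the HPX API from the paper, which instead attaches HPX futures to these operations.

```cpp
// Plain CUDA runtime sketch (compile with nvcc); not the HPX integration.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(double* x, int n, double a) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) x[i] *= a;
}

int main() {
  const int n = 1 << 20;
  double* host = nullptr;
  // Pinned host memory, so the copies below can actually run asynchronously.
  cudaMallocHost(reinterpret_cast<void**>(&host), n * sizeof(double));
  for (int i = 0; i < n; ++i) host[i] = 1.0;

  double* dev = nullptr;
  cudaMalloc(reinterpret_cast<void**>(&dev), n * sizeof(double));
  cudaStream_t stream;
  cudaStreamCreate(&stream);

  // Each call only enqueues work on the stream and returns immediately.
  cudaMemcpyAsync(dev, host, n * sizeof(double), cudaMemcpyHostToDevice, stream);
  scale<<<(n + 255) / 256, 256, 0, stream>>>(dev, n, 2.0);
  cudaMemcpyAsync(host, dev, n * sizeof(double), cudaMemcpyDeviceToHost, stream);

  // ... independent CPU work (e.g. other runtime tasks) can run here ...

  cudaStreamSynchronize(stream);  // an HPX-style wrapper would resolve a future here
  std::printf("host[0] = %f\n", host[0]);

  cudaFree(dev);
  cudaFreeHost(host);
  cudaStreamDestroy(stream);
  return 0;
}
```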

Automatic Generation of High-Order Finite-Difference Code with Temporal Blocking for Extreme-Scale Many-Core Systems

Hideyuki Tanaka1, Youhei Ishihara2, Ryo Sakamoto3, Takashi Nakamura3, Yasuyuki Kimura1, Keigo Nitadori4, Miyuki Tsubouchi4, Jun Makino5
ExaScaler Inc1, Kyoto University2, PEZY Computing3, RIKEN Center for Computational Science (R-CCS)4, Kobe University5.
Abstract

In this paper we describe the basic idea, implementation, and achieved performance of our DSL for stencil computation, Formura, on systems based on the PEZY-SC2 many-core processor. Formura generates, from a high-level description of the differential equation and a simple description of the finite-difference stencil, the entire simulation code, with MPI parallelization, overlapped communication and calculation, advanced temporal blocking, and parallelization for many-core processors. The achieved performance is 4.78 PF, or 21.5% of the theoretical peak performance, for an explicit scheme for compressible CFD with fourth-order accuracy in space and third-order accuracy in time. For a slightly modified implementation of the same scheme, efficiency was slightly lower (17.5%) but the actual calculation time per timestep was 25% shorter. Temporal blocking improved the performance by up to 70%. Even though the B/F number of PEZY-SC2 is low, around 0.02, we have achieved efficiency comparable to that of highly optimized CFD codes on machines with much higher memory bandwidth, such as the K computer. We have demonstrated that automatic generation of code with temporal blocking is a very effective way to make use of very large-scale machines with low memory bandwidth for large-scale CFD calculations.
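For readers unfamiliar with temporal blocking, the hand-written sketch below applies the idea to a 1-D, 3-point stencil with fixed end values: each spatial block is copied together with a halo of width T and advanced T time steps while it is cache-resident, at the cost of some redundant halo computation. It is purely illustrative and bears no relation to the 3-D, MPI-parallel code Formura generates.

```cpp
// Minimal temporal-blocking sketch for a 1-D, 3-point stencil (illustrative
// only). u[0] and u[n-1] are fixed boundary values.
#include <algorithm>
#include <vector>

void blocked_sweeps(std::vector<double>& u, int T, int block_size) {
  const int n = static_cast<int>(u.size());
  std::vector<double> next(u);

  for (int b_lo = 1; b_lo < n - 1; b_lo += block_size) {
    const int b_hi = std::min(b_lo + block_size, n - 1);

    // Working copy: the block plus a halo of width T on each side.
    const int lo = std::max(b_lo - T, 0);
    const int hi = std::min(b_hi + T, n);
    std::vector<double> work(u.begin() + lo, u.begin() + hi), tmp(work);

    for (int t = 1; t <= T; ++t) {
      // The region that is still exact shrinks by one point per step,
      // except where it touches a fixed physical boundary.
      const int v_lo = (lo == 0) ? 1 : lo + t;
      const int v_hi = (hi == n) ? n - 1 : hi - t;
      for (int i = v_lo; i < v_hi; ++i) {
        const int j = i - lo;  // index into the working copy
        tmp[j] = (work[j - 1] + work[j] + work[j + 1]) / 3.0;
      }
      std::swap(work, tmp);
    }

    // Only the block interior is written back; it is exact after T steps.
    for (int i = b_lo; i < b_hi; ++i) next[i] = work[i - lo];
  }
  u.swap(next);
}
```

Because each block reads only the original array and writes a disjoint slice of the output, all blocks are independent and can be processed in parallel, and each block's data is streamed through memory only once per T steps, which is why the technique pays off on low-B/F machines.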

12:30 - 2:00

Lunch

Asynchronous Execution of Python Code on Task Based Runtime Systems

Mohammad Tohid1, Bibek Wagle1, Shahrzad Shirzad1, Patrick Diehl1, Adrian Serio1, Alireza Kheirkhahan1, Parsa Amini1, Katy Williams2, Kevin Huck3, Steven Brandt1, Hartmut Kaiser1
Louisiana State University1, University of Arizona2, University of Oregon3.
Abstract

Despite advancements in the areas of parallel and distributed computing, the complexity of programming on High Performance Computing (HPC) resources has deterred many domain experts, especially in the areas of machine learning and artificial intelligence (AI), from utilizing the performance benefits of such systems. Researchers and scientists favor high-productivity languages to avoid the inconvenience of programming in low-level languages and the costs of acquiring the necessary skills required for programming at this level. In recent years, Python, with the support of linear algebra libraries like NumPy, has gained popularity despite facing limitations which prevent this code from distributed runs. Here we present a solution which maintains both high-level programming abstractions as well as parallel and distributed efficiency. Phylanx is an asynchronous array processing toolkit which transforms Python and NumPy operations into code which can be executed in parallel on HPC resources by mapping Python and NumPy functions and variables into a dependency tree executed by HPX, a general purpose, parallel, task-based runtime system written in C++. Phylanx additionally provides introspection and visualization capabilities for debugging and performance analysis. We have tested the foundations of our approach by comparing our implementation of widely used machine learning algorithms to accepted NumPy standards.
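The execution model sketched in the abstract, array operations turned into a dependency tree of tasks, can be illustrated with plain standard C++ futures; this is a conceptual sketch only, not Phylanx or HPX code.

```cpp
// Conceptual sketch with std::async; Phylanx builds the analogous tree
// automatically from Python/NumPy expressions and runs it on HPX.
#include <cstdio>
#include <future>
#include <vector>

std::vector<double> add(const std::vector<double>& a,
                        const std::vector<double>& b) {
  std::vector<double> r(a.size());
  for (std::size_t i = 0; i < a.size(); ++i) r[i] = a[i] + b[i];
  return r;
}

int main() {
  std::vector<double> x(1000, 1.0), y(1000, 2.0), z(1000, 3.0);

  // Two independent "array operations" may run concurrently...
  auto f1 = std::async(std::launch::async, add, std::cref(x), std::cref(y));
  auto f2 = std::async(std::launch::async, add, std::cref(y), std::cref(z));

  // ...and a dependent node of the tree runs once both inputs are ready.
  auto sum = add(f1.get(), f2.get());
  std::printf("sum[0] = %f\n", sum[0]);  // (1+2) + (2+3) = 8
  return 0;
}
```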

A Unified Runtime for PGAS and Event-Driven Programming

Sri Raj Paul1, Kun Chen2, Akihiro Hayashi1, Max Grossman1, Vivek Sarkar2
Rice University1, Georgia Institute of Technology2.
Abstract

A well-recognized characteristic of extreme scale systems is that their computation bandwidths far exceed their communication bandwidths. PGAS runtimes have proven to be effective in enabling efficient use of communication bandwidth, due to their efficient support for short nonblocking one-sided messages. However, they were not designed for exploiting the massive levels of intra-node parallelism found in extreme scale systems.

In this paper, we explore the premise that event-driven intra-node runtimes could be promising candidates for integration with PGAS runtimes, due to their ability to overlap computation with long-latency operations. For this, we use OpenSHMEM as an exemplar of PGAS runtimes, and Node.js as an exemplar of event-driven runtimes. While Node.js may seem an unusual choice for high-performance computing, its prominent role as an event-driven runtime for server-side Javascript is a good match for what we need for optimizing the use of communication bandwidth. Availability of excess computation bandwidth in extreme scale systems can help mitigate the local computation overheads of Javascript. Further, since the Node.js environment is single-threaded, we get an automatic guarantee that no data races will occur on Javascript objects, a guarantee that we cannot get with C++.

Our integration of OpenSHMEM and Node.js makes it possible to expose nonblocking PGAS operations as JavaScript constructs that can seamlessly be used with the JavaScript asynchronous mechanisms, such as those based on promises. We believe that the exploration and preliminary results in this paper offer a new direction for future research on building high productivity runtimes for extreme scale systems.
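As background for readers new to PGAS, the sketch below is a plain OpenSHMEM program (standard C API calls in a C++ source file) issuing the kind of nonblocking one-sided put the paper exposes; in the paper's integration, a JavaScript promise would resolve where shmem_quiet completes the operation. It is not the Node.js binding itself.

```cpp
// Standard OpenSHMEM nonblocking put; compile with the SHMEM compiler wrapper.
#include <shmem.h>
#include <cstdio>

int main() {
  shmem_init();
  const int me = shmem_my_pe();
  const int npes = shmem_n_pes();

  // Symmetric heap allocation: remotely accessible from every PE.
  double* buf = static_cast<double*>(shmem_malloc(sizeof(double)));
  *buf = -1.0;
  shmem_barrier_all();

  // Nonblocking put into the next PE's buffer: the call only initiates the
  // transfer and returns immediately, so local work can overlap with it.
  double value = 10.0 * me;
  shmem_double_put_nbi(buf, &value, 1, (me + 1) % npes);

  // ... overlap independent local computation here ...

  shmem_quiet();         // all puts issued by this PE are now complete
  shmem_barrier_all();   // and every PE may safely read what it received
  std::printf("PE %d received %.1f\n", me, *buf);

  shmem_free(buf);
  shmem_finalize();
  return 0;
}
```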

3:00 - 3:30

Break

Portable and Reusable Deep Learning Infrastructure with Containers to Accelerate Cancer Studies

George Zaki
Frederick National Laboratory for Cancer Research.
Abstract

Advanced programming models, domain-specific languages, and scripting toolkits have the potential to greatly accelerate the adoption of high performance computing. These complex software systems, however, are often difficult to install and maintain, especially on exotic high-end systems. We consider deep learning workflows used on petascale systems and their redeployment on research clusters using containers. Containers are used to deploy the MPI-based infrastructure, but challenges in efficiency, usability, and complexity must be overcome. In this work, we address these challenges through enhancements to a unified workflow system that manages interaction with the container abstraction, the cluster scheduler, and the programming tools. We also report results from running the application on our system, harnessing 298 TFLOPS (single precision).

Analysis of Explicit vs. Implicit Tasking in OpenMP using Kripke

Charles C. Jin, Muthu Baskaran
Reservoir Labs Inc
Abstract

Dynamic task-based parallelism has become a widely accepted paradigm in the quest for exascale computing. In this work, we deliver a non-trivial demonstration of the advantages of explicit over implicit tasking in OpenMP 4.5 in terms of both expressiveness and performance. We target the Kripke benchmark, a mini-application used to test the performance of discrete particle codes, and find that the dependence structure of the core “sweep” kernel is well suited for dynamic task-based systems. Our results show that explicit tasking delivers a 31.7% and 8.1% speedup over a pure implicit implementation for a small and a large problem, respectively, while a hybrid variant underperforms the explicit variant by 13.1% and 5.8%, respectively.
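As a toy contrast between the two styles compared in the paper (this is not Kripke's sweep code), the snippet below shows a bulk-synchronous, implicitly parallel version of a two-stage computation next to an OpenMP 4.5 explicit-task version whose per-element depend clauses let the second stage start as soon as each element of the first is ready:

```cpp
// Implicit worksharing vs. explicit OpenMP 4.5 tasks with dependences.
// Compile with an OpenMP-capable compiler, e.g. g++ -fopenmp.
#include <cstdio>

int main() {
  const int N = 8;
  double stage1[N], stage2[N];

  // Implicit: two bulk-synchronous loops; all of stage1 finishes before
  // any element of stage2 starts.
  #pragma omp parallel for
  for (int i = 0; i < N; ++i) stage1[i] = 1.0 * i;
  #pragma omp parallel for
  for (int i = 0; i < N; ++i) stage2[i] = stage1[i] + 1.0;

  // Explicit: per-element dependences expose pipeline parallelism across
  // the two stages, the kind of structure a sweep kernel exhibits.
  #pragma omp parallel
  #pragma omp single
  for (int i = 0; i < N; ++i) {
    #pragma omp task depend(out: stage1[i]) firstprivate(i)
    stage1[i] = 2.0 * i;
    #pragma omp task depend(in: stage1[i]) firstprivate(i)
    stage2[i] = stage1[i] + 1.0;
  }  // the barrier at the end of single/parallel waits for all tasks

  std::printf("stage2[%d] = %.1f\n", N - 1, stage2[N - 1]);
  return 0;
}
```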

Detailed Panel Topic

For many years, HPC systems have consisted of predominantly homogeneous arrangements of cheap commodity components. More recently, power limitations are leading to stricter energy-efficiency demands, and new applications and use cases, such as AI and data analytics, are driving innovations in hardware targeted at HPC systems and large datacenters. Accelerators like GPGPUs take over some of the traditional CPU workload but have different design goals and capabilities. High-bandwidth memory and non-volatile memory change the expectations and fundamental concepts that underpin traditional programming languages. Compute-in-network, network-attached memory/storage, and FPGAs combine functionalities that were traditionally separate, which our traditional programming models struggle to represent well. The driving forces behind these hardware innovations and trends seem set to continue as HPC pushes towards exascale and beyond. What is being done to help HPC and AI programmers navigate this exciting and changing landscape?

Panel Members

Richard Graham, Senior Solutions Architect, Mellanox Technologies

Adrian Jackson, Research Architect, EPCC, The University of Edinburgh, UK

Chris J. Newburn (CJ), Principal HPC Architect, NVIDIA

Ashish Sirasao, Fellow Engineer, Software and IP team, Xilinx Inc