High Performance Runtime for PGAS Models (OpenSHMEM, UPC and CAF)
Partitioned Global Address Space (PGAS) languages are growing in popularity because they provide a shared-memory programming model over distributed-memory machines. In this model, data can be stored in global arrays and manipulated by individual compute threads, which makes PGAS promising for expressing algorithms with irregular computation and communication patterns. Existing MPI applications are unlikely to be rewritten entirely in the emerging PGAS languages in the near future; it is more likely that parts of these applications will be incrementally converted to the newer models. This requires that the underlying system software be able to support multiple programming models simultaneously.
In this research work, we propose a Unified Runtime for supporting multiple programming models. Currently, our high-performance runtime supports the MPI and UPC (Unified Parallel C) programming models. It provides built-in support for load balancing and work stealing through a multi-endpoint design. We have also proposed features such as 'UPC Queues' for expressing irregular applications in UPC in a high-performance manner.
|1||K. Hamidouche, A. Venkatesh, A. Awan, H. Subramoni, and D. K. Panda, CUDA-Aware OpenSHMEM: Extensions and Designs for High Performance OpenSHMEM on GPU Clusters, Parallel Computing (Elsevier)|
Conferences & Workshops (22)
Ph.D. Dissertations (3)
|1||M. Li, Designing High-Performance Remote Memory Access for MPI and PGAS Models with Modern Networking Technologies on Heterogeneous Clusters, Nov 2017|
|2||J. Jose, Designing High Performance and Scalable Unified Communication Runtime (UCR) for HPC and Big Data Middleware, Aug 2014|
|3||S. Potluri, Enabling Efficient Use of MPI and PGAS Programming Models on Heterogeneous Clusters with High Performance Interconnects, May 2014|