Partitioned Global Address Space (PGAS) languages are growing in
popularity because they provide a shared-memory programming model
over distributed-memory machines. In this model, data can be stored
in global arrays and manipulated by individual compute threads,
which shows promise for expressing algorithms with irregular
computation and communication patterns. It is unlikely that existing
MPI applications will be rewritten in the emerging PGAS languages in
the near future; it is more likely that parts of these applications
will be incrementally converted to the newer models. This requires
that the underlying system software be able to support multiple
programming models.
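The global-array model described above can be illustrated with a minimal UPC sketch (an illustrative example only, not code from the runtime described here): a `shared` array is distributed across all threads, yet any thread can address any element.

```c
#include <upc.h>      /* UPC extensions to C; requires a UPC compiler such as upcc */
#include <stdio.h>

#define N 1024

/* A global array distributed cyclically across all threads;
   every thread can read or write any element. */
shared int data[N];

int main(void) {
    int i;

    /* Each thread touches only the elements it has affinity to,
       giving a simple owner-computes work distribution. */
    upc_forall (i = 0; i < N; i++; &data[i])
        data[i] = i;

    upc_barrier;   /* wait until all threads have initialized their parts */

    if (MYTHREAD == 0)
        printf("data[N-1] = %d\n", data[N - 1]);
    return 0;
}
```

Because every element is globally addressable, irregular access patterns (e.g., following edges of a distributed graph) can be written as ordinary array indexing rather than explicit message passing.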
In this work, we propose a Unified Runtime that supports multiple
programming models. Currently, our high-performance runtime supports
both MPI and UPC (Unified Parallel C). It provides built-in support
for load balancing and work stealing through a multi-endpoint design.
We have also proposed features such as 'UPC Queues' for expressing
irregular applications in UPC in a high-performance manner.
M. Luo, J. Jose, S. Sur and D. K. Panda, Multi-threaded UPC Runtime with Network
Endpoints: Design Alternatives and Evaluation on Multi-core Architectures, Int'l
Conference on High Performance Computing (HiPC '11), Dec. 2011.
J. Jose, S. Potluri, M. Luo, S. Sur and D. K. Panda, UPC Queues for Scalable
Graph Traversals: Design and Evaluation on InfiniBand Clusters, Fifth Conference
on Partitioned Global Address Space Programming Model (PGAS '11), Oct. 2011.
J. Jose, M. Luo, S. Sur and D. K. Panda, Unifying UPC and MPI Runtimes:
Experience with MVAPICH, Fourth Conference on Partitioned Global Address
Space Programming Model (PGAS '10), Oct. 2010.