Welcome to the Network-Based Computing (NBC) Laboratory at the Computer Science and Engineering Department!
The objectives of the research group are as follows:
- Proposing new designs for high-performance network-based computing systems by taking advantage of modern networking technologies and computing systems
- Developing better middleware, APIs, and programming environments so that modern network-based computing applications can be developed and implemented in a scalable and high-performance manner
- Performing the above research in an integrated manner (by taking systems, networking, and applications into account)
- Focusing on experimental computer science research
Many state-of-the-art and exciting research projects are being carried out in the group, including MPI, PGAS (UPC and OpenSHMEM), hybrid MPI+PGAS, accelerators (GPGPUs and Intel MIC), virtualization, power-aware designs, cloud computing, Big Data (Memcached, and Hadoop with HDFS, MapReduce and HBase), and I/O file systems (using SSDs). The MVAPICH2 and MVAPICH2-X (High Performance MPI and MPI+PGAS over InfiniBand, iWARP and RoCE) software packages, developed by the group, are currently being used by more than 2,200 organizations worldwide (in 73 countries). Multiple software packages for Big Data processing and management, designed and developed by the group under the High-Performance Big Data (HiBD) project, are also available. These include: 1) the RDMA-enabled Apache Hadoop software package, providing native RDMA (InfiniBand Verbs and RoCE) support for multiple components (HDFS, MapReduce and RPC) of Apache Hadoop; 2) the RDMA-Memcached software package, providing native RDMA (InfiniBand Verbs and RoCE) support for Memcached as used in Web 2.0 environments; and 3) the OSU High-Performance Big Data Benchmarks (OHB).
The lab is currently supported by funding from the U.S. National Science Foundation, U.S. DOE Office of Science, Ohio Board of Regents, ODOD, Cisco Systems, IBM, Intel, Linux Networx, Mellanox, NVIDIA, QLogic, and Sun/Oracle; and by equipment donations from Advanced Clustering, AMD, Apple, Appro, Chelsio, Dell, Fulcrum Microsystems, Fujitsu, Intel, Mellanox, Microway, NetEffect, QLogic, and Sun/Oracle. Other technology partners include TotalView Technologies. More details can be found here.
MVAPICH2 software is driving NSF's next-generation multi-petaflop Stampede system at TACC. More details
Prof. Panda has received grants from NSF (multiple), IBM Research, Intel, and others.
Prof. Panda has received the Innovator Award from the OSU College of Engineering, in recognition of the success of the MVAPICH project team and the worldwide usage and impact of the MVAPICH software.
A paper at the TACC-Intel Symposium has received the Best Student Paper Award.
Professor Panda has delivered Keynote Talks at the Cluster '12 and HPC Advisory Council (HPC China) conferences.
- Papers at Recent and Upcoming Conferences (SC '12, PGAS '12, EuroMPI '12, Cluster '12, ICPP '12, HotI '12, Proper '12, ICS '12, ISC '12, IPDPS '12, AsHES '12, SMTPS '12, CCGrid '12, TI-HPCS '12, and ISPASS '12)[more]
- "Intra-MIC MPI Communication using MVAPICH2: Early Experience", TI-HPCS '12, Best Student Paper Award
- (NEW) Upcoming Tutorial on InfiniBand and High-speed Ethernet; past tutorials at Cluster '12, HotI '12, and CCGrid '12
- (NEW) Past Keynote Talk at the HPC Advisory Council China Workshop 2012, and talks on MPI and Big Data at the HPC Advisory Council Switzerland Conference
- OpenFabrics Monterey Presentations [more]