Overview
Welcome to the Network-Based Computing (NBC) Laboratory at the Computer Science and Engineering Department!
Many state-of-the-art and exciting research projects are being carried out by various members of the group, including:
- High Performance MPI on InfiniBand Clusters
- High Performance Runtime for PGAS Models (OpenSHMEM, UPC and CAF)
- Programming Model Support for GPUs and Accelerators
- Networking, Storage, and Middleware Support for Cloud Computing and Data Centers
- High Performance Computing with Virtualization
- Scalable Filesystems and I/O
- Performance Evaluation of Cluster Networking and I/O Technologies
- Big Data (Hadoop, Spark, and Memcached)
- High-Performance Deep Learning and Machine Learning
The MVAPICH2 software, based on the MPI 3.1 standard, delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE networking technologies. This software is being used by more than 3,425 organizations in 90 countries worldwide to extract the potential of these emerging networking technologies for modern systems. As of Nov '24, more than 1,840,000 downloads have taken place from this project's site. This software is also being distributed by many vendors as part of their software distributions.
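For readers unfamiliar with MPI, the following is a minimal sketch of a standard MPI program (generic MPI code, not MVAPICH2-specific) of the kind that any MPI 3.1-compliant library, including MVAPICH2, can build and run:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        /* Initialize the MPI runtime (provided here by MVAPICH2 or any
           other standard-compliant MPI library). */
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank    */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count    */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

Such a program is typically compiled with the mpicc wrapper and launched with mpirun (for example, mpirun -np 4 ./hello); the choice of interconnect (InfiniBand, Omni-Path, Ethernet/iWARP, or RoCE) is handled by the MPI library underneath.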
Multiple software libraries for Big Data processing and management, designed and developed by the group under the High-Performance Big Data (HiBD) Project, are available. These include: 1) an RDMA-enabled Apache Hadoop software library providing native RDMA (InfiniBand Verbs and RoCE) support for multiple components of Apache Hadoop (HDFS, MapReduce, and RPC); 2) an RDMA-enabled Spark software library providing native RDMA (InfiniBand Verbs and RoCE) support; 3) an RDMA-Memcached software library providing native RDMA (InfiniBand Verbs and RoCE) support for Memcached as used in Web 2.0 environments; and 4) the OSU High-Performance Big Data Benchmarks (OHB). Sample performance numbers and download instructions for these packages are available from the HiBD project website. These libraries are currently being used by more than 370 organizations in 30 countries worldwide. More than 49,750 downloads of this software have taken place from the project website alone.
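As a point of reference, the standard Memcached client workflow that such a library targets looks like the following sketch, written against the widely used libmemcached C API (the server address, port, and key/value names are illustrative assumptions, not part of the HiBD packages):

    #include <libmemcached/memcached.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Connect to a Memcached server (address and port are assumptions). */
        memcached_st *memc = memcached_create(NULL);
        memcached_server_add(memc, "127.0.0.1", 11211);

        /* Store a key/value pair. */
        const char *key = "greeting";
        const char *value = "hello";
        memcached_set(memc, key, strlen(key), value, strlen(value),
                      (time_t)0, (uint32_t)0);

        /* Fetch it back. */
        size_t value_len;
        uint32_t flags;
        memcached_return_t rc;
        char *fetched = memcached_get(memc, key, strlen(key),
                                      &value_len, &flags, &rc);
        if (rc == MEMCACHED_SUCCESS) {
            printf("%s = %.*s\n", key, (int)value_len, fetched);
            free(fetched);
        }

        memcached_free(memc);
        return 0;
    }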
The objectives of the research group are as follows:
- Proposing new designs for high performance network-based computing systems by taking advantage of modern networking technologies and computing systems
- Developing better middleware, APIs, and programming environments so that modern network-based computing applications can be developed and implemented in a scalable and high-performance manner
- Performing the above research in an integrated manner (by taking systems, networking, and applications into account)
- Focusing on experimental computer science research
Multiple positions are available in the group:
1) Post-Doc/Research Scientist
2) MPI Software Engineer/Programmer
The projects in the Laboratory are funded by the U.S. National Science Foundation, the U.S. DOE Office of Science, the U.S. Department of Defense, the Ohio Board of Regents, the Ohio Department of Development, arm, Cisco Systems, Cray, Intel, Linux Networx, Mellanox, Microsoft, NVIDIA, Pattern Computer, QLogic, ROCKPORT, and Sun Microsystems, and are supported by equipment donations from Advanced Clustering, AMD, Appro, arm, Broadcom, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, Pattern Computer, QLogic, ROCKPORT, and Sun. TotalView is an additional technology partner.
Announcements
(NEW) Join us for the upcoming tutorials at Supercomputing 2024: 1) High-Performance and Smart Networking Technologies for HPC and AI, 2) Principles and Practice of High-Performance Deep Learning/Machine Learning Training and Inference, and 3) Scalable Big Data Processing on High-Performance Computing Systems. More Details
The 12th Annual MVAPICH User Group (MUG) Conference was successfully held in a hybrid manner on August 19-21, 2024, with more than 220 attendees.
Partnership in and contribution to the NSF-awarded $20M AI Institute on Intelligent CyberInfrastructure (ICICLE). Details.