Overview

Welcome to the Network-Based Computing (NBC) Laboratory at the Computer Science and Engineering Department!

Many state-of-the-art and exciting research projects are being carried out by various members of the group, including:

The MVAPICH2 software, based on the MPI 3.1 standard, delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE networking technologies. This software is used by more than 2,650 organizations in 83 countries worldwide to extract the potential of these emerging networking technologies for modern systems. As of September 2016, more than 390,000 downloads have taken place from the project's site. The software is also distributed by many vendors as part of their software distributions.
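Because MVAPICH2 implements the standard MPI interface, applications written against MPI 3.1 run on it unchanged. The following is a minimal sketch of such an application; the compiler wrapper (mpicc) and launcher invocation shown in the comments follow common MPI conventions and are illustrative assumptions, not a description of MVAPICH2 internals.

    /*
     * Minimal MPI "hello world" that any MPI 3.1 implementation,
     * including MVAPICH2, can compile and run.
     * Build (assumed wrapper name):  mpicc hello.c -o hello
     * Run (assumed launcher):        mpiexec -n 4 ./hello
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down the MPI runtime */
        return 0;
    }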

Multiple software libraries for Big Data processing and management, designed and developed by the group under the High-Performance Big Data (HiBD) Project, are available. These include: 1) the RDMA-enabled Apache Hadoop software library, providing native RDMA (InfiniBand Verbs and RoCE) support for multiple components (HDFS, MapReduce, and RPC) of Apache Hadoop; 2) the RDMA-enabled Spark software library, providing native RDMA (InfiniBand Verbs and RoCE) support; 3) the RDMA-Memcached software library, providing native RDMA (InfiniBand Verbs and RoCE) support for Memcached as used in Web 2.0 environments; and 4) the OSU High-Performance Big Data Benchmarks (OHB). Sample performance numbers and download instructions for these packages are available from the HiBD project website. These libraries are currently used by more than 190 organizations in 26 countries. More than 17,900 downloads of this software have taken place from the project website alone.
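As an illustration of the set/get access pattern that Memcached deployments rely on, here is a minimal client sketch using the standard libmemcached API. Nothing in it is specific to the RDMA-Memcached package, which the project describes as adding RDMA transport support underneath this kind of workload; the server address and keys are assumed placeholders.

    /*
     * Minimal Memcached set/get cycle with the standard libmemcached
     * client library. The server address (localhost:11211) is an
     * assumed placeholder for illustration only.
     */
    #include <libmemcached/memcached.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        memcached_st *memc = memcached_create(NULL);      /* client handle  */
        memcached_server_add(memc, "localhost", 11211);   /* assumed server */

        const char *key   = "user:42";
        const char *value = "alice";

        /* Store the value with no expiration time and no flags. */
        memcached_return_t rc = memcached_set(memc, key, strlen(key),
                                              value, strlen(value), 0, 0);
        if (rc != MEMCACHED_SUCCESS)
            fprintf(stderr, "set failed: %s\n", memcached_strerror(memc, rc));

        /* Read the value back. */
        size_t value_len;
        uint32_t flags;
        char *fetched = memcached_get(memc, key, strlen(key),
                                      &value_len, &flags, &rc);
        if (rc == MEMCACHED_SUCCESS) {
            printf("got: %.*s\n", (int)value_len, fetched);
            free(fetched);   /* memcached_get returns a malloc'd buffer */
        }

        memcached_free(memc);
        return 0;
    }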


The objectives of the research group are as follows:

  • Proposing new designs for high-performance network-based computing systems by taking advantage of modern networking technologies and computing systems
  • Developing better middleware, APIs, and programming environments so that modern network-based computing applications can be developed and implemented in a scalable and high-performance manner
  • Performing the above research in an integrated manner (by taking systems, networking, and applications into account)
  • Focusing on experimental computer science research

Multiple positions are available in the group:
1) Post-Doc/Research Scientist
2) MPI Software Engineer/Programmer

The projects in the Laboratory are funded by the U.S. National Science Foundation, the U.S. DOE Office of Science, the Ohio Board of Regents, the Ohio Department of Development, Cisco Systems, Cray, Intel, Linux Networx, Mellanox, NVIDIA, QLogic, and Sun Microsystems, with equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, QLogic, and Sun. Other technology partners include TotalView.

Announcements


(NEW) The 5th Annual MVAPICH User Group (MUG) Meeting will take place August 14-16, 2017, in Columbus, Ohio, USA. Click here for more details.

The Third International Workshop on High-Performance Big Data Computing (HPBDC '17) will be held in conjunction with IPDPS '17. Details are available.

Upcoming tutorials: Accelerating Big Data Processing with Hadoop, Spark and Memcached at ASPLOS 2017, CCGrid 2017, PEARC17, and ISCA 2017. Past tutorials were presented at HPCA 2017, Supercomputing 2016, Hot Interconnect 2016, Field Programmable Logic and Applications (FPL '16), and IEEE Cluster 2016.

Upcoming tutorials: MVAPICH2 and MPI-T at PEARC17, and InfiniBand (IB) and High-Speed Ethernet (HSE) at ISC '17. Past tutorials: MPI+PGAS at PPoPP '17, IB and HSE at SC '16, MVAPICH2 optimization and tuning at XSEDE '16, and MPI+PGAS at IEEE Cluster '16 and ICS '16.

The HiBD team provided Big Data computing expertise for neuroscience in an NSF BD-Spoke project.