Design and Evaluation of Advanced Message Queuing Protocol (AMQP) over InfiniBand


Message Oriented Middleware (MOM) plays a key role in enterprise data distribution. The strength of MOM is that it enables communication between applications situated on heterogeneous operating systems and networks, allowing developers to bypass the costly process of building explicit connections between these varied systems and networks. The Advanced Message Queuing Protocol (AMQP) has emerged as an open standard for MOM communication, growing out of the need for better messaging integration both within and across enterprise boundaries. InfiniBand promises to be a high-performance network platform for AMQP communication.


  • Study typical messaging intensive/critical workloads. Compare the performance of such workloads on traditional Ethernet with emerging networking technologies such as InfiniBand.
  • Identify key areas for improvement toward a more optimal implementation of the services provided by AMQP.
  • Design and develop benchmarks for AMQP.
  • Design and evaluate native implementation of AMQP with InfiniBand Verbs API.


The following figure shows the general architecture of an AMQP-compliant messaging system. An AMQP messaging system consists of three main components: Publisher(s), Consumer(s), and Broker/Server(s). Each component may be replicated and situated on an independent host. Publishers and Consumers communicate with each other through message queues bound to Exchanges within the Brokers.
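The routing role of an Exchange can be sketched in a few lines. The following is a toy, in-process model of a direct exchange (all names are illustrative, not Qpid API): queues bind to the exchange with a routing key, and a published message is delivered to every queue whose binding key matches.

```python
from collections import defaultdict

class DirectExchange:
    """Toy model of an AMQP direct exchange inside a Broker:
    queues bind with a routing key; a published message is copied
    into every queue bound with a matching key."""

    def __init__(self):
        self.bindings = defaultdict(list)  # routing key -> bound queues

    def bind(self, queue, routing_key):
        self.bindings[routing_key].append(queue)

    def publish(self, routing_key, message):
        for queue in self.bindings[routing_key]:
            queue.append(message)

# A Publisher and a Consumer never talk directly: the Consumer's queue
# is bound to the exchange, and the Publisher sends to the exchange
# with a matching routing key.
exchange = DirectExchange()
consumer_queue = []                    # stands in for a broker-side queue
exchange.bind(consumer_queue, "quotes")
exchange.publish("quotes", b"MSFT 29.95")
exchange.publish("news", b"ignored")   # no matching binding -> dropped
print(consumer_queue)                  # [b'MSFT 29.95']
```

A fanout or topic exchange differs only in how `publish` selects the bound queues; the Publisher/queue decoupling shown here is the same.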


Sample Results

In our tests, we use a cluster of host nodes, each with quad dual-core Intel Xeon processors. Each node has 6 GB RAM and is equipped with a 1 Gigabit Ethernet Network Interface Controller (NIC) as well as an InfiniBand Host Channel Adapter (HCA). The IB HCAs are DDR ConnectX, using Open Fabrics Enterprise Distribution (OFED) 1.3 drivers. The operating system on each node is Red Hat Enterprise Linux 4U4. Our Message Oriented Middleware is Apache Qpid Release M3 Alpha, an AMQP-compliant, open source distribution.

Design and Evaluation of Benchmarks for AMQP over InfiniBand

Our AMQP benchmarks are modeled after the OSU Micro-Benchmarks for MPI. One thing to note is that, unlike the OSU benchmarks, our benchmarks do not assume a direct, single-link, point-to-point network connection. Within AMQP, a message must always traverse the Broker host en route to the destination Consumer. This incorporates at least two network links into any message's travel path.
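The two-hop path can be illustrated with an in-process sketch (threads and queues standing in for hosts and network links; the real benchmarks of course span separate machines, where one-way timing additionally requires synchronized clocks or round-trip halving):

```python
import queue
import threading
import time

def broker_relay(inbox, outbox):
    # Hop 2: the "broker" forwards each message toward the consumer.
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown sentinel
            break
        outbox.put(msg)

pub_to_broker = queue.Queue()    # stands in for link 1: Publisher -> Broker
broker_to_con = queue.Queue()    # stands in for link 2: Broker -> Consumer
t = threading.Thread(target=broker_relay, args=(pub_to_broker, broker_to_con))
t.start()

# The publisher timestamps each message; the consumer measures one-way
# time spanning both hops -- the quantity our speed benchmarks report.
samples = []
for _ in range(1000):
    pub_to_broker.put(time.perf_counter())   # hop 1
    sent = broker_to_con.get()               # hop 2
    samples.append(time.perf_counter() - sent)

pub_to_broker.put(None)
t.join()

avg_latency_us = 1e6 * sum(samples) / len(samples)
print(f"avg two-hop latency: {avg_latency_us:.1f} us")
```

Because the Broker sits on the path, any overhead it adds (queueing, exchange routing) is counted in every latency sample, unlike in a direct point-to-point test.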

Three variables inherent in any AMQP operation are the number of Publishers, the number of Consumers, and the Exchange type. Each of our benchmarks exercises one or more of these variables. Furthermore, each benchmark measures performance along three dimensions: data capacity, message rate, and speed. Data capacity is the amount of raw data, in MegaBytes (MB), that may be transmitted per second, irrespective of the number of messages; this is also known as Bandwidth. Message rate is similar to data capacity, but counts the number of discrete messages transmitted per second; it is also known as Throughput. Speed is the average time one message takes to travel from the Publisher to the Consumer; this measure is commonly referred to as Latency.
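The three metrics can be derived from the raw output of a single run, as in this simplified sketch (it assumes messages are timed back to back, so elapsed time divided by message count approximates per-message latency; pipelined streaming tests would report latency separately):

```python
def benchmark_metrics(n_messages, message_bytes, elapsed_s):
    """Derive the three reported metrics from one benchmark run."""
    data_capacity_mb_s = n_messages * message_bytes / (1e6 * elapsed_s)  # Bandwidth (MB/s)
    message_rate = n_messages / elapsed_s                                # Throughput (msg/s)
    latency_s = elapsed_s / n_messages                                   # avg time per message
    return data_capacity_mb_s, message_rate, latency_s

# e.g. 10,000 messages of 4 KB each, transmitted in 2 seconds:
bw, rate, lat = benchmark_metrics(10_000, 4096, 2.0)
print(f"{bw:.2f} MB/s, {rate:.0f} msg/s, {lat * 1e6:.0f} us/msg")
# -> 20.48 MB/s, 5000 msg/s, 200 us/msg
```

Note how bandwidth and message rate diverge as message size changes: halving the message size at a fixed message rate halves the data capacity, which is why the benchmarks report both.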

Individuals interested in obtaining the Qpid C++ code for our benchmarks may contact Professor D. K. Panda.


Here we show the speed achieved for varying message sizes using the Direct Exchange - Single Publisher, Single Consumer (DE-SPSC) benchmark over IPoIB, 1 GigE, and SDP, respectively. As we can see, IPoIB achieves the best latency for small messages, while SDP achieves better latency for larger messages. SDP requires a larger connection setup time than IPoIB; as a result, the setup time dominates the total data transfer time for smaller messages, resulting in higher latencies when we use SDP.

Conferences & Workshops (1)


Technical Reports (1)

1 G. Marsh, A. Sampat, S. Potluri, and D. K. Panda, "Scaling Advanced Message Queuing Protocol (AMQP) Architecture with Broker Federation and InfiniBand," OSU Technical Report OSU-CISRC-5/09-TR17