The Homa Transport Protocol

Homa is a new transport protocol for modern datacenters. It was designed in clean-slate fashion to meet the needs of large-scale datacenter applications, and its design is quite different from TCP's. Although TCP has a long and illustrious history, virtually every aspect of its design is wrong for the datacenter. As a result, TCP suffers from serious performance problems and limits the scalability of applications. Homa differs from TCP in every major aspect of its design, and this combination of choices results in many benefits:

  • Homa improves performance for all message sizes and workloads. The improvements are most visible for short messages running in loaded systems, where Homa reduces tail latency by an order of magnitude or more.

  • Homa eliminates the high overheads of per-connection state.

  • Homa eliminates core congestion in datacenter networks.

  • Homa enables more powerful load-balancing techniques, both in applications and in the implementation of the protocol.

  • Homa’s message-based API is a more natural one for datacenter applications than TCP’s streaming approach.
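
To make the last point concrete, here is a minimal client sketch using Homa's message-based API, as exposed by the homa.h header that ships with the Linux kernel module (see "Using Homa" below). The homa_send signature shown follows the ATC '21 paper; the interface has evolved across releases, so treat this as illustrative and consult the current homa.h. The destination address and port are invented for this example:

    /*
     * Hypothetical client sketch: not taken from the Homa sources. It
     * follows the homa.h interface described in the ATC '21 paper
     * (SOCK_DGRAM sockets with protocol IPPROTO_HOMA, plus homa_send);
     * signatures in the current HomaModule may differ.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include "homa.h"        /* defines IPPROTO_HOMA and homa_send() */

    int main(void) {
        /* One Homa socket can exchange messages with any number of
         * peers: there is no connect() and no per-connection state. */
        int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_HOMA);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* Destination address and port are made up for illustration. */
        struct sockaddr_in dest;
        memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_port = htons(4000);
        inet_pton(AF_INET, "10.0.0.2", &dest.sin_addr);

        /* Send one complete request message. Homa preserves message
         * boundaries (no byte stream); id identifies the RPC so the
         * response can later be matched to this request. */
        char request[] = "ping";
        uint64_t id = 0;
        if (homa_send(fd, request, sizeof(request),
                      (struct sockaddr *)&dest, sizeof(dest), &id) < 0) {
            perror("homa_send");
            return 1;
        }
        printf("sent request as RPC %llu\n", (unsigned long long)id);
        return 0;
    }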

Homa is not just a theoretical abstraction; a production-quality implementation is available as a dynamically loadable kernel module for Linux, and there is preliminary support for Homa in the gRPC remote procedure call framework.

This Wiki is managed by John Ousterhout (ouster at cs dot stanford dot edu). I’m happy to receive comments and criticisms about anything related to Homa or this Wiki. I’m also interested in learning about new material related to Homa that should be referenced here.

Background Information

Here are some introductory materials to help you learn more about what Homa is, how it differs from TCP, and why it is superior to TCP (or any other transport protocol for the datacenter):

  • Homa: A Receiver-Driven Low-Latency Transport Protocol Using Network Priorities (SIGCOMM 2018). This is the first paper on Homa; it motivates the protocol and provides initial performance measurements using simulations and an implementation in the RAMCloud storage system.

  • A Linux Kernel Implementation of the Homa Transport Protocol (USENIX ATC 2021). This paper describes the Linux kernel implementation of Homa and reconfirms the benefits reported in the SIGCOMM paper. It also argues that software implementations of transport protocols no longer make sense; in the future, transport protocols will need to be implemented in NIC hardware.

  • A synopsis of the Homa protocol, as implemented in the Linux kernel driver.

  • Homa implements bipartite matchings in an efficient and effective way; read this article for details. (A simplified sketch of the mechanism appears after this list.)

  • It’s Time to Replace TCP in the Datacenter. This position paper argues that every major aspect of TCP’s design is wrong for the datacenter, and that it needs to be replaced (within the datacenter) with a protocol like Homa. The paper has generated follow-up discussion (if you know of other comments on the paper, please let me know so I can add pointers to the list below):

    • Ivan Pepelnjak has criticized the paper in a blog post.

    • I have responded to Ivan’s posting here.

  • Buffer Usage in Homa. Homa’s usage of switch buffer space is controversial and potentially problematic; this article discusses the topic in detail.

  • A newsletter article by Larry Peterson advocating for RPC transport protocols.
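
For a rough sense of the matching mechanism described in the bipartite-matchings article above: conceptually, each receiver grants to a small number (its "overcommitment" level) of its shortest incoming messages, and each sender transmits the shortest message for which it currently holds a grant (SRPT on both sides). The toy model below is not taken from the Homa sources; it is a centralized caricature, with a made-up traffic matrix, of what Homa computes in distributed fashion:

    /*
     * Toy, centralized model of the distributed matching Homa
     * approximates (illustration only; not from the Homa sources).
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define NSENDERS    4
    #define NRECEIVERS  4
    #define OVERCOMMIT  2    /* grants each receiver keeps outstanding */

    /* bytes_left[s][r]: remaining bytes of the message from sender s
     * to receiver r; 0 means no active message (values invented). */
    static int bytes_left[NSENDERS][NRECEIVERS] = {
        {100,   0, 300,   0},
        {  0, 200,   0, 400},
        {500, 600,   0,   0},
        {  0,   0, 700, 800},
    };

    int main(void) {
        bool granted[NSENDERS][NRECEIVERS] = {{false}};

        /* Receiver side: grant to the OVERCOMMIT shortest active
         * messages inbound to each receiver (SRPT). */
        for (int r = 0; r < NRECEIVERS; r++) {
            for (int g = 0; g < OVERCOMMIT; g++) {
                int best = -1;
                for (int s = 0; s < NSENDERS; s++) {
                    if (bytes_left[s][r] == 0 || granted[s][r])
                        continue;
                    if (best < 0 || bytes_left[s][r] < bytes_left[best][r])
                        best = s;
                }
                if (best >= 0)
                    granted[best][r] = true;
            }
        }

        /* Sender side: transmit the shortest granted message (SRPT). */
        for (int s = 0; s < NSENDERS; s++) {
            int best = -1;
            for (int r = 0; r < NRECEIVERS; r++) {
                if (!granted[s][r])
                    continue;
                if (best < 0 || bytes_left[s][r] < bytes_left[s][best])
                    best = r;
            }
            if (best >= 0)
                printf("sender %d -> receiver %d (%d bytes left)\n",
                       s, best, bytes_left[s][best]);
        }
        return 0;
    }

Note that overcommitment allows two senders to pick the same receiver (as senders 0 and 2 do with this traffic matrix), which can waste some downlink bandwidth; the benefit is that a receiver's link does not sit idle when one of its granted senders is busy transmitting elsewhere.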

Using Homa

Here are resources for anyone who would like to use Homa in applications.

  • HomaModule: an open source kernel module for Linux that implements Homa.

  • grpc_homa: adds support for Homa to the gRPC framework. As of June 2022, C++ support is operational and Java support is under development.

  • go-homa: a Go client for the Homa kernel module.

Homa Simulators

There are two simulators for Homa available in open-source form. However, neither of these correctly simulates the entire Homa protocol, and in particular, neither is appropriate for incast measurements.

  • The OMNet++ simulation used for measurements in the original Homa SIGCOMM paper. This simulator was written by Behnam Montazeri. It has undergone extensive validation, and we have a high level of confidence in its results. However, it was developed primarily for designing and evaluating Homa’s congestion control mechanism, so it does not simulate all aspects of the Homa protocol. In addition, it is no longer being actively maintained. Here are some of the known issues with this simulator:

    • It assumes infinite buffer space.

    • It does not simulate packet drops or timeouts.

    • It uses the same value of rttBytes for all source-destination pairs (instead of using different values depending on whether the source and destination are in the same rack).

    • It simulates only messages, not RPCs.

    • It does not implement Homa’s incast optimization, so it is not suitable for experiments involving incast.

    • Its implementation of overcommitment is out of date and no longer matches the Linux kernel implementation.

  • A newer NS-3 simulation developed by Serhat Arslan. Serhat hopes it will eventually become a reference standard for Homa. However, this simulator is missing several Homa features (such as Homa’s incast optimization) and, as of October 2022, it has multiple open bugs. For full details, see the open issues in the GitHub repo. Results from this simulator should not be considered trustworthy.

Several recent papers have presented alternative transport protocols and compared them to Homa; in many cases they claim superior performance. However, none of the papers claiming superior performance has actually compared against a correct and complete implementation of Homa. Instead, they use a version of Homa that has been hobbled in some way. As far as I can tell, Homa is in fact superior to all of these protocols. Specifically:

  • Aeolus claims that Homa’s buffer needs exceed the capacity of modern switches, so that packets will be dropped and Homa will suffer poor performance; it proposes modifications to the protocol in order to reduce buffer usage. However, Aeolus is based on a false assumption about switch buffer architecture; the problem it solves does not exist in reality. Furthermore, the performance of the modified protocol is far inferior to Homa (this is camouflaged by the way performance results are presented). See this critique for more information.

  • PowerTCP is a new congestion control mechanism, which was compared to Homa and several other protocols. The paper claims significant performance improvements over Homa, including a 99% drop in tail latency. However, the Homa simulator used in the paper is at a relatively early stage and has numerous problems, causing it to understate Homa’s performance. As a result, the paper’s measurements and conclusions about Homa should not be considered reliable; in addition, there are good reasons to believe that Homa will actually outperform PowerTCP. See this article for details.

  • dcPIM is a new proposal for datacenter network transport, which applies a bipartite matching technique from switches to schedule network flows across a datacenter. The paper claims that dcPIM produces “near optimal” results both in terms of tail latency and in terms of network utilization, and compares dcPIM to other protocols. The paper chose not to compare directly with Homa; instead, it uses the Aeolus version of Homa, claiming this version represents a “better tradeoff”. The results for the Aeolus version of Homa are predictably poor, and there is good reason to believe Homa will outperform dcPIM (Homa’s matching algorithm is superior to dcPIM’s). See this article for more details.

  • Note: if you are writing a paper that compares against Homa and are interested in feedback to be sure you have implemented and/or measured Homa correctly, I would be more than happy to review your work.

Other Resources