...

This is an open question. The claim has been made by a few recent papers, most notably Aeolus. However, those papers based their claims on inaccurate assumptions about switch buffer management. For example, the Aeolus paper assumes that switch buffer space is statically divided among egress ports, whereas in fact switches provide shared pools of buffers, which allow any single port to absorb brief traffic spikes. See the Aeolus rebuttal for a more detailed discussion of these claims.
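To make the distinction concrete, here is a minimal sketch of the two buffer-management models. All of the numbers (port count, burst size, occupancy) are illustrative, not taken from any real switch:

```python
# A minimal sketch contrasting static per-port partitioning with a shared
# buffer pool. All numbers here are illustrative, not from a real switch.

PORTS = 32
TOTAL_BUFFER_MB = 13.0                       # total packet-buffer capacity
STATIC_SHARE_MB = TOTAL_BUFFER_MB / PORTS    # per-port slice under static partitioning

burst_mb = 1.2             # hypothetical transient burst at one egress port
other_usage_mb = 4.0       # hypothetical occupancy on all other ports combined

# Static partitioning (the Aeolus assumption): the burst must fit in one slice.
fits_static = burst_mb <= STATIC_SHARE_MB

# Shared pool (how switches actually behave): the burst can use any free buffer.
fits_shared = burst_mb <= TOTAL_BUFFER_MB - other_usage_mb

print(f"static slice: {STATIC_SHARE_MB:.2f} MB per port -> burst fits: {fits_static}")
print(f"shared headroom: {TOTAL_BUFFER_MB - other_usage_mb:.2f} MB -> burst fits: {fits_shared}")
```

Under static partitioning the burst overflows its slice; with a shared pool the same burst fits easily, which is why the partitioning assumption leads to overly pessimistic conclusions.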

There has been no problem with buffer overflows in the existing implementations of Homa. For example, the worst-case buffer consumption in benchmarks of the Linux kernel implementation was about 8.5 MB, for a switch with 13 MB capacity (these benchmarks used a 25 Gbps network). Our implementation of Homa in RAMCloud, which used Infiniband networking, also had no problems with buffer overflows, though we did not measure Homa’s actual buffer usage.

Extrapolations to newer 100 Gbps switching chips, such as Broadcom’s Tomahawk-3, suggest there may be challenges for Homa. To see this, take the ratio of total required buffer space to total host downlink bandwidth; this ratio has units of time, and it seems plausible that it will remain constant as network speeds scale. In the Linux kernel implementation, Homa used 8.5 MB of buffer space to drive 40 nodes at 25 Gbps: a ratio of 68 microseconds. Tomahawk-3 switches offer 128 ports at 100 Gbps, for 12.8 Tbps of total bandwidth, and they have 64 MB of buffer space, which is 40 usecs worth. This would appear to be insufficient for Homa. However, with 2:1 oversubscription, only ⅔ of the switch bandwidth is used for downlinks. Assuming little or no buffering on the uplinks (since Homa can use packet spraying), the downlinks get 60 usecs worth of buffering, which is very close to what Homa needs.
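The arithmetic can be checked directly. The following script uses only the figures quoted above; the downlink scenario assumes negligible uplink buffering under 2:1 oversubscription, as stated:

```python
# Buffer capacity expressed as time: how long the buffer takes to fill at
# full line rate. Inputs are the figures quoted in the text above.

def buffer_time_us(buffer_bytes, bandwidth_bps):
    """Return buffer capacity as microseconds of traffic at the given rate."""
    return buffer_bytes * 8 / bandwidth_bps * 1e6

# Linux kernel benchmark: 8.5 MB of buffer drove 40 hosts at 25 Gbps.
homa_needs = buffer_time_us(8.5e6, 40 * 25e9)              # -> 68 usecs

# Tomahawk-3: 64 MB of buffer, 128 ports at 100 Gbps = 12.8 Tbps.
th3_total = buffer_time_us(64e6, 128 * 100e9)              # -> 40 usecs

# With 2:1 oversubscription only 2/3 of the bandwidth serves downlinks;
# if uplinks need negligible buffering, all 64 MB backs the downlinks.
th3_downlinks = buffer_time_us(64e6, (2/3) * 128 * 100e9)  # -> 60 usecs

print(f"Homa needs ~{homa_needs:.0f} us; Tomahawk-3 offers {th3_total:.0f} us "
      f"overall, {th3_downlinks:.0f} us on downlinks with 2:1 oversubscription")
```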

Another consideration is that our measurements indicate that TCP needs at least as much buffer space as Homa. Thus, any switch that works for TCP is likely to work for Homa.

It appears that newer switching chips are increasing their bandwidth faster than their buffer space. Suppose there comes a time when switches no longer have enough buffer space for Homa: will that make Homa useless?

No. To date we have made no attempt to reduce Homa’s buffer usage, but it seems likely that it could be reduced significantly. For example, most buffer usage comes from either unscheduled packets or overcommitment. In the worst case, these could be scaled back to reduce buffer consumption (at some cost in performance). We also have ideas for optimizations that might reduce buffer usage without any performance impact; see the projects page for details.
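As a rough illustration of why scaling these back helps, here is a back-of-the-envelope model. This is our own sketch, not a formula from the Homa papers: it assumes worst-case scheduled buffering on a downlink grows with the overcommitment degree (each granted sender may have up to one RTT of data in flight), with an additional RTT-sized allowance for unscheduled packets. The function name, the unscheduled factor, and the RTT_BYTES value are all hypothetical:

```python
# Rough worst-case model (our assumption, not a measured formula): per
# downlink, scheduled buffering is bounded by overcommit * rtt_bytes, and
# unscheduled packets add roughly unsched_factor * rtt_bytes on top.

def worst_case_buffer_mb(hosts, rtt_bytes, overcommit, unsched_factor):
    """Crude switch-wide bound: every downlink hits its worst case at once."""
    per_downlink = overcommit * rtt_bytes + unsched_factor * rtt_bytes
    return hosts * per_downlink / 1e6

RTT_BYTES = 60_000   # illustrative bytes-in-flight per round trip at 25 Gbps
HOSTS = 40

# Aggressive configuration vs. a scaled-back one:
print(worst_case_buffer_mb(HOSTS, RTT_BYTES, overcommit=8, unsched_factor=2))  # 24.0 MB
print(worst_case_buffer_mb(HOSTS, RTT_BYTES, overcommit=4, unsched_factor=1))  # 12.0 MB
```

Note that the measured 8.5 MB figure above sits well below worst-case bounds like these, since all downlinks rarely hit their worst case simultaneously; this is one reason purely analytical claims about Homa’s buffer usage can mislead.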

Bottom line: it is premature to declare Homa impractical because of its buffer usage when (a) actual implementation experience shows that this is not a problem, and (b) we have ideas for reducing buffer usage in the future should that become necessary. See this article for a more comprehensive discussion of Homa’s buffer usage.

Is Homa resilient against dropped packets?

...