...
No. It’s important to distinguish between oversubscription and overload. Oversubscription means that the aggregate bandwidth of all of the host links exceeds the aggregate bandwidth of the network core. In an oversubscribed system, if every host were to transmit at full bandwidth to destinations across the datacenter, the core network could become overloaded. However, overload virtually never happens in practice, because hosts typically use only a small fraction of their link bandwidth and some of the traffic targets neighbors attached to the same top-of-rack switch. As a result, even with oversubscription, core networks tend to run at relatively low utilization. At the same time, it would not be cost-effective to underprovision core networks to the point where they cannot keep up with actual loads, because that would leave the more expensive host machines under-utilized.
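For a concrete (and entirely hypothetical) illustration of the definition, the sketch below computes the oversubscription ratio for an invented rack configuration; the link counts and speeds are made up purely to show the arithmetic:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical rack: 40 hosts with 25 Gbps links, connected to the
     * core through 8 x 100 Gbps uplinks. These numbers are invented
     * solely to illustrate the definition of oversubscription. */
    double host_gbps = 40 * 25.0;   /* 1000 Gbps of aggregate host bandwidth */
    double core_gbps = 8 * 100.0;   /*  800 Gbps of aggregate core bandwidth */

    /* A ratio greater than 1 means that if every host transmitted at full
     * rate to other racks, the core links could not keep up. */
    printf("oversubscription ratio: %.2f:1\n", host_gbps / core_gbps);
    return 0;
}
```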
Does Homa have problems with excessive buffer usage?
No; Homa uses less buffer space than TCP. The claim that Homa overloads buffers, resulting in dropped packets and poor performance, has been made by a few recent papers, most notably Aeolus. However, these papers based their claims on inaccurate assumptions about how buffers are managed in modern switches. For example, the Aeolus paper assumes that switch buffer space is statically divided among egress ports, whereas in fact switches provide shared pools of buffers, so they can handle brief spikes at a particular port.
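The sketch below illustrates the distinction; it is not modeled on any particular switch, and the dynamic-threshold policy and constants are assumptions chosen purely for illustration:

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_PORTS    32
#define TOTAL_CELLS  100000   /* total packet-buffer cells in the switch */

/* Static partitioning (the model assumed by the Aeolus paper): each egress
 * port may use only its fixed 1/NUM_PORTS slice of the buffer. */
static bool admit_static(int port_used)
{
    return port_used < TOTAL_CELLS / NUM_PORTS;
}

/* Shared pool with a dynamic threshold (closer in spirit to how modern
 * switches behave): a port may keep buffering as long as its usage stays
 * below a multiple (alpha) of the buffer space that is still free, so a
 * single port can absorb a brief burst far larger than its "fair" slice. */
static bool admit_shared(int port_used, int total_used, double alpha)
{
    int free_cells = TOTAL_CELLS - total_used;
    return port_used < alpha * free_cells;
}

int main(void)
{
    /* One port bursting while the rest of the switch is nearly idle. */
    int port_used = 10000, total_used = 12000;
    printf("static partition: %s\n", admit_static(port_used) ? "accept" : "drop");
    printf("shared pool:      %s\n",
           admit_shared(port_used, total_used, 2.0) ? "accept" : "drop");
    return 0;
}
```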
Homa is arguably optimal in its use of buffers. In order to achieve the best performance for a new message in an unloaded network, a sender must be able to unilaterally transmit enough data to cover the time it takes to get a packet to the receiver, process that packet in software, and return some sort of signal back to the sender. This is what Homa does; it calls this unscheduled data and uses the term rttBytes to refer to the amount of unscheduled data that may be sent for each message. If a sender sends less than rttBytes before hearing back from the receiver, then network bandwidth will be wasted. Buffering will occur at the receiver’s downlink if several new messages begin transmitting around the same time. With Homa, the receiver detects the buffering as soon as it receives one packet from each message, and it immediately takes steps to reduce the buffering by withholding grants; again, this is optimal.
Buffering as described above cannot be avoided without sacrificing performance in the unloaded case; if this is a tradeoff you’re willing to make, Homa can be configured to reduce the amount of unscheduled data sent for new messages. See the Aeolus rebuttal for a discussion of the Aeolus claims, and see this article for a more comprehensive discussion of Homa buffer usage.
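As a rough back-of-the-envelope sketch of how large rttBytes is in practice (this is not Homa’s code, and the link speed and round-trip time below are invented values), rttBytes is approximately the link bandwidth multiplied by the round-trip time, including software processing at the receiver:

```c
#include <stdint.h>
#include <stdio.h>

/* Estimate rttBytes: the amount of unscheduled data a sender can transmit
 * before the first response from the receiver could possibly arrive. */
static uint64_t rtt_bytes(double link_gbps, double rtt_usecs)
{
    double bytes_per_usec = link_gbps * 1e3 / 8.0;   /* Gbps -> bytes/us */
    return (uint64_t)(bytes_per_usec * rtt_usecs);
}

int main(void)
{
    /* Example: a 25 Gbps link and a 10 us round trip (wire time plus
     * software processing) allow roughly 31 KB of unscheduled data. */
    printf("rttBytes ~= %llu bytes\n",
           (unsigned long long)rtt_bytes(25.0, 10.0));
    return 0;
}
```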
Is Homa resilient against dropped packets?
Homa is resilient in that it will detect dropped packets and retransmit them, but Homa assumes that drops are extremely rare. If packets are dropped frequently, Homa will perform poorly. Packet drops caused by corruption are vanishingly rare, so the main thing to worry about is buffer overflow. Fortunately, Homa’s buffer usage is low enough to make buffer overflow extremely unlikely in modern switches.
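The sketch below conveys the flavor of receiver-driven loss recovery; it is not Homa’s implementation, and the structure fields, function names, and timeout value are assumptions made for illustration:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hedged sketch: the receiver tracks how much of each message has arrived;
 * if an expected range stalls for too long, it asks the sender to
 * retransmit just the missing bytes rather than the whole message. */

struct inbound_msg {
    uint64_t id;
    uint32_t length;          /* total message length */
    uint32_t received;        /* contiguous bytes received so far */
    uint32_t granted;         /* bytes the sender is allowed to send */
    uint64_t last_arrival_us; /* time of the most recent data packet */
};

#define RESEND_TIMEOUT_US 100000   /* assumed timeout, far above one RTT */

static void send_resend(uint64_t id, uint32_t offset, uint32_t length)
{
    /* In a real transport this would emit a control packet to the sender. */
    printf("RESEND msg %llu: offset %u, length %u\n",
           (unsigned long long)id, offset, length);
}

/* Called periodically; requests retransmission of stalled ranges. */
static void check_for_losses(struct inbound_msg *msg, uint64_t now_us)
{
    bool stalled = (now_us - msg->last_arrival_us) > RESEND_TIMEOUT_US;
    if (stalled && msg->received < msg->granted)
        send_resend(msg->id, msg->received, msg->granted - msg->received);
}

int main(void)
{
    struct inbound_msg msg = {.id = 42, .length = 500000,
                              .received = 200000, .granted = 260000,
                              .last_arrival_us = 0};
    check_for_losses(&msg, 250000);   /* 250 ms with no new data */
    return 0;
}
```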
How does Homa handle incast?
...