Bonding

Mike Bushroe mbushroe at gmail.com
Mon Sep 10 21:32:37 MST 2018


It is actually probably going to be even worse than that. Consider what
would happen in the setup you are talking about if you sent ten 1 KB
packets over the gigabit link and one 1 KB packet over the bonded 100 Mb
link. Each gigabit packet would take roughly (1024+64)*8*1.25 nanoseconds,
a bit over ten thousand nanoseconds each, or about a hundred thousand for
all ten. Over the same physical network, the bonded 100 Mb link sends a
single 1 KB packet in roughly (1024+64)*8*10 nanoseconds, also about a
hundred thousand nanoseconds. A single packet on the bonded 100 Mb port
ties up the wire for as long as ten packets of the same size on the 1 Gb
port!
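
For reference, here is a minimal Python sketch of that arithmetic. It
assumes a 1024-byte payload plus roughly 64 bytes of framing overhead and
uses raw bit times for each link speed; the figures in the paragraph above
fold a little extra per-bit overhead into the gigabit case, which only
shifts the totals slightly and does not change the conclusion.

# Rough serialization-time arithmetic for the example above.
# Assumes a 1024-byte payload plus ~64 bytes of framing overhead,
# raw bit times (1 ns/bit at 1 Gb/s, 10 ns/bit at 100 Mb/s), and
# ignores inter-frame gaps.

FRAME_BITS = (1024 + 64) * 8            # bits on the wire per packet

def serialization_ns(bits, link_bps):
    """Time to clock `bits` onto a link of link_bps bits/second, in ns."""
    return bits / link_bps * 1e9

gig_ns  = serialization_ns(FRAME_BITS, 1_000_000_000)   # ~8,700 ns each
gig_ten = 10 * gig_ns                                    # ~87,000 ns for ten
slow_ns = serialization_ns(FRAME_BITS, 100_000_000)      # ~87,000 ns for one

print(f"1 Gb/s:   {gig_ns:,.0f} ns per packet, {gig_ten:,.0f} ns for ten")
print(f"100 Mb/s: {slow_ns:,.0f} ns for a single packet of the same size")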

Modern Ethernet switches work around this problem by switching the two
ports through different internal paths. But if the traffic converges at
any point in the network, such as the connection to your high-speed
server, the uplink from one switch to another, or an older hub in the
path, then both the fast and the slow packets run over the same
connection, and it can slow to roughly half speed.

If you have two parallel, independent networks going to two separate NIC
cards on your server, you might get the 10% speed increase. But that means
having two completely separate networks to pay for, install, configure, and
maintain. Or, if you have a single Ethernet switch for your entire network
and two NIC cards on every computer, server, or other device you want to
reach at the 10% higher speed, you can do that too. But if you are not
careful and just do a straightforward bonding, you may cut your network
speed in half rather than increase it!
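
If you do end up bonding on Linux, it is worth at least checking what mode
the bond is in and whether the member links run at the same speed before
trusting it. Here is a minimal sketch, assuming the usual
/proc/net/bonding/<bond> status file exposed by the Linux bonding driver;
the bond name bond0 is just an example.

# Sketch: read the Linux bonding driver's status file and flag
# mismatched member speeds. Assumes the usual /proc/net/bonding/<name>
# layout ("Bonding Mode:", "Slave Interface:", "Speed:" lines); the
# bond name defaults to bond0 here purely as an example.

import re
import sys

def bond_summary(bond="bond0"):
    try:
        text = open(f"/proc/net/bonding/{bond}").read()
    except FileNotFoundError:
        sys.exit(f"{bond}: no such bond (is the bonding driver loaded?)")

    mode   = re.search(r"Bonding Mode:\s*(.+)", text)
    slaves = re.findall(r"Slave Interface:\s*(\S+)", text)
    speeds = re.findall(r"Speed:\s*(\d+)\s*Mbps", text)

    print(f"{bond}: mode = {mode.group(1) if mode else 'unknown'}")
    for iface, speed in zip(slaves, speeds):
        print(f"  {iface}: {speed} Mbps")
    if len(set(speeds)) > 1:
        print("  WARNING: member links run at different speeds;")
        print("  see the caveats above before expecting any speedup.")

if __name__ == "__main__":
    bond_summary(sys.argv[1] if len(sys.argv) > 1 else "bond0")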

Mike


> ---------- Forwarded message ----------
> From: Michael Butash <michael at butash.net>
> To: Main PLUG discussion list <plug-discuss at lists.phxlinux.org>
> Cc:
> Bcc:
> Date: Sun, 9 Sep 2018 13:30:20 -0700
> Subject: Re: bonding
> Keep in mind, bonding NICs does not magically give you n+x throughput...
>
> By nature of the technology, flow hashes are created from the source/dest
> MAC, IP, or port, and they keep your flows "stuck" to a particular computed
> hash path.  So if you have a single TCP connection with the same source,
> destination, and port (i.e. a backup or CIFS filer session), it will NOT
> balance across multiple pipes, but rather will just fill one of n links in
> the aggregation bundle.  There are bond settings to control this, but it
> will still ultimately be a limitation whether you're talking about a Linux
> server or a high-end Cisco Nexus switch.  This works great only when you're
> a service provider with lots of little connections to spread out, not so
> much a few major blasts.
>
> It is a popular misconception among non-networking folks that simply
> bonding multiple circuits gives you more bandwidth, but that is entirely
> not the case.  If you need more than 100 Mb, you go 1 Gb.  If you need
> more than 1 Gb, you go 10 Gb, etc.  Bonding is more for redundancy than
> throughput, imho.
>
> -mb
>
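
To make the flow-hash "stickiness" Michael describes above concrete, here
is a minimal Python sketch in the spirit of a layer3+4 transmit hash. The
field choices and the XOR-and-modulo hash are illustrative only, not the
kernel's actual xmit_hash_policy code, but the effect is the same: one big
flow always maps to one member link, while many small flows spread out.

# Illustration of why one big flow stays on one bond member: a
# per-packet hash over (src IP, dst IP, src port, dst port) is reduced
# modulo the number of member links, so packets of the same flow always
# land on the same member. Illustrative only, not the kernel's actual
# xmit_hash_policy implementation.

from ipaddress import ip_address

NUM_SLAVES = 2

def pick_slave(src_ip, dst_ip, src_port, dst_port):
    key = (int(ip_address(src_ip)) ^ int(ip_address(dst_ip))
           ^ src_port ^ dst_port)
    return key % NUM_SLAVES

# One long backup/CIFS session: every packet hashes the same way,
# so it fills a single member link no matter how many there are.
print(pick_slave("10.0.0.5", "10.0.0.9", 49152, 445))
print(pick_slave("10.0.0.5", "10.0.0.9", 49152, 445))   # same member again

# Many small flows (different source ports) do spread across members.
for sport in range(49152, 49160):
    print(sport, "->", pick_slave("10.0.0.5", "10.0.0.9", sport, 80))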