Re: Network bridge over bond?

Author: Michael Butash
Date:  
To: plug-discuss
Subject: Re: Network bridge over bond?





One thing I tell clients - if you need more than a gig, get a 10gbe interface. That comes with its own challenges too; see if you can actually get it to use all of that 10gbe...


The issue you face with an 802.3ad bond is the flow-hashing. You're using an L2 policy, which I presume means MAC source/dst flow-hashing. Basically, if all your communication is outside the subnet, you're hashing everything to one MAC, your default gateway, and that doesn't work well for distributing traffic. Use an L3-based policy that includes source/dst ports in the tuple; that is what makes switches effective at spreading flows.
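
A minimal sketch of what that looks like in ifupdown terms, reusing the bond0/eth0/eth1 names from the config quoted below; the usual Proxmox pattern also leaves the bond itself address-less and keeps the IP on the bridge, so treat the "inet manual" part as an assumption, not a drop-in:

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer3+4

(vmbr0 stays as-is, with bridge_ports bond0.)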


Sadly, you also have about the worst server switch ever created, namely a legacy cat4k. If you look into the architecture, they have a 6gbe backplane spread among six 8-port groups (6x8=48 ports), so every 8 1gbe ports share a single 1gbe ASIC interconnect to the "fabric" (if you can call it that in 2015). At least get something modern with big buffers on the 1gbe ports, like an Arista 7048T, or if you must stay Cisco, a Nexus 3k/5k and up.


Those 4k's suck because of that oversubscription, so make sure your two server ports are spread between two ASIC groups if you want max performance. Eight hosts each trying to push a gig through one group will just slam the ASIC into 8:1 oversubscription, and they'll all get some fraction of the 1gbe shared per 8 ports (iperf this if you don't believe me, I have). Also, same as on your server, make sure you're using "etherchannel load-balance src-dst-mixed-ip-port" so the switch gets the most entropy, distributing flows at an L3 level using source/dst IP and port as the hash tuple across your downstream (and upstream) paths.
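
On the switch side that is roughly the following shape (IOS syntax; the interface range and port-channel number are made up here, and the exact load-balance keyword set varies by platform and supervisor, so check what your code train actually offers):

port-channel load-balance src-dst-mixed-ip-port
!
interface range GigabitEthernet2/1 - 2
 channel-protocol lacp
 channel-group 10 mode active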


Your network engineer should be doing that anyway; if not, spank him into buying something beyond a CCNA-level book or at least looking up the command.


If you have one source to one dest, or large "elephant" flows like single filer connections, flow distribution does little to nothing to help. Go back to the first paragraph and get a 10gbe NIC. :)
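
One easy way to see the difference from the host is iperf: a single TCP stream rides one bond member no matter what, while several parallel streams (different source ports) give an L3+4 policy something to spread. A rough example, assuming iperf3 and a made-up peer address:

# one stream - lands on a single bond member
iperf3 -c 10.40.216.236

# eight parallel streams - can be hashed across both members
iperf3 -c 10.40.216.236 -P 8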


      -mb



      On 04/22/2015 02:42 PM, Stephen Partington wrote:




I am working on a ProxmoxVE cluster I have set up.

I need a bit better network performance, as I am also running Ceph for the storage layer.

This is the network configuration I have. It seems to be working, and the nodes I have configured appear to be running with better throughput.


root@computername:~# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet static
        address  10.40.216.235
        netmask  255.255.255.0
        slaves eth0 eth1
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2

auto vmbr0
iface vmbr0 inet static
        address  10.40.216.235
        netmask  255.255.255.0
        gateway  10.40.216.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

root@computername:~#
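
A quick way to confirm what the kernel actually negotiated for this bond (mode, transmit hash policy, and per-slave LACP state) is the bonding driver's proc file; bond0 here matches the name above:

root@computername:~# cat /proc/net/bonding/bond0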


The backend network these servers are connected to is a Cisco Catalyst 4705R loaded with WS-X4524-GB-RJ45V modules.


All my research says to use a network bond running LACP for the best compatibility/performance with this hardware. It all seems to be running, but it is a little odd that Proxmox wants me to create the bridge for the VMs to run on. It kind of makes sense, it just feels weird to run a bond inside a bridge.


If anyone who has worked with Proxmox has a better suggestion, please let me know.


          Thanks for your time.


-- 
A mouse trap, placed on top of your alarm clock, will prevent you from rolling over and going back to sleep after you hit the snooze button.

Stephen







---------------------------------------------------
PLUG-discuss mailing list -
To subscribe, unsubscribe, or to change your mail settings:
http://lists.phxlinux.org/mailman/listinfo/plug-discuss



