Re: Network bridge over bond?

Author: Michael Butash
Date:  
To: plug-discuss
Subject: Re: Network bridge over bond?





Re: bridging, that's just sort of how they work: the bridge keeps the MAC
domain local and forwards out. Add dot1q trunking and it starts making more
sense. Add in things like OpenStack, VMware+NSX, or anything that does
layer-2 virtualization over VXLAN bridging, and bridging is just how it's
mostly done. The bridge is the local host "vlan", so to speak, extending the
switch ports to a local "lan" of sorts, namely your bridge and the virtual
interfaces in it.


This still creates a software buffer to deal and contend with, as well as the
question of how the bridge interacts with the hardware PHY (again, offload and
such), so make sure you're tuning around the bridges as well: txqueues,
sysctls, etc. for the br*/vmbr* interfaces too.
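
A few illustrative knobs (values are placeholders, not recommendations; tune
to your NICs and workload):

    ip link set dev vmbr0 txqueuelen 10000    # longer software queue on the bridge
    ip link set dev bond0 txqueuelen 10000    # and on the bond beneath it
    ethtool -K eth0 tso on gso on gro on      # confirm offloads on the physical ports
    # only if br_netfilter is loaded and you don't need it filtering bridged frames:
    sysctl -w net.bridge.bridge-nf-call-iptables=0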


      -mb



      On 04/22/2015 02:42 PM, Stephen Partington wrote:




I am working on a Proxmox VE cluster I have set up.

I need a bit better network performance, as I am also running Ceph for the
storage layer.

This is the network configuration I have. It seems to be working, and the
nodes I have configured appear to be running with better throughput.


root@computername:~# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet static
        address  10.40.216.235
        netmask  255.255.255.0
        slaves eth0 eth1
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2

auto vmbr0
iface vmbr0 inet static
        address  10.40.216.235
        netmask  255.255.255.0
        gateway  10.40.216.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

root@computername:~#
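
If it helps, the kernel reports whether the 802.3ad negotiation actually came
up; a quick sanity check (standard bonding proc file):

    cat /proc/net/bonding/bond0   # check the bonding mode and per-slave aggregator info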


The backend network these servers are connected to is a Cisco Catalyst 4705R
loaded with WS-X4524-GB-RJ45V modules.


All my research says to use a network bond running LACP for the best
compatibility/performance with this hardware. It all seems to be running, but
it is a bit odd that Proxmox wants me to create the bridge for the VMs to run
on. It kind of makes sense, it just feels strange to run a bond inside a
bridge.
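
For comparison, the layout usually shown in the Proxmox docs keeps the IP
address on the bridge only and marks the bond as a manual member, which avoids
carrying the same address on both bond0 and vmbr0 as in the config above. A
sketch (same addresses, untested here):

    auto bond0
    iface bond0 inet manual
            slaves eth0 eth1
            bond_miimon 100
            bond_mode 802.3ad
            bond_xmit_hash_policy layer2

    auto vmbr0
    iface vmbr0 inet static
            address  10.40.216.235
            netmask  255.255.255.0
            gateway  10.40.216.1
            bridge_ports bond0
            bridge_stp off
            bridge_fd 0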


If anyone who has worked with Proxmox has a better suggestion, please let me
know.


          Thanks for your time.


-- 
A mouse trap, placed on top of your alarm clock, will prevent you from
rolling over and going back to sleep after you hit the snooze button.

Stephen







---------------------------------------------------
PLUG-discuss mailing list -
To subscribe, unsubscribe, or to change your mail settings:
http://lists.phxlinux.org/mailman/listinfo/plug-discuss



