The 4500s with a Sup6 plus 46xx-series modules and higher suck less, going to a 24Gb-per-slot backplane (2:1 oversubscription), and I think with the Sup8 it finally goes full line rate.  If you keep your 8 ports to yourself on an ASIC boundary, you'll be OK.

Most of the 4k sups should support L3 flow-hashing mechanisms; match the hashing up on your server side too so traffic gets distributed across as many flows as possible.  Session persistence == a flow only gets re-hashed and redistributed when a new src-port shows up in the tuple.
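
On the Linux side that mostly just means swapping the bond's transmit hash policy; a minimal sketch against a Debian/Proxmox-style /etc/network/interfaces, mirroring the bond stanza quoted further down (interface names are whatever yours are):

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode 802.3ad
        # hash on src/dst IP + port instead of layer2 (MAC-only), so separate
        # flows between the same two hosts can land on different member links
        bond_xmit_hash_policy layer3+4

Caveats: layer3+4 isn't strictly 802.3ad-conformant (fragmented packets can hash differently and arrive out of order), and with a bridge on top the usual pattern is to leave bond0 "inet manual" and put the address only on the bridge.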

Also, add channel members in powers of 2, i.e. 2, 4, 8, 16, etc.  Look up the geeky explanations having to do with binary XORs and such for why anything else is a problem.
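
The short version, assuming the classic 8-bucket hash a lot of this gear uses: the hardware reduces each frame's hash to one of 8 buckets and deals the buckets out over the member links, so any member count that doesn't divide 8 evenly leaves some links owning more buckets than others:

        8 buckets / 2 links -> 4-4              (even)
        8 buckets / 4 links -> 2-2-2-2          (even)
        8 buckets / 8 links -> 1-1-1-1-1-1-1-1  (even)
        8 buckets / 3 links -> 3-3-2            (one link carries 50% more)
        8 buckets / 6 links -> 2-2-1-1-1-1      (two links carry double)

Bucket counts vary by platform, but the powers-of-2 rule falls out of the same math everywhere.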

Learned this the hard way, school of hard knocks, in large and/or growing service provider and enterprise environments.

-mb


On 04/22/2015 09:48 PM, cryptworks@gmail.com wrote:
Good information, all. Buying hardware is not going to happen as this is mainly a lab, but I may have access to a newer version of the switch I am using as some hardware just got upgraded internally.

Thanks for the tip on layer 3; I am pretty sure we can support that.

But I am saturating the single gigabit connection.


Sent by Outlook for Android



On Wed, Apr 22, 2015 at 9:15 PM -0700, "Michael Butash" <michael@butash.net> wrote:

RE: bridging, that's just sort of how they work, so they can control the MAC domain locally and forward out.  Factor in dot1q trunking and it starts making more sense.  Add in things like OpenStack, VMware+NSX, anything that does layer 2 virtualization over VXLAN bridging, and bridging is just how it's mostly done.  The bridge is the local host "VLAN" so to speak, extending the switch ports to a local "LAN" of sorts, namely your bridge and the virtual interfaces in it.
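
If it helps to picture it, the dot1q case is just more of the same: a tagged sub-interface off the bond dropped into its own bridge, which becomes that VLAN's local "LAN" for the guests.  A sketch only, with made-up names (vmbr100, VLAN 100) and assuming the 8021q/vlan tooling is installed so the bond0.100 sub-interface gets created:

auto vmbr100
iface vmbr100 inet manual
        bridge_ports bond0.100
        bridge_stp off
        bridge_fd 0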

This still creates a software buffer to contend with, plus the question of how the bridges interact with the hardware PHY (again, offload and such), so make sure you're tuning around the bridges as well: txqueues, sysctls, etc. for br* too.
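
A few examples of what that tuning typically looks like; the iproute2/sysctl knobs below are real, but the values are placeholders, not recommendations:

# deepen the transmit queue on the bond and the bridge (default is often 1000)
ip link set dev bond0 txqueuelen 10000
ip link set dev vmbr0 txqueuelen 10000

# let the kernel queue more received packets before the softirq path drops them
sysctl -w net.core.netdev_max_backlog=16384

# if you don't need iptables/conntrack to inspect bridged VM traffic,
# keep bridged frames out of netfilter entirely (check your firewalling first)
sysctl -w net.bridge.bridge-nf-call-iptables=0

Whether any of these actually help depends on where the drops are, so compare ip -s link and /proc/net/softnet_stat before and after.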

-mb


On 04/22/2015 02:42 PM, Stephen Partington wrote:
I am working on a Proxmox VE cluster I have set up.

I need a bit better network performance as I am also running Ceph for
the storage layer.

This is the network configuration I have. It seems to
be working; the nodes I have configured appear to be running with better
throughput.

root@computername:~# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet static
        address  10.40.216.235
        netmask  255.255.255.0
        slaves eth0 eth1
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2

auto vmbr0
iface vmbr0 inet static
        address  10.40.216.235
        netmask  255.255.255.0
        gateway  10.40.216.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

root@computername:~#

The backend network these servers are connected to is a Cisco Catalyst
4507R loaded with WS-X4524-GB-RJ45V modules.

All my research says to use a network bond running LACP for best
compatibility/performance with this hardware. It all seems to be running,
but it is kind of weird that Proxmox wants me to create a bridge for
the VMs to run on. It kind of makes sense, but it still feels weird to run a bond
inside a bridge.
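
For reference, the switch side of an LACP bundle on that class of hardware usually looks something like the sketch below; the port numbers, channel-group number, and VLAN are made up, and the exact load-balance keywords available depend on the supervisor:

port-channel load-balance src-dst-ip
!
interface range GigabitEthernet2/1 - 2
 switchport mode access
 switchport access vlan 216
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode access
 switchport access vlan 216

"mode active" is what actually speaks LACP and pairs with bond_mode 802.3ad on the Linux end.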

If anyone who has worked with Proxmox has a better suggestion, please let me
know.

Thanks for your time.

--
A mouse trap, placed on top of your alarm clock, will prevent you from
rolling over and going back to sleep after you hit the snooze button.

Stephen



---------------------------------------------------
PLUG-discuss mailing list - PLUG-discuss@lists.phxlinux.org
To subscribe, unsubscribe, or to change your mail settings:
http://lists.phxlinux.org/mailman/listinfo/plug-discuss