<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">The 4500's with sup6+46xx modules and
higher suck less, going to a 24gb backplane (2:1
oversubscription), and I think with sup8 goes full line-rate
(finally). If you chew 8 ports to yourself on an asic boundary,
you'll be ok.<br>
<br>
      Most of the 4k sups should support l3 flow-hashing mechanisms;
      match up your server's hash policy so traffic gets spread across
      as many flows as possible too. Session persistence means a flow
      sticks to one link, and it only redistributes when it re-hashes
      with a new src-port in the tuple.<br>
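      <br>
      For example (just a sketch, assuming your sup supports l4 hashing
      and using the same ifupdown knobs as the config below -- verify
      on your gear), on the linux side you'd swap the bond hash policy
      to use the l3/l4 tuple:<br>
      <br>
      bond_xmit_hash_policy layer3+4<br>
      <br>
      and on the switch side something like:<br>
      <br>
      port-channel load-balance src-dst-port<br>
      <br>
      (layer3+4 is technically outside the 802.3ad letter of the law,
      but it's widely used and spreads multiple flows between the same
      pair of hosts across links.)<br>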
<br>
      Also, add channel members in powers of 2, i.e. 2, 4, 8, 16, etc.
      Look up the geeky explanations, having to do with binary xor's
      and such, for why anything else is a problem.<br>
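      (Quick illustration: the hash typically lands in one of 8
      buckets, so with 4 links each gets exactly 2 buckets, but with 3
      links you get a 3/3/2 split -- roughly 37.5/37.5/25% of flows --
      so one member is always underused.)<br>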
<br>
      Learned this the hard way in the school of hard knocks, in large
      and/or growing service provider and enterprise environments.<br>
<br>
-mb<br>
<br>
<br>
On 04/22/2015 09:48 PM, <a class="moz-txt-link-abbreviated" href="mailto:cryptworks@gmail.com">cryptworks@gmail.com</a> wrote:<br>
</div>
<blockquote
cite="mid:DAFBD17B7C872DF9.5-d1d6cc17-b0f6-433c-a32d-dbd33d39b673@mail.outlook.com"
type="cite">Good information all. Buying hardware is not going to
happen as this is mainly a lab. But i may have access to a newer
version of the switch I am using as some hardware just got
upgraded internally.
<div><br>
</div>
    <div>Thanks for the tip on layer 3; I am pretty sure we can support
      that. </div>
<div><br>
</div>
    <div>But I am saturating the single gigabit connection. <br>
<br>
<br>
<div id="acompli_signature">Sent by <a moz-do-not-send="true"
href="http://taps.io/outlookmobile">Outlook</a> for Android<br>
</div>
</div>
<br>
<br>
<br>
<div class="gmail_quote">On Wed, Apr 22, 2015 at 9:15 PM -0700,
"Michael Butash" <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:michael@butash.net" target="_blank">michael@butash.net</a>></span>
wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="3D"ltr"">
<div class="moz-cite-prefix">RE: bridging, that's just sort
of how they work, so they can control the mac domain
locally and forward out. Including dot1q trunking, and it
starts making more sense. Add in things like Openstack,
VMware+NSX, anything that does layer 2 virtualization over
vxlan bridging, and bridging is just how it's mostly
done. The bridge is the local host "vlan" so to speak,
extending the switch ports to a local "lan" of sorts,
namely your bridge and virtual interfaces in it.<br>
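            <br>
            (On a Proxmox node you can see this with something like
            "brctl show" or "bridge link" -- bond0 and the VMs' tap
            interfaces all hang off the same vmbr0 bridge.)<br>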
<br>
            This still creates a software buffer to contend with, as
            well as the question of how the bridges interact with the
            hardware phy (again, offload and such), so make sure you're
            tuning around the bridges as well: txqueues, sysctls, etc.
            for br* too.<br>
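            <br>
            For instance (rough, illustrative starting points only --
            the right values depend on your traffic, so treat these as
            assumptions to test rather than a recipe):<br>
            <br>
            ip link set dev vmbr0 txqueuelen 10000<br>
            sysctl -w net.core.netdev_max_backlog=16384<br>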
<br>
-mb<br>
<br>
<br>
On 04/22/2015 02:42 PM, Stephen Partington wrote:<br>
</div>
<blockquote
cite="mid:CACS_G9z9V-Sdat7k3G=ACjMoDyVXtVdQ+dW-5NEKFCJkN=OftQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<div class="gmail_default" style="font-family:trebuchet
ms,sans-serif">I am working on a ProxmoxVE cluster i
have set up.<br>
<br>
I am needing a bit better network performance as i am
also running CEPH for<br>
the stoage layer<br>
<br>
This is what i have for network configuration is the
following. it seems to<br>
be working. the nodes i have configured appear to be
running with better<br>
throughput.<br>
<br>
root@computername:~# cat /etc/network/interfaces<br>
# network interface settings<br>
auto lo<br>
iface lo inet loopback<br>
<br>
iface eth0 inet manual<br>
<br>
iface eth1 inet manual<br>
<br>
auto bond0<br>
iface bond0 inet static<br>
address 10.40.216.235<br>
netmask 255.255.255.0<br>
slaves eth0 eth1<br>
bond_miimon 100<br>
bond_mode 802.3ad<br>
bond_xmit_hash_policy layer2<br>
<br>
auto vmbr0<br>
iface vmbr0 inet static<br>
address 10.40.216.235<br>
netmask 255.255.255.0<br>
gateway 10.40.216.1<br>
bridge_ports bond0<br>
bridge_stp off<br>
bridge_fd 0<br>
<br>
root@computername:~#<br>
<br>
              The backend network these servers are connected to is a
              Cisco Catalyst<br>
              4705R loaded with WS-X4524-GB-RJ45V modules.<br>
<br>
              All my research says to use a network bond running LACP
              for best<br>
              compatibility/performance with this hardware. It all
              seems to be running,<br>
              but it is kind of weird that Proxmox wants me to create
              the bridge for<br>
              the VMs to run on. It kind of makes sense; it just feels
              weird to run a bond<br>
              inside a bridge.<br>
<br>
              If anyone who has worked with Proxmox has a better
              suggestion, please let me<br>
              know.<br>
<br>
Thanks for your time.<br>
<br>
-- <br>
A mouse trap, placed on top of your alarm clock, will
prevent you from<br>
rolling over and going back to sleep after you hit the
snooze button.<br>
<br>
Stephen<br>
<br>
</div>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">---------------------------------------------------
PLUG-discuss mailing list - <a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:PLUG-discuss@lists.phxlinux.org">PLUG-discuss@lists.phxlinux.org</a>
To subscribe, unsubscribe, or to change your mail settings:
<a moz-do-not-send="true" class="moz-txt-link-freetext" href="http://lists.phxlinux.org/mailman/listinfo/plug-discuss">http://lists.phxlinux.org/mailman/listinfo/plug-discuss</a></pre>
</blockquote>
<br>
</div>
</blockquote>
</div>
</blockquote>
<br>
</body>
</html>