Re: A question regarding LAG/Ping on game servers.

Author: Michael Butash
Date:  
To: plug-discuss
Subject: Re: A question regarding LAG/Ping on game servers.





Depends on your nic (or upstream) buffers, both rx/tx, and how efficiently they, or the upstream process opening a socket, can consume data. If the socket (i.e. tcp/udp ports) fills first, this usually amounts to kernel-level congestion. If the hardware buffer on the card and/or the queues in use fill first, then it's a hardware issue.
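
A quick way to see which side is filling first (a rough sketch; em1 is just an example interface name, and counter names vary by driver): kernel-side socket drops show up in netstat -s, while hardware-side drops show up in the nic's own counters via ethtool -S.

mb@host:~# netstat -s | egrep -i 'buffer|prune|drop'
mb@host:~# ethtool -S em1 | egrep -i 'drop|fifo|miss'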


Either way, queues and buffers are usually tunable; no one really bothers, however.


The longer your bits sit in a queue, or eventually drop, the more "lag" you introduce. Lag is the result of buffers not being serviced in a timely fashion, or being forced to drop and signal a retransmit, and the culprit could be your nic, theirs, or your upstream modem/provider.


The fact that you're saying you see "everyone" slow down means yours is probably more to blame than everyone else's. In your case the tx-queue buffers are likely overloading because the card can't transmit the data fast enough, or you need to increase your tx buffers at the kernel level. On linux, that's a sysctl, namely the wmem settings under net:


mb@host:~# sysctl -a | egrep 'rmem|wmem'
net.core.rmem_default = 212992
net.core.rmem_max = 212992
net.ipv4.tcp_rmem = 4096        87380   6291456
net.ipv4.udp_rmem_min = 4096
net.core.wmem_default = 212992
net.core.wmem_max = 212992
net.ipv4.tcp_wmem = 4096        16384   4194304
net.ipv4.udp_wmem_min = 4096
vm.lowmem_reserve_ratio = 256   256     32
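
If those look small for your traffic, raising them is just a sysctl write (the values below are purely illustrative, not a recommendation; put the settings in /etc/sysctl.conf to survive a reboot):

mb@host:~# sysctl -w net.core.wmem_max=4194304
mb@host:~# sysctl -w net.core.wmem_default=1048576
mb@host:~# sysctl -w net.ipv4.tcp_wmem='4096 65536 4194304'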


Also, the txqueuelen on your interface can affect this. You'll usually see drops here from queue exhaustion; increasing it is necessary for speeds above 1gb.


mb@host:~# ifconfig em1
em1       Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
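
Raising it is a one-liner either way (5000 is just an example value):

mb@host:~# ifconfig em1 txqueuelen 5000
mb@host:~# ip link set dev em1 txqueuelen 5000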

Windoze does this somewhere between the registry and the driver stack, with a gui over it.


Lots of things can affect this; if your app/game uses tcp, the tcp congestion algorithms can behave differently too.
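
You can check what algorithm you're running and what else the kernel ships with (westwood below is just an example; what's actually available depends on your kernel and loaded modules):

mb@host:~# sysctl net.ipv4.tcp_congestion_control
mb@host:~# sysctl net.ipv4.tcp_available_congestion_control
mb@host:~# sysctl -w net.ipv4.tcp_congestion_control=westwood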


There are plenty of how-tos out there around stack tuning, usually aimed at a) weird apps or b) apps that really need that 1g/10g/40g connection and require the tuning to get more out of it.


Any way you look at it, efficient pipelining of data is necessary, or you get the lag, drops, etc. mentioned above. Most consumers, as well as developers of network code, just expect infinite bandwidth and omniscient handling of random flows. Truth is, most nics suck, having minimal buffers, especially consumer ones and most server ones too. Apps are terrible about streamlining data for network use as well (some let you tune your "bandwidth" for this).


      Bigfoot "gaming" nics do this, introducing a linux soc to do queue
      and congestion management to avoid buffer drops between your pc
      and the ethernet connection.  They do a form of "auto quality of
      service", prioritizing certain things over another.


Queue management, a la QoS (quality of service), is meant for prioritizing traffic like voice/video, but also any "realtime" protocols, which most games use underneath as udp traffic. Look up tc (traffic control) under linux, or qos gpo policies on windoze, dscp/cos, etc.
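
As a rough sketch of what that looks like with tc (em1 and udp port 27015 are placeholders for your interface and your game's port; a real setup would probably use htb or fq_codel rather than bare prio):

mb@host:~# tc qdisc add dev em1 root handle 1: prio
mb@host:~# tc filter add dev em1 parent 1: protocol ip prio 1 u32 \
      match ip protocol 17 0xff match ip dport 27015 0xffff flowid 1:1

The prio qdisc gives you three bands; the u32 filter steers matching udp packets into the highest-priority band (1:1), so they dequeue ahead of bulk traffic.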


      -mb



      On 07/03/2015 05:10 PM, Stephen Partington wrote:



Mostly it's in how they are making cheaper gigabit ethernet.

On Jul 3, 2015 4:40 PM, "Wayne D" <> wrote:

I have a question regarding ping times in game servers:



I've noticed when playing several online games that ping times go up as more players connect.

I also run a small game server here at my house and have witnessed this issue first-hand. The processing overhead of the game is actually quite small; even with 12 players, I suspect it would not go above 25 to 30%.


This issue doesn't make any sense to me when we are talking about no more than 12 players connected.

My bandwidth, both up and down, is sufficient to handle the traffic with plenty of room to spare.


The extra overhead on a single pipe, in my mind, should not make any difference, considering processor speeds are so much faster than the data travelling across the Internet. From my perspective, I should be able to connect 100 different players and see no change in lag/ping TO EACH PLAYER. I understand that all players have to go through the protocol stack on the net card, but that shouldn't matter, correct?


Or is the actual problem the network card itself? Is that the reason for the lag? If so, is there a type of "high throughput" card that I should be looking for?


Could somebody elaborate on the mechanics of this?

Thanks in advance.










---------------------------------------------------
PLUG-discuss mailing list -
To subscribe, unsubscribe, or to change your mail settings:
http://lists.phxlinux.org/mailman/listinfo/plug-discuss