AMD vs Intel memory management

James Mcphee jmcphe at gmail.com
Mon Jun 3 14:43:29 MST 2013


It's getting a little off topic, but I do enjoy a good discussion about
architecture.  The problems I run into are in implementation rather than
design.  Intel's large-scale implementation has taken some hits recently
because of how processes get assigned, which spreads multiprocess apps
around the sockets, and because of their general lack of interconnects.
AMD's HyperTransport is much better, but then you run into problems with
CPU clock speed, thermal and electrical tolerances, etc.

Anyway, applied to multicore, single-socket systems: multiple memory
controllers have the same problems, in that you still have to probe them or
block on them.  Ganged vs unganged is pretty straightforward in benchmarks.
Ganged only seems to be the recommended route when your hardware has
trouble with unganged, which is the more complicated mode, so early
hardware did have trouble with it.  That might not hold for your particular
use case, but at least you have the option :)
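
For what it's worth, you can at least see from userspace how the kernel
splits memory across nodes and their controllers.  Rough, untested sketch
(standard Linux sysfs paths; a single-node desktop will just show node 0):

    #!/usr/bin/env python3
    # Print how much memory the kernel attributes to each NUMA node.
    import glob, re

    for path in sorted(glob.glob("/sys/devices/system/node/node*/meminfo")):
        with open(path) as f:
            for line in f:
                # lines look like: "Node 0 MemTotal:  8123456 kB"
                m = re.search(r"Node\s+(\d+)\s+MemTotal:\s+(\d+)\s+kB", line)
                if m:
                    print("node %s: %.1f GB" % (m.group(1), int(m.group(2)) / 1024 / 1024))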

I'm an AMD zealot at home.  I enjoy the whole underdog thing they have
going.  But my Intel friends are almost always churning out more horsepower
than I am, and their generally superior chips show up as choppiness in my
games, etc.  Apples to apples, I'd say Intel is the better, if more
expensive, choice.  That's purely for desktop, of course.
Server-side, AMD tends to perform better in hugely multithreaded
applications thanks to greater/more mature/shinier interconnect
capabilities, right up to the point where the software craps out, as most
software does, at a certain number of cores.  Even the best filesystems
tend to fall over around 8 concurrent threads, though XFS claims to do
better with the latest patches.  So again you have a choice, since we're
generally blocked by software, not hardware, when scaling horizontally.
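
If you want to see roughly where your own stack flattens out, a toy version
of that concurrency test is easy to gin up.  Illustrative only; fio or
another real benchmark is what I'd trust for actual numbers, and the sizes
below are just example values:

    #!/usr/bin/env python3
    # Toy test: write one file per thread, in parallel, and watch where the
    # aggregate throughput stops improving as the thread count goes up.
    import os, time, tempfile
    from concurrent.futures import ThreadPoolExecutor

    CHUNK = b"\0" * (64 * 1024 * 1024)      # 64 MB per thread, example size

    def write_one(path):
        with open(path, "wb") as f:
            f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())

    with tempfile.TemporaryDirectory(dir=".") as d:   # test the fs you care about
        for threads in (1, 2, 4, 8, 16):
            files = [os.path.join(d, "t%d" % i) for i in range(threads)]
            start = time.time()
            with ThreadPoolExecutor(max_workers=threads) as pool:
                list(pool.map(write_one, files))
            total_mb = threads * len(CHUNK) / (1024 * 1024)
            print("%2d threads: %.0f MB/s" % (threads, total_mb / (time.time() - start)))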

Intel is the more general option, and having just one type of hardware to
support reduces complexity, unless you simply cannot get by on the
relatively poor capabilities of extremely large Intel systems at this time.
Perhaps Intel will finally stop fiddling with their chips and let the
motherboard manufacturers build decent controllers.  Or perhaps AMD will
finally die in the market and leave us without an option.


On Mon, Jun 3, 2013 at 2:09 PM, Stephen <cryptworks at gmail.com> wrote:

> I have not run CentOS on my current system, but Ubuntu 13.04 ran like a
> champ until I broke it, with all 16 GB (2x8, set up as dual channel).
>
> I can try to install CentOS and see what it tells me. It may be that a
> kernel parameter is not lined up right.
>
>
> On Mon, Jun 3, 2013 at 1:46 PM, Nathan England <nathan at nmecs.com> wrote:
>
>>
>> Your explanation seems about right to me. The problem, though, is that
>> with a single processor with multiple cores, they are all using the same
>> memory interconnect.
>>
>> In theory, and quite possibly on true NUMA systems, this is a more
>> efficient way to handle memory, with tasks assigned to a specific
>> processor (I would imagine this would be huge for VM hosts), but as far
>> as I know there are no real-world examples or tests that show it actually
>> works any faster with multiple cores on one socket.
>>
>> But why does CentOS not register all of my memory? Why less than 3/4 of
>> it? I have actually had my machine swap under load, whereas if it had
>> access to the missing RAM it would not have swapped!
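>>
>> The first thing I plan to check is the e820 map in the boot messages. If
>> the firmware is only handing the kernel about 5.8 GB of "usable" ranges,
>> no ganged/unganged setting will get the rest back.  Rough, untested
>> sketch; it assumes the newer "[mem ...] usable" dmesg format and that the
>> boot messages are still in the ring buffer:
>>
>>     #!/usr/bin/env python3
>>     # Add up the "usable" ranges the firmware reported to the kernel at boot.
>>     import re, subprocess
>>
>>     out = subprocess.check_output(["dmesg"]).decode(errors="replace")
>>     usable = 0
>>     for m in re.finditer(r"e820.*\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable", out):
>>         usable += int(m.group(2), 16) - int(m.group(1), 16) + 1
>>     print("firmware reports %.2f GB usable" % (usable / 1024 ** 3))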
>>
>> Maybe I should have gone with a single 8 GB stick of RAM instead of dual
>> 4 GB. Silly me!
>>
>> Nathan
>>
>> On Monday, June 03, 2013 13:27:18 Nadim Hoque wrote:
>>
>> If I recall, AMD started doing NUMA, in which each core (really each
>> node) gets a dedicated amount of memory tied to it. The plus side is that
>> when a core needs something in its own memory region it does not have to
>> put the request in a shared queue like in non-NUMA, so it gets it faster.
>> The downside is that if it needs data in a memory region that belongs to
>> another node it takes longer, since it essentially has to ask that node
>> for the data. In a non-NUMA architecture the entire memory space is shared
>> by all cores, which means each core can access any memory without asking
>> another for data. The problem with this is that all memory requests go
>> into one queue, and a core has to wait until the memory controller can
>> process its request.  For systems with many cores and a lot of memory you
>> are mostly better off with NUMA. Correct me if I am wrong though.
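>>
>> You can see that local versus remote cost directly in the distance table
>> the kernel exposes (numactl --hardware prints the same matrix).  Rough
>> sketch using standard sysfs paths; 10 means local, larger numbers mean
>> the request has to cross over to another node:
>>
>>     #!/usr/bin/env python3
>>     # Print each node's distance to every node, itself included.
>>     import glob, os
>>
>>     for path in sorted(glob.glob("/sys/devices/system/node/node*/distance")):
>>         node = os.path.basename(os.path.dirname(path))
>>         with open(path) as f:
>>             print(node, "->", f.read().split())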
>>
>> On Mon, Jun 3, 2013 at 7:25 AM, Stephen <cryptworks at gmail.com> wrote:
>>
>> Not really. Dual channel mode means you can read and write to both banks
>> of memory at the same time (aka ganged). Single channel means you treat
>> all RAM as a single bank, reading and writing to one and then the other.
>> Think RAID 0 vs JBOD if that helps.
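>>
>> To make the analogy concrete, here it is as a toy model. This is not how
>> the controller literally maps addresses; it is just the striping versus
>> concatenation picture, and the stick size and stripe width are made-up
>> example values:
>>
>>     # Toy model only: two 4 GB "sticks", either striped line by line or
>>     # filled one after the other.
>>     LINE = 64                  # pretend stripe width, in bytes
>>     STICK = 4 * 1024 ** 3      # pretend 4 GB per stick
>>
>>     def stick_for(addr, dual_channel=True):
>>         if dual_channel:                   # "RAID 0": alternate line by line
>>             return (addr // LINE) % 2
>>         return 0 if addr < STICK else 1    # "JBOD": fill stick 0, then stick 1
>>
>>     for addr in (0, 64, 128, STICK + 64):
>>         print(hex(addr), stick_for(addr, True), stick_for(addr, False))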
>>
>> I personally have had zero issues with more than 4 GB of RAM in a
>> machine with Linux and a 64-bit kernel, and I have worked back and forth
>> with multiple distributions over the years.
>>
>> The main difference between Intel and AMD I have seen since the Core i
>> series CPUs were released is that AMD still has wicked fast memory
>> performance, but Intel wins at most everything else.
>>
>> If you have multiple processors you will want to look at NUMA. This
>> handles inter-processor communication for RAM access.
>>
>> It should not matter whether you are running ganged or unganged; your OS
>> should see all the RAM installed, with the exception of the
>> PCI/PCIe/chipset nibbling 100 to 700 MB for doing its thing on consumer
>> chipsets.
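>>
>> For anyone curious, a quick way to put a number on the nibbling (untested
>> sketch; INSTALLED_GB is just an example value, put your own in):
>>
>>     #!/usr/bin/env python3
>>     # Compare what the kernel accounts for against what is physically installed.
>>     INSTALLED_GB = 16          # example value
>>
>>     with open("/proc/meminfo") as f:
>>         kb = next(int(line.split()[1]) for line in f if line.startswith("MemTotal:"))
>>
>>     seen_gb = kb / 1024 / 1024
>>     print("installed %d GB, kernel sees %.2f GB, missing about %.0f MB"
>>           % (INSTALLED_GB, seen_gb, (INSTALLED_GB - seen_gb) * 1024))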
>>
>> On Mon, Jun 3, 2013 at 6:36 AM, keith smith <klsmith2020 at yahoo.com>
>> wrote:
>>
>> I found this in an on-line discussion:
>>
>> Ganged = dual channel mode for RAM. All cores get access to 100% of the
>> RAM.
>>
>> Unganged = single channel. Each core gets access to a stick of RAM.
>>
>> Is this correct?
>>
>>
>> ------------------------
>> Keith Smith
>>
>> --- On Mon, 6/3/13, Nathan England <nathan at nmecs.com> wrote:
>>
>>
>> From: Nathan England <nathan at nmecs.com>
>> Subject: Re: AMD vs Intel memory management
>> To: "Main PLUG discussion list" <plug-discuss at lists.phxlinux.org>
>> Date: Monday, June 3, 2013, 1:35 AM
>>
>> Yeah, it's a wonderful thing AMD calls "unganged" mode. I have 8 GB of
>> RAM in my server and the motherboard has enabled "unganged" mode to be
>> more efficient. CentOS only recognizes 5.8 GB of RAM and I cannot turn
>> off unganged mode.
>>
>> I love it...
>>
>> </sarcasm>
>>
>> On Sunday, June 02, 2013 17:46:19 keith smith wrote:
>>
>> Hi,
>>
>> After that great thread on 32-bit vs 64-bit, I was wondering if it would
>> be beneficial at this point to drill down to the CPU level: AMD vs Intel.
>>
>> We had a great thread a while ago about the AMD CPU; however, I do not
>> think that thread covered memory management.
>>
>> I almost went for an AMD CPU this go-around (I have a couple from prior
>> purchases); however, after hearing that AMD does some weird memory
>> management at the core level, assigning memory by the bank to each core,
>> I thought I would go with an Intel CPU.
>>
>> If I understand this correctly, it sounds like under some or most
>> circumstances the server will lose a portion of its total memory, because
>> under AMD, RAM is assigned at the core and bank level.  I assume Intel
>> uses memory as a pool: need memory, just grab some until it is gone.
>>
>> Any thoughts on this?
>>
>> Thanks!
>>
>> ------------------------
>> Keith Smith
>>
>> --
>>
>> Regards,
>> Nathan England
>>
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> NME Computer Services http://www.nmecs.com
>> Nathan England (nathan at nmecs.com)
>> Systems Administration / Web Application Development
>> Information Security Consulting
>> (480) 559.9681
>>
>> --
>> A mouse trap, placed on top of your alarm clock, will prevent you from
>> rolling over and going back to sleep after you hit the snooze button.
>>
>> Stephen
>>
>> --
>>
>> Nadim Hoque
>> Systems Support Analyst
>> Engineering Technical Services
>> Arizona State University
>> Cell: 480-518-6235
>>
>> --
>>
>> Regards,
>> Nathan England
>>
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> NME Computer Services http://www.nmecs.com
>> Nathan England (nathan at nmecs.com)
>> Systems Administration / Web Application Development
>> Information Security Consulting
>> (480) 559.9681
>>
>>
>
>
>
> --
> A mouse trap, placed on top of your alarm clock, will prevent you from
> rolling over and going back to sleep after you hit the snooze button.
>
> Stephen
>
>



-- 
James McPhee
jmcphe at gmail.com