Intel Question

David P. Schwartz davids@desertigloo.com
Tue, 30 Jan 2001 19:47:21 -0700


David Demland wrote:

> I have heard and read

A citation would be nice.  Was the article dated April 1st by any chance?

> that because of the way Intel chips are set up it is
> easier to have a security problem than with a Motorola chip. This would mean
> that a firewall built around a Motorola chip is better than one built around
> an Intel chip. Is this true? The assumption here is that both are running Linux.
>
> Thank You,

Excellent point!  In fact, some chip chemistries are so sensitive to the gravitational pull of the moon that when it's a full moon and
the moon is within ten degrees of the zenith, the chips are far more susceptible to security breaches than at all other times.  At these
critical times, they can also take on certain behaviors often referred to as "ghosting", which is the repetition of blocks of
instructions that radiate EMI/RFI patterns that can induce migraine headaches in people sitting within a 1.5 meter radius of the CPU.

Back to reality... I spent time working for Intel, then Moto.  The folklore I heard from engineers and marketing bozos at Moto about
supposed faults in Intel's products would make your head spin!

I don't want to start an architecture war here, but I've got a few examples.  (These come straight from the marketing battles the two
camps actually fought.)

One example involved the DRAM controller chip that Intel released around the time the 286 was launched.  One of their production runs
had a glitch in the mask or production process that caused the controller to lock up when the part got too warm.  Moto reps used this
ammo to claim that Intel's x86 family CPUs would periodically lock up if you used them with DRAM.  Intel eventually redesigned the
DRAM controller, and that logic (generally speaking) now lives inside just about every northbridge on the market.

At the time, Moto didn't have a DRAM controller, so you had to use a third-party chip (e.g., Intel's) or discrete logic if you wanted to
use DRAMs (instead of SRAMs) with a 68k part.  This fueled another rumor that "static RAMs are better than dynamic RAMs" simply because
you needed an extra controller chip to refresh the DRAMs, which allegedly slowed the system down.  The innovative thing about Intel's
DRAM controller chip was that it interleaved between two banks of DRAMs, allowing bytes (or words) to be strobed onto the CPU bus
on alternate cycles of the bus clock.  Since you couldn't load two bytes/words onto a one-byte/word-wide bus anyway, they effectively
sidestepped the whole DRAM bottleneck issue -- theirs was the only DRAM controller on the market that did that, for a long time.  That's
also why the SIMMs in x86 motherboards (prior to Pentiums that used PC-100 DIMMs) always had to be added in pairs with matching size and
speed ratings.  Apple boxes required the addition of much more expensive static RAMs.  (Today, static RAMs still run about 1.5-2x
the cost of DRAMs of equivalent densities.)
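
To make the interleaving trick concrete, here's a toy C model (my own sketch, not Intel's actual controller logic; the 2-clock cycle
time and the one-request-per-clock issue rate are assumptions purely for illustration).  Consecutive word addresses map to alternating
banks, so while one bank is finishing its cycle the other can start, and data lands on the bus every clock:

    #include <stdio.h>

    #define CYCLE_TIME 2            /* assumed: a bank ties up 2 bus clocks per access */

    typedef struct { int busy_until; } bank_t;

    /* Word address N lives in bank N % 2.  A new access can start only
       once that bank has finished its previous cycle. */
    static int access_word(bank_t bank[2], int addr, int issue_clock)
    {
        bank_t *b = &bank[addr % 2];
        int start = issue_clock > b->busy_until ? issue_clock : b->busy_until;
        b->busy_until = start + CYCLE_TIME;
        return b->busy_until;       /* clock at which the data hits the bus */
    }

    int main(void)
    {
        bank_t bank[2] = { {0}, {0} };
        /* The CPU issues one sequential word request per bus clock. */
        for (int addr = 0; addr < 8; addr++)
            printf("word %d ready at clock %d\n",
                   addr, access_word(bank, addr, addr));
        return 0;
    }

Run it and the words come back one per clock (2, 3, 4, 5, ...).  Point every access at a single bank instead and they come back every
other clock -- which is also why those SIMMs had to go in matched pairs.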

If you recall, the original 68k design touted a 32-bit internal architecture, but it exposed a 16-bit data bus, just like
the 8086 and 286 (which weren't touted as 32-bit architectures).  It wasn't until they released the 68020 that they supported
independent 32-bit address and data buses, right around when the 386 accomplished the same thing.  In spite of this physical limitation
of chips prior to the '020, Moto still claimed that theirs was "a full 32-bit CPU" while the Intel chips were "only 16-bit" machines.
(They had a tough time characterizing the 8088, which was created in the spirit of the 68k -- a 16-bit architecture with an 8-bit data
bus.)  Blah!  The 386 could access a larger address space than the 68010 without any external logic required (read: the bean counters
loved it).  The '020 finally exposed a so-called 'flat' 32-bit address space that competed directly with the 386/486, but that came
roughly five years after the 68k family's architecture and its benefits were first announced.
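
For anyone who didn't live through it, "32 bits inside, 16 bits outside" just means every 32-bit transfer costs two external bus
cycles.  Here's a trivial C sketch of the idea (my own illustration; the function names and the 24-bit address mask stand in for the
68000's pinout, they're not anybody's real interface):

    #include <stdint.h>
    #include <stdio.h>

    /* The chip has only 16 data pins, so this is the widest single
       external transfer it can make. */
    static void bus_write16(uint32_t addr, uint16_t data)
    {
        printf("bus cycle: addr=0x%06X data=0x%04X\n",
               (unsigned)(addr & 0xFFFFFF),     /* 68000: 24 address bits */
               (unsigned)data);
    }

    /* A 32-bit store becomes two bus cycles, high word first (the 68k
       is big-endian), even though the registers and ALU are 32 bits
       wide internally. */
    static void write32(uint32_t addr, uint32_t value)
    {
        bus_write16(addr,     (uint16_t)(value >> 16));
        bus_write16(addr + 2, (uint16_t)value);
    }

    int main(void)
    {
        write32(0x001000, 0xDEADBEEF);  /* -> two 16-bit bus cycles */
        return 0;
    }

Same story on the 8086 and 286, and the 8088 doubled the pain again with its 8-bit bus: four cycles per 32-bit quantity.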