On Nov 23, 2007, at 7:59 AM, Darrin Chandler wrote:

> On Fri, Nov 23, 2007 at 07:29:38AM -0700, Chris Gehlker wrote:
>> And I'm increasingly under the impression that it isn't, except in
>> the very small area, basically the virtual memory manager, where it
>> needs to be to support 64-bit applications. Windows got this wrong -
>> basically they went to an all 32-bit or all 64-bit world - and I was
>> initially under the impression that Linux did too. But now I think
>> that Linux got it right. I don't know why Windows got it so wrong;
>> those people aren't stupid. If I had to speculate, I'd guess it had
>> something to do with their ABI. Maybe you simply can't link 32-bit
>> libraries into 64-bit code in the Windows world.
>
> I've been reading along in this discussion, and it's been interesting
> so far. :)
>
> If it's best to use 32-bit code unless you have a compelling reason
> for 64-bit, doesn't the same hold true for 16-bit? It would be twice
> as compact as 32-bit code, so twice as much would fit in cache.

That's a keen observation, and it leads to a story. When I got into
computing, everybody knew that the x86 architecture was dead. One of
the secrets to fast execution was fast decode, and the secret to fast
decode was a simplified, consistent instruction set. So the A-team
engineers went to work on Very Long Instruction Word architectures
like Itanium and Power.

Then a funny thing happened. CPUs got fast faster than cache memory.
Cache memory got fast faster than main memory. Main memory got fast
faster than disk. And while x86 instructions may be quirky, they are
also terse. They are the 16-bit instructions you mention. So now we
have chips like the Core 2 family that are essentially a bunch of
silicon dedicated to translating x86 instructions into Itanium
instructions, bolted in front of an Itanium. And they actually work
because CPUs have gotten so fast that they can do that translation as
fast as the bus can feed them instructions.

> IIRC, there are reasons why you want 32-bit for Intel processors to
> run a "real" OS, having less to do with bit width than the mode the
> processor is in.

According to Wikipedia, the x86-64 architecture doesn't have "modes".
It supports all the x86 instructions except for a few that were never
actually used by compiler writers. In addition, it has some 64-bit
instructions. I'm still unclear as to whether there were/are some
64-bit Pentiums that do have modes. I increasingly doubt it.

> I have two 64-bit boxes running 64-bit OS. They are both in colo, and
> they run until I reboot them (i.e., they are quite stable). Two points
> about that: 1) neither box has an Intel processor (why use Intel for
> 64-bit?! they seem to suck at it), and 2) the boxes are running
> neither Windows nor Linux (I bring that up because I can't comment on
> Fedora stability or performance in 64-bit). I also have a couple of
> old 64-bit boxes at home. They are old enough to be slow, but one of
> them I used recently for quite a while as a desktop machine, including
> running Firefox, and it worked fine. Once again, not Windows or Linux,
> and not an Intel processor.
>
> I just find it interesting that there's all this talk of 64-bit on
> Intel processors. I never considered them much of a player in 64-bit.
> I'd love to hear how and why you people have done so. I don't want
> flames: I'm truly curious.

I think it's the whole terse-is-faster-than-elegant thing. If some
technology came along that sped up disk and main memory relative to
the CPU, the interest would fade again.
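If anyone wants to see the code-density effect first-hand, here's a
minimal sketch of how I'd measure it (assuming gcc with a multilib
toolchain; the file name and function are just placeholders): compile
the same C file as a 32-bit and a 64-bit object and compare the text
segment sizes reported by size(1).

/* density.c - toy example for comparing 32- vs 64-bit code density.
 * Build it twice and compare the text segment sizes:
 *   gcc -O2 -m32 -c density.c -o density32.o
 *   gcc -O2 -m64 -c density.c -o density64.o
 *   size density32.o density64.o
 * On typical x86 machines the 64-bit object comes out somewhat larger,
 * largely from REX prefixes and 8-byte pointers, though the gap is
 * usually well under the 2x you might naively expect.
 */
#include <stddef.h>

/* Pointer-chasing code like a linked-list walk tends to show the
 * difference more clearly than pure integer arithmetic does. */
struct node {
    struct node *next;
    long payload;
};

long sum_list(const struct node *n)
{
    long total = 0;
    while (n != NULL) {
        total += n->payload;
        n = n->next;
    }
    return total;
}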
--
A young idea is a beautiful and a fragile thing. Attack people, not ideas.