What Distro to use
Kevin Buettner
kev@primenet.com
Fri, 3 Nov 2000 12:21:23 -0700
On Nov 3, 3:21am, der.hans wrote:
> On 03. Nov, 2000, Kevin Buettner said:
>
> > So, there would've come a time when Red Hat (and other distros, if
> > they want use the latest/greatest gcc) would've had to bite the
> > bullet and break ABI compatibility with previous versions. Red
> > Hat's choice of gcc-2.96 means that they'll have to do it twice
> > which is something that probably should have been avoided.
> >
> > However, I think Red Hat would've been criticized no matter what they
> > did on this compiler issue. If they would've been more conservative
> > and stayed with egcs-2.91.66 which is getting rather long in the
> > tooth, they would've been criticized by C++ developers for not
> > releasing a more modern compiler which addresses their concerns.
>
> I'd think stability is more important :).
Sure. But if you can't build your mission-critical C++ application
(shudder) because the tools that you're trying to use are antiquated,
that's not much good either.
(I'm having trouble defending this decision since I don't really
agree with it.)
> Couldn't they have done a libc5-libc6 transitionary thingy?
FWIW, I'm running a number of applications compiled under RH6.X and
am not seeing any problems. As I understand it, the ABI incompatibility
is in C++ name mangling.
I would expect them to have bumped the version number on any shared
libraries where it made a difference. (ldd is a useful tool for
checking out which libraries and their versions an application will
try to use. FWIW, if ldd doesn't run, the problem is likely that the
dynamic linker's name has changed. If this happens, you can use
``objdump -s -j .interp program-name'' to see what it ought to be.
Sometimes, you can just create a symbolic link to your existing
dynamic linker and things will work again. Sometimes.)
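As a concrete sketch of those commands (/bin/ls is just an example binary, and the linker paths in the commented-out symlink are hypothetical -- check what .interp actually reports for your program first):

```shell
# Show which shared libraries (and versions) a binary will try to load.
ldd /bin/ls

# If ldd itself fails, the dynamic linker named inside the binary may
# no longer exist; this dumps the .interp section to see what it wants.
objdump -s -j .interp /bin/ls

# Sometimes a symlink from the expected name to your existing dynamic
# linker is enough (hypothetical paths -- check .interp output first):
# ln -s /lib/ld-linux.so.2 /lib/ld-linux.so.1
```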
In general, it's reasonable to expect that you'll be able to run
binaries from recent releases of the OS without problems. I.e.,
you should be able to run RH5.X or RH6.X binaries on RH7. But,
going the other direction won't work.
> What I mean is ship with egcs-2.91.66, but make gcc-2.96 available?
> This probably would've been tons more work, but I'd think it
> wouldn't be too bad. Not that I know anything about setting up
> multiple compiler environments on a single box :).
The only thing I can say about this is that things get pretty weird
when you change the ABI. This has happened to me a few times on IA-64
and life really improves once the OS guys rebuild everything... BTW,
you can protect yourself for certain critical applications such as the
shell by compiling it statically.
If the change to the ABI is minor, most things will just continue to
work as normal when you rebuild. But to be really safe, you need to
recompile all of the libraries that you're using.
> The user space kernel enhancements that are coming out will greatly
> increase the ease of such things for developers, right?
I don't follow. Explain?
> The other question is whether or not they needed to build packages with
> 2.96 or whether they could've (and maybe did) stayed with 2.91? Are there
> compatibility probs with that?
I think they compiled nearly everything with 2.96.
The kernel is still being compiled with egcs-2.91.66. The binary is
called ``kgcc''. Those of you out there who are building your own
kernels on RH7 should ALWAYS be using kgcc. (The kernel is very
sensitive to which compiler is used; apparently newer versions of gcc
will miscompile the kernel on certain architectures. Actually, the
only architecture that I know of that needs a really old gcc is x86.)
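On RH7 that boils down to overriding CC when building the kernel; a sketch, assuming the stock kernel source location (these commands only make sense on a Red Hat 7 box with kgcc installed):

```shell
# Build the kernel with egcs-2.91.66 (installed as kgcc on RH7)
# instead of the default gcc-2.96.
cd /usr/src/linux
make CC=kgcc dep bzImage modules
```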
> Not trying to second-guess RedHat, rather just interested in the tech. You
> mention that changes are definitely happening, so those of us who use
> other dists are gonna have to go through this at some point as well. Those
> using RH are also going to have to go through it again due to the changes
> you mention in 3.0.
Yep. Hopefully, there'll be fewer bumps in 3.0 since it'll be an
FSF sanctioned release and will have gone through a lot more testing
before being released.
> If we better understand it, we might be better prepared to deal with it,
> which is especially important for those who have management that isn't pro
> Linux.
Yeah. If you want a rock solid stable OS, you shouldn't be living on
the bleeding edge. Generally, the best thing to do when there are
major changes like this (kernel, compiler, libraries, etc.) is to let
other folks be the guinea pigs. Right now for Red Hat, that means you
should continue to run 6.2 with patches and let other folks find the
problems in Red Hat 7. But most of you know all of this already; .0
releases of anything are generally less stable than the ones that
follow.
I'm running Red Hat 7 on one of my machines to gain some confidence in
it. The only problem that I've had is with my scanner, but since I'm
running a bleeding edge kernel, I don't know where to place the blame.
(I'm seeing SCSI timeouts with it. OTOH, if I run a version of
scanimage that I built myself on 6.2, it starts working - for a while;
and then I'll see the SCSI timeouts again. Last night, I gave up
trying to figure out the SCSI timeouts and hooked it up via the USB
port instead. I'd never used USB before and it took me a while to
figure out which modules needed to be loaded and how to reconfigure
the SANE configuration file for my HP scanner, etc. Anyway, it's
working again.)
Kevin