Linux segmentation fault message

Carruth, Rusty Rusty.Carruth at smartstoragesys.com
Sat Jun 22 12:41:05 MST 2013


No, I didn't say it couldn't find the interpreter; I said it found the WRONG one (or at least a different one), which behaves differently from the one he's using to test on.

Also, on Solaris you probably have word-alignment issues you didn't have on the other machine.  Solaris (at least on SPARC) has some pretty strict requirements that pointers be correctly aligned.

(IIRC, x86-style processors will handle a misaligned access, just more slowly than a correctly-aligned one.)

E.g., if your memory address is 0x1001 as a byte address, but your computer requires word alignment (say, 16-bit words), trying to fetch a word from that address WILL cause a segv (AFAICR).  That is:

int *foo = (int *)0x1001;  /* odd byte address: misaligned for a word */
printf("My foo (%p), on a byte alignment, will segv here on a word-aligned machine: %d\n", (void *)foo, *foo);



-----Original Message-----
From: plug-discuss-bounces at lists.phxlinux.org on behalf of Matt Graham
Sent: Sat 6/22/2013 12:26 PM
To: Main PLUG discussion list
Subject: RE: Linux segmentation fault message 
 
>> 'qsearch' works fine on my local computers, but when I try to run it
>> on my web host I see this error message:
>> ~(location) line 6: 26955 Done
>>        fgrep -y "$name1" q-hid
>>      26956 Segmentation fault | fgrep -y "$name2" > tempz
From: "Carruth, Rusty"
> you left off the first line:
> #!/bin/bash

True, but it should've died immediately if it couldn't find the interpreter,
not a few lines down.
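For what it's worth, a missing or wrong interpreter fails before any line of the script runs, which is easy to demonstrate with a deliberately bogus shebang (paths here are just for illustration):

```shell
# Create a script whose interpreter doesn't exist, then try to run it.
printf '#!/no/such/interp\necho hi\n' > /tmp/badshebang.sh
chmod +x /tmp/badshebang.sh
/tmp/badshebang.sh
# The shell reports something like:
#   /tmp/badshebang.sh: /no/such/interp: bad interpreter: No such file or directory
# and "echo hi" never executes -- so a segfault several lines in means
# the script DID start under some interpreter.
```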

Is this a shared server, or a VPS?  I ask mostly because a long long time ago,
I had a little C program that malloc()ed an array, filled it with random
numbers, and ran a set of tests on it.  This program always segfaulted after a
few iterations when run on the shared Solaris machines.  On my home box, it
always ran to completion.  Same C, same gcc invocation.  The only thing that
made any sense was that the little program was consuming too much RAM or CPU
and getting sent a SIGSEGV by whatever "prevent runaway user processes from
eating everything" system they had set up.  Using nice on it didn't seem to
help, and I was malloc()ing a few hundred K at most.

