core files
Kevin Buettner
kev@primenet.com
Thu, 18 May 2000 16:48:05 -0700
On May 18, 7:27pm, Thomas, Mark wrote:
> I need to tell Bash to save a core file when an application segfaults. Can
> someone tell me how this is done?
ulimit -c somevalue
E.g., "ulimit -c 20000" will allow you to create roughly 20MB core
files (bash counts the limit in 1024-byte blocks).
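As a quick sketch of the above (the 20000 figure is just an example
value; run in a subshell here so your login shell's limits are left
alone):

```shell
# Allow core files of up to 20000 blocks -- bash counts the limit in
# 1024-byte blocks, so this is roughly 20MB. The subshell keeps the
# change from affecting the invoking shell.
(
  ulimit -c 20000
  ulimit -c          # show the new limit
)
```
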
On several of my systems, the installed bash is actually bash-2.X. If
this is the case, you'll need to modify your scripts in /etc to use
"ulimit -S -c 0" instead of "ulimit -c 0". (On my systems, the
critical file that needs to be modified is /etc/rc.d/init.d/functions.)
The -S specifies a soft limit. If you don't use -S, the hard limit
gets set for some of the daemons which run early on, and that limit is
in turn inherited by your login process. If this happens, you (as a
normal user) will not be able to raise the core limit setting with
ulimit.
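The soft/hard distinction can be seen directly from the shell (the
values below are arbitrary; the subshell keeps the experiment from
touching your login shell's limits):

```shell
(
  ulimit -H -c 1000   # hard limit: a ceiling that an unprivileged
                      # user can lower but never raise again
  ulimit -S -c 0      # soft limit: the value actually enforced
  ulimit -S -c 500    # fine: raising the soft limit back up is
                      # allowed as long as it stays under the hard limit
  ulimit -S -c 2000   # fails: a normal user can't exceed the hard limit
)
```

This is exactly why the init scripts matter: once a script runs plain
"ulimit -c 0" (hard limit), every process it spawns is stuck at zero.
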
If you're using a stock Red Hat system, you won't have this problem
yet. One of the OS engineers tells me that if they install bash-2.X
at all, they will install it as bash2 because too many other scripts
break. So don't expect to see bash-2.X installed as bash on Red Hat
systems any time soon.
The other thing worth mentioning on this issue is that on many systems
/etc/profile contains a ulimit command which sets the default ulimit
for all users. You'll want to check to see if your /etc/profile does
or not, and if so what it is set to. Finally, you'll want to decide
whether the use of the -S switch in this setting is appropriate or
not. (I think it is, but there may be times when you don't want to
allow users to create core dumps at all.)
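A quick way to audit this on a given machine (file locations vary by
distribution, so /etc/profile here is just the usual suspect):

```shell
# See whether /etc/profile sets a limit (grep exits nonzero if not):
grep -n ulimit /etc/profile

# And check what your current session actually ended up with:
ulimit -S -c   # soft limit
ulimit -H -c   # hard limit
```
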
Kevin
--
Kevin Buettner
kev@primenet.com, kevinb@redhat.com