df report

sinck@ugive.com
Mon, 11 Dec 2000 07:49:11 -0700


\_ The fastest way to regain space from a big file is to null it out, e.g.
\_ 
\_ '>/the/file'
\_ 
\_ After that, remove it. This also works if you have w perm on the file but
\_ not on the dir it's in.

This also has the advantage of not changing the inode (?) so that if
the file is still open as a log file, the logging process doesn't
notice any difference unless it checks.  I discovered this on an
httpd error_log that I rm'd out from under an NS server.  Suddenly all
the error problems went away... even the ones I forced.  Once the
server got kicked, it was back....
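
A rough sketch of the difference, assuming a Bourne-style shell and a
made-up path:

    ls -i /var/log/httpd/error_log   # note the inode number
    > /var/log/httpd/error_log       # truncate in place: 0 bytes, same inode,
                                     # so an open writer just keeps logging
    ls -i /var/log/httpd/error_log   # same inode as before; safe to rm now

    # Plain rm only unlinks the name; a process that still has the file open
    # keeps the inode and its blocks until it exits, so df shows nothing back.
    rm /var/log/httpd/error_log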

\_ Check for .. files that don't point to a parent dir.
oo oo... how would you get those?  Directly write to the dir file
yourself?  Or are those only legit in the vicinity of mount points?
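
If you wanted to go looking, one sketch (no promises about which layer the
kernel answers from) is to compare inode numbers; they should agree
everywhere except across a mount point:

    ls -di /some/dir/..   # inode that '..' resolves to
    ls -di /some          # inode of the actual parent; the two should match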


\_ Check for dirs that are large, e.g. probably anything greater than 4096 bytes,
\_ as they are a probable indication of a bunch of little files.
Well, depending on the file system, which I know enough about to use
:-), it could be that an arbitrarily large file has recorded
lots of directory information (block #1, ..., block #4e46, ...) in
the dir file and made it grow without bounds.
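
Either way, hunting for the bloated ones is easy enough; a sketch, assuming
a find that takes the 'c' (bytes) suffix and -ls:

    # Directories whose own size is more than one 4096-byte block have held
    # a lot of entries at some point; -xdev keeps this to one filesystem.
    find / -xdev -type d -size +4096c -ls 2>/dev/null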

I remember my good CS advisor (as opposed to the bad one) saying that in
the good ol' days it was common practice to abuse someone else's
account by creating and removing files in one of their dirs.  The dir
file would grow with entries (since it didn't/doesn't get truncated or
have its empty slots filled in), but there would never be any files and
the person would be over their hard quota.  :-)
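
You can still watch this happen on plenty of filesystems; a throwaway
demonstration (the directory name and file count are arbitrary):

    mkdir /tmp/dirbloat && cd /tmp/dirbloat
    ls -ld .        # freshly made: one block, e.g. 4096 bytes
    i=0; while [ $i -lt 5000 ]; do touch junk.$i; i=`expr $i + 1`; done
    rm junk.*       # take every file away again
    ls -ld .        # the dir file itself typically stays just as large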

I also seem to remember a TA logging in to my class account once
and doing something evil like 'chmod -R a-rwx ~'.  That makes
everything real, real hard to cope with when you're just learning.
:-)

David