To delete all the files in a directory, when there are a large number of
files, but you don't want to delete any sub-directories or the files in them:

  ls | xargs rm    # rm will fail on directories, because you did not use "-r"

Need to be selective:

  find . -maxdepth 1 -name "<pattern>" | xargs rm
  # or
  find . -maxdepth 1 -name "<pattern>" -exec rm "{}" ";"   # Slower

(A fuller sketch for EBo's *.pgm case follows at the end of this message.)

Don't care about subdirectories and the files in them too:

  cd ..
  rm -r <dir>    # Probably really want "-rf", it's quieter, but more dangerous.

Want to be "quick" about it:

  cd ..
  mv <dir> <dir>.old    # "mv" is fast (on the same file system), "rm" is slow
  rm -rf <dir>.old &    # removing in the background, you can now go do other
                        # stuff, including creating a new <dir> and using it.

Good Luck.

Bob.

On Sat, 22 Jun 2002 12:05:04 -0700
plug-discuss-request@lists.plug.phoenix.az.us wrote:

>
> Message: 3
> Date: Thu, 20 Jun 2002 21:55:30 -0700
> From: Austin Godber
> To: plug-discuss@lists.plug.phoenix.az.us
> Subject: Re: oops... to many files and can't clean up...
> Reply-To: plug-discuss@lists.plug.phoenix.az.us
>
> Perhaps it was a shell limitation, since the shell is what is
> responsible for the wildcard expansion. Sounds like most of the
> problems you had were when you used *.
>
> Austin
>
> "John (EBo) David" wrote:
> >
> > sorry to reply to my own message, but a quick update...
> >
> > I went to another xterm and tried to get a second look at things.
> > From there I was able to see and remove the rest of the files (no
> > idea what happened to that one xterm to muck with ls/rm). Anyway,
> > things are cleaned up, and that even seemed to fix things in the
> > original xterm that was causing the problems...
> >
> > I may well still reboot and FSCK... Any idea what could have
> > screwed up? I remember that there used to be a limit of 10,000
> > files/directory or inode. That is why I was originally concerned
> > with having more than 2.5 times that in a single directory.
> >
> > EBo --
> >
> > "John (EBo) David" wrote:
> > >
> > > ummm....
> > >
> > > I have a unit and regression test suite for my ecological modeling
> > > virtual machine. I needed to bump up one of the tests to run for
> > > a longer time for model testing. Problem was that I forgot that I
> > > am creating an image dump for *every* variable specified on each
> > > and every iteration... start_time=0, stop_time=25, dt=0.01... that
> > > is 2,500 images for umm... looks like 8 variables, and there are
> > > 15 other unit tests...
> > >
> > > So now I find that I have over 25,000 files in a single directory.
> > > Oops. Ok, off to clean them up....
> > >
> > > First, ls and rm complain that there are too many files to "rm
> > > *.pgm", so I go through and delete them by group name... ok,
> > > appears to go ok. Now I am finally able to "rm *.pgm" so they
> > > should be cleaned up. Problem is that once I do that I still have
> > > hundreds of pgm files in the directory that "ls" reports, but an
> > > "ls *meta_pop*" does not. I am afraid that I have corrupted the
> > > file system or something.
> > >
> > > any suggestions?
> > >
> > > thoughts:
> > >
> > > shut down the machine, reboot single user, fsck every partition
> > > (including XFS partitions), and recite some prayer to Boolean...
> > >
> > > other ideas, thoughts, intuitions as to what happens when creating
> > > tens of thousands of files in a single directory by accident?
> > >
> > > EBo --
> >
> > ________________________________________________
> > See http://PLUG.phoenix.az.us/navigator-mail.shtml if your mail
> > doesn't post to the list quickly and you use Netscape to write mail.
> >
> > PLUG-discuss mailing list - PLUG-discuss@lists.plug.phoenix.az.us
> > http://lists.PLUG.phoenix.az.us/mailman/listinfo/plug-discuss
>
> --
> Austin Godber
> godber@asu.edu
> Rotten Philomathian

--
Robert A. Klahn    robert@kint.org    AIM: rklahn

"Hope has two beautiful daughters: Anger and Courage. Anger at the way
things are, and Courage to struggle to create things as they should be."
 -- St. Augustine
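
P.S. For EBo's specific mess (the stray *.pgm images), here is a minimal
sketch of the selective approach above. It assumes GNU find/xargs, and the
directory path is made up for illustration:

  cd ~/tests/output    # hypothetical directory holding the 25,000 images

  # Count what would go first; find prints the names itself, so the shell
  # never expands "*.pgm" and can't hit "argument list too long":
  find . -maxdepth 1 -type f -name '*.pgm' | wc -l

  # Remove them; -print0 / -0 keeps filenames with spaces or newlines safe,
  # and -type f guarantees no directory is ever handed to rm:
  find . -maxdepth 1 -type f -name '*.pgm' -print0 | xargs -0 rm --

The -print0/-0 pair is a GNU extension; on a plain POSIX find, the -exec
form shown earlier does the same job, just more slowly.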