Filesystem Optimization
Nathan (PLUGAZ)
plugaz at codezilla.xyz
Thu Feb 6 12:27:39 MST 2020
I realize ext4 does not fragment easily, but when you have a large
volume with lots of files of differing sizes, how can you optimize it?
I have a 2TB mirrored array holding hundreds of thousands of files
smaller than 12KB, hundreds of files larger than 1MB, and of course
lots of movies and such, which run 1 to 4GB each. Over the years it
has gotten really slow.
I have a shell script that basically runs rsync against my home
directory and pushes it to a specific folder on my file server (part of
this 2TB array).
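Roughly, the script boils down to something like this (the hostname
and paths here are placeholders, not my real layout):

  #!/bin/sh
  # Nightly mirror of my home directory to the file server.
  # -a preserves permissions/ownership/timestamps; --delete keeps
  # the backup in sync by removing files I've deleted locally.
  rsync -a --delete /home/myuser/ remote:/backup/myuser/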
Typically the script runs in the wee hours when I'm asleep. But the
other day I decided to run it manually just to watch it and see what
happened. It was horrendously slow!
I tried timing it. I ran

  time { rsync -av /home/myuser/.cache/ remote:/backup/dir/.cache/; }

and after 75 minutes I cancelled it. There are 46k files in that
folder, totaling roughly 2GB... 75 minutes and it wasn't finished.
Now this is running over an NFS link, just FYI.
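Since the transfer rides over NFS, the equivalent test against the
mounted export would look like this (the mount point is a placeholder
for wherever the share actually lives):

  # Time the same copy against the NFS-mounted backup path.
  time rsync -av /home/myuser/.cache/ /mnt/backup/dir/.cache/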
So I created a 4GB tmpfs, mounted it where I needed, and ran my timed
backup again: it took 2 minutes and 6 seconds. Obviously my network
is not the issue.
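For the curious, the tmpfs test was nothing fancy; on the server side
it was roughly this (again, the destination path is a placeholder),
followed by rerunning the same timed rsync from my machine:

  # Back the destination with a 4GB RAM filesystem so writes land
  # in memory instead of on the array, taking the disks out of the
  # picture while the data still crosses the network.
  mount -t tmpfs -o size=4g tmpfs /backup/dir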
So today I'm trying to find places to store 2TB of data so I can
rearrange things, but I'm wondering...
Is there a program that watches and optimizes the placement of files
on a hard drive? I know these exist for Windows, but for Linux?