Hi, Michael.
Michael Havens wrote:
> That went right over my head!
>
> On Friday 16 January 2004 11:58 pm, Ted Gould said:
> . . .
> http://www.ibiblio.org/pub/Linux/docs/HOWTO/other-formats/html_single/Partition.html#FRAGMENTATION
Here are some comments that might help a little:
Disks are broken up into numbered chunks or blocks of some
fixed size.
A directory translates a file name into a list of
block addresses, so you can read through the file.
The operating system (OS) maintains the disk directory,
and also a list of all the available chunks of storage.
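Just to make that concrete, here is a toy sketch in Python
(made-up names, not how any real filesystem stores things):
the directory is basically a map from file names to block
numbers, and the free list is the set of blocks nobody is
using yet.

    # directory: file name -> ordered list of block numbers
    directory = {"notes.txt": [7, 8, 42]}
    # free list: block numbers still available for new data
    free_blocks = {0, 1, 2, 3, 9, 10}

    def read_file(name, disk):
        # read a file by visiting its blocks in order
        return b"".join(disk[b] for b in directory[name])

Here 'disk' would just be a list of fixed-size chunks of
bytes.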
When a program goes to create a file, the OS builds a
directory entry and assigns an initial area of storage.
If that area gets filled, the OS adds another chunk --
somewhere on the disk. Those chunks can end up scattered all
over, so the disk head has to jerk back and forth, seeking
out the right tracks, as you read through the file.
Windows systems allocate one chunk at a time. I gather from
the article that a Unix/Linux system allocates a bigger
space, so the chances are better that the whole file will
fit in that space -- in a consecutive series of chunks.
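If you want to picture the difference, here are two toy
allocators in Python (pure illustration, neither is what FAT
or ext2 actually does): one just grabs whichever free block
comes next, one at a time; the other tries to reserve a
consecutive run up front.

    def allocate_one_at_a_time(free_blocks):
        # grab whatever free block happens to come first;
        # a growing file ends up scattered over the disk
        return min(free_blocks)

    def allocate_consecutive(free_blocks, want):
        # look for a run of 'want' consecutive free blocks,
        # so the whole file can sit in one contiguous stretch
        for start in sorted(free_blocks):
            run = list(range(start, start + want))
            if all(b in free_blocks for b in run):
                return run
        return None   # no gap big enough; scatter instead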
If your computer has lots of memory, it can keep a lot of
disk blocks in memory, so you can read them without
going to the disk. This is called buffering or caching.
Unix does this better than MS.
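Buffering boils down to: before going to the disk for a
block, check whether you already read it earlier. Something
like this little sketch (the 'disk.read' call is just a
stand-in, not a real API):

    cache = {}   # block number -> data already in memory

    def read_block(blocknum, disk):
        if blocknum in cache:        # hit: no disk access
            return cache[blocknum]
        data = disk.read(blocknum)   # miss: slow trip to disk
        cache[blocknum] = data       # remember it for later
        return data

The more memory you have, the more blocks can sit in that
cache.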
On any system, when a program finishes writing a file, the
last block will on average be only about half used, and
since the directory works in terms of whole blocks, there is
no way to let another file use that wasted space. (Some file
systems have a way for two files to share a tail block, but
most don't.)
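The arithmetic is easy to check. Say the blocks are 4096
bytes (a common size, though yours may differ):

    import math

    block_size = 4096              # assuming 4 KB blocks
    file_size = 10000              # a 10,000-byte file, say
    blocks = math.ceil(file_size / block_size)   # -> 3
    wasted = blocks * block_size - file_size     # -> 2288

That last 2288 bytes of the third block is dead space no
other file can use.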
If you have a million tiny files using a million big blocks,
then those blocks are mostly wasted. If you have small
blocks, then the system has to fetch lots of them to get
through a big file, which can be inefficient. So if you
really need to optimize, then block size is important.
But most of us don't need to worry about that.
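If you do want to see the trade-off in numbers, you can play
with something like this (the file sizes are invented, just
to show the shape of it):

    import math

    files = 1000000
    avg_file = 200      # pretend files average 200 bytes

    for block_size in (512, 4096, 65536):
        per_file = math.ceil(avg_file / block_size) * block_size
        wasted = files * (per_file - avg_file)
        print(block_size, "byte blocks waste about",
              wasted // (1024 * 1024), "MB")

Bigger blocks waste more space on tiny files, but they mean
fewer fetches when you read a big file straight through.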
HTH,
Vic