SCSI & storage

Kevin Buettner kev@primenet.com
Fri, 23 Mar 2001 01:22:58 -0700


On Mar 22,  1:47pm, Kevin Buettner wrote:

> If anyone's interested, I could also run these tests on the same box
> using NetBSD 1.5, Solaris 8, and Unixware 7.  It might be
> interesting to see how the commercial OSes do...

Well, as it turned out, I was interested in seeing whether other
implementations of Unix use memory to cache file data, and how well
those OSes make use of that cache.  Below are my findings...

...........................................................................

The table below shows the time (in seconds) taken to run the following
command back-to-back (i.e. twice in a row) on various OSes:

    time find linux-2.4.2 -type f -print | xargs wc > /dev/null

This test measures how well the OS uses extra memory to cache file
data.  The test data are the sources of the stock Linux 2.4.2 kernel
available from ftp.kernel.org; they take up roughly 108MB of disk
space.
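
For anyone who wants to reproduce this, the recipe is roughly as
follows (the /scratch path is just a placeholder; any partition with
about 110MB free will do, and on systems whose tar lacks the z flag,
pipe through gzip -dc instead).  The second invocation is the one
that can be satisfied from the cache:

    # unpack the test data, then run the same pipeline twice in a row
    cd /scratch                        # placeholder path
    tar xzf linux-2.4.2.tar.gz         # sources from ftp.kernel.org
    time find linux-2.4.2 -type f -print | xargs wc > /dev/null
    time find linux-2.4.2 -type f -print | xargs wc > /dev/null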

The hardware is a 1.1GHz Athlon with 768MB of memory.  There are two
disks in the machine, both 7200 RPM EIDE drives made by Western
Digital.
One drive is 30GB; the other is 40GB.  The 40GB drive was used for the
Solaris 8 and NetBSD 1.5 tests; the 30GB drive was used for the other
tests.

OS              | Real #1 | User #1 |  Sys #1 | Real #2 | User #2 |  Sys #2 |
----------------+---------+---------+---------+---------+---------+---------+
RH Wolverine    |   19.85 |    1.43 |    0.87 |    1.65 |    1.21 |    0.44 |
----------------+---------+---------+---------+---------+---------+---------+
FreeBSD 4.2 (1) |   21.46 |    1.59 |    1.30 |    7.24 |    1.52 |    0.74 |
----------------+---------+---------+---------+---------+---------+---------+
FreeBSD 4.2 (2) |   31.00 |    1.43 |    1.25 |    4.93 |    1.45 |    0.75 |
----------------+---------+---------+---------+---------+---------+---------+
NetBSD 1.4      |   22.87 |    1.19 |    1.06 |   22.38 |    1.19 |    1.07 |
----------------+---------+---------+---------+---------+---------+---------+
NetBSD 1.5      |   26.38 |    1.21 |    1.07 |   24.61 |    1.17 |    0.96 |
----------------+---------+---------+---------+---------+---------+---------+
UnixWare 7      |   40.86 |    1.58 |   17.26 |    2.40 |    1.28 |    1.02 |
----------------+---------+---------+---------+---------+---------+---------+
Solaris 8       |   55.51 |    1.92 |    2.09 |    2.94 |    1.82 |    1.04 |
----------------+---------+---------+---------+---------+---------+---------+

Notes:
 (1) Using ext2fs for the filesystem
 (2) Using ufs for the filesystem

uname -a data
-------------
Wolverine:
Linux mesquite 2.4.1-0.1.9 #1 Wed Feb 14 22:15:15 EST 2001 i686 unknown

FreeBSD 4.2:
FreeBSD mesquite.lan 4.2-RELEASE FreeBSD 4.2-RELEASE #2: Sun Mar 11 11:18:15 MST 2001     root@mesquite.lan:/usr/src/sys/compile/MESQUITE  i386

NetBSD 1.4:
NetBSD mesquite 1.4 NetBSD 1.4 (GENERIC) #0: Fri May  7 12:27:31 PDT 1999     perry@cynic.cynic.net:/usr/src/sys/arch/i386/compile/GENERIC i386

NetBSD 1.5:
NetBSD mesquite.lan 1.5 NetBSD 1.5 (GENERIC) #1: Sun Nov 19 21:42:11 MET 2000     fvdl@sushi:/work/trees/netbsd-1-5/sys/arch/i386/compile/GENERIC i386

UnixWare 7:
UnixWare mesquite 5 7.1.1 i386 x86at SCO UNIX_SVR5

Solaris 8:
SunOS mesquite 5.8 Generic_108529-03 i86pc i386 i86pc

Additional notes and observations
---------------------------------

With the exception of FreeBSD 4.2, I'm using the stock kernel from the
install of each OS.  I have not done any system tuning.  (The FreeBSD
kernel was recompiled in order to get ext2fs support.)
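
For completeness: getting ext2fs into the FreeBSD 4.x kernel is a
one-line config change plus a rebuild.  Roughly the following, using
the MESQUITE config name from the uname output above (this is a
sketch of the usual config(8) procedure, not an exact transcript):

    # added to /usr/src/sys/i386/conf/MESQUITE
    options         EXT2FS          # ext2fs (Linux ext2) support

    # then rebuild and install the kernel
    cd /usr/src/sys/i386/conf
    config MESQUITE
    cd ../../compile/MESQUITE
    make depend && make && make install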

The "Real #1" numbers should be taken with a grain of salt.  The
machine in question is a multi-boot machine and I had to copy over the
test data to the partition where the OS resides.  It is well known
that different portions (usually inner tracks vs. outer tracks) of a
disk perform differently.  (My understanding is that the outer tracks
give better performance and that the difference in performance between
the inner and outer tracks can be quite dramatic.  See
www.storagereview.com for more info.)
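
If you want to see the inner vs. outer track effect on your own
hardware, a crude way is to time a raw sequential read from the start
of the disk against one from near its end.  The device name and
offsets below are only illustrative (they assume a ~30GB EIDE drive
under Linux); read the raw device, not a file in a mounted filesystem:

    # outer tracks: read 256MB from the start of the disk
    dd if=/dev/hda of=/dev/null bs=1024k count=256
    # inner tracks: read 256MB starting roughly 28GB into the disk
    dd if=/dev/hda of=/dev/null bs=1024k count=256 skip=28000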

The important number for this test is the "Real #2" value.  The above
test shows that Linux, FreeBSD, UnixWare 7, and Solaris 8 all use
memory to cache filesystem data.  Neither of the NetBSD releases I
tested appears to use memory as a cache, since the wall clock time
for the second run was virtually identical to the first.  Also,
FreeBSD doesn't do as good a job as one might expect, though it does
slightly better when using its own native filesystem (UFS).  It's
interesting to note that, in the first run, FreeBSD completed roughly
9.5 seconds faster on an ext2 filesystem than on a UFS filesystem.
(But again, see the caveat in the previous paragraph; this difference
could be entirely due to where the data resides on disk.)

I repeated some of the more surprising tests (in particular, the Linux
and FreeBSD tests) more than once and obtained virtually identical
results in the subsequent trials.  I should also note that each trial
(consisting of the two back-to-back tests) was run immediately after a
fresh boot of the OS under consideration.  The reason for doing this
was to make sure that as much of the machine's memory as possible was
free to use for caching file data.
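
As a sanity check that the cache really is cold before the first run,
something like the following works on the Linux side (vmstat serves a
similar purpose on the BSDs and Solaris).  The "buffers"/"cached"
figures should be small right after boot and should grow by roughly
the size of the tree (~108MB) once the first pass has completed:

    free -m        # right after boot: cache should be nearly empty
    time find linux-2.4.2 -type f -print | xargs wc > /dev/null
    free -m        # "cached" should now have grown by ~100MB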

Comments?

Kevin