I recently installed a pair of 45GB IDE drives on one of my machines.
I'm using the RAID-1 support that Linux provides to mirror some of the
partitions between the drives.
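For reference, a two-disk mirror like this is described by an /etc/raidtab
entry along these lines. This is the general shape from the Software RAID
HOWTO, not a paste of my actual config:

raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              4
        device                  /dev/hde5
        raid-disk               0
        device                  /dev/hdg5
        raid-disk               1

You then run mkraid /dev/md0 and the kernel starts mirroring.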
Over the weekend, I played around with the hdparm utility. I'd never
looked closely at it before because I'd never owned a drive that I
could use it on. For those of you who don't know, it is used to tune
the performance of an IDE drive on Linux. I tried tuning my IDE
drives with this utility, but did not succeed in improving their
performance. (I attribute this to the fact that I've
enabled CONFIG_IDEDMA_PCI_AUTO in the kernel; this is supposed to
detect whether or not the devices in question can do DMA and use it if
they can. Apparently, it is working very well.)
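For the curious, the knobs I was twiddling were the usual hdparm ones; a
typical tuning session looks something like this (these are the standard
flags, not a transcript from my machine):

/sbin/hdparm /dev/hde              # show the drive's current settings
/sbin/hdparm -d1 -c1 /dev/hde      # turn on DMA and 32-bit I/O support
/sbin/hdparm -m16 -u1 /dev/hde     # multi-sector transfers, IRQ unmasking

With the kernel already auto-enabling DMA, none of this made a measurable
difference for me.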
In addition to tuning, hdparm provides a rudimentary benchmarking
facility so you can tell whether your tuning has had any effect. But it can
also be used to compare the performance of SCSI and IDE drives.
(David Sinck recently sent a message to the PLUG list which inspired
me to try this.)
I used hdparm to test the drives on my dual PIII 550 box. This
machine has both Ultra2 SCSI (supposedly capable of 80MB/sec) and a
pair of IDE drives driven by a Promise Ultra66 IDE controller card
(supposedly capable of 66MB/sec). For the IDE drives, each disk is
driven by a separate controller for both performance and reliability.
Also, there is only one disk per controller (as recommended by the
Software RAID HOWTO). Here are the results:
[root@saguaro kev]# /sbin/hdparm -tT /dev/sda /dev/hde /dev/md0
/dev/sda:
Timing buffer-cache reads: 128 MB in 1.02 seconds =125.49 MB/sec
Timing buffered disk reads: 64 MB in 5.05 seconds = 12.67 MB/sec
/dev/hde:
Timing buffer-cache reads: 128 MB in 1.03 seconds =124.27 MB/sec
Timing buffered disk reads: 64 MB in 2.64 seconds = 24.24 MB/sec
/dev/md0:
Timing buffer-cache reads: 128 MB in 0.90 seconds =142.22 MB/sec
Timing buffered disk reads: 64 MB in 2.69 seconds = 23.79 MB/sec
(I've run this test a number of times and there's not much variation
in the results.)
According to the hdparm manpage, the buffer-cache read numbers are
"essentially an indication of the throughput of the processor, cache,
and memory of the system under test." I don't know why the number was
so much higher for the md device. (This is the RAID-1 device which
represents the mirrored /dev/hde5 and /dev/hdg5 partitions.)
Anyway, the interesting thing about the above numbers is that the IDE
drives turn in nearly twice the buffered disk read throughput of the
SCSI drive (around 24 MB/sec versus about 12.7 MB/sec).
When using the machine, it feels that way too...
Here's another test. I created a 64MB file with
saguaro:tst-scsi$ dd if=/dev/urandom of=foo bs=64k count=1024
saguaro:tst-scsi$ ls -l foo
-rw-r--r-- 1 kev 204 67108864 Oct 17 22:25 foo
Now we time a copy and a sync; I have 1GB of memory on this machine
so we need to make sure that the data actually gets written to disk.
I do a sync first to be (reasonably) sure that I'm only measuring the
time it takes to write the file in question to disk. tst-scsi is a
directory on my SCSI disk and tst-ide is a directory on my RAID-1
disk...
saguaro:tst-scsi$ df .
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/sda7 14356348 12047936 1561792 89% /home
saguaro:tst-scsi$ cd -
/saguaro1/tst-ide
saguaro:tst-ide$ df .
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/md1 17408068 6653804 9869976 41% /saguaro1
saguaro:tst-ide$ cd -
/home/kev/tst-scsi
Okay, now for the timings. First SCSI...
saguaro:tst-scsi$ rm -f bar; sync; time cp foo bar; time sync
real 0m1.075s
user 0m0.020s
sys 0m1.020s
real 0m6.666s
user 0m0.000s
sys 0m0.560s
saguaro:tst-scsi$ rm -f bar; sync; time cp foo bar; time sync
real 0m1.017s
user 0m0.000s
sys 0m1.020s
real 0m6.426s
user 0m0.000s
sys 0m1.420s
And now IDE (which is actually a write to two IDE drives in parallel,
so to speak, since we're doing RAID-1)...
saguaro:tst-ide$ rm -f bar; sync; time cp foo bar; time sync
real 0m1.011s
user 0m0.020s
sys 0m0.860s
real 0m3.204s
user 0m0.000s
sys 0m0.810s
saguaro:tst-ide$ rm -f bar; sync; time cp foo bar; time sync
real 0m0.812s
user 0m0.000s
sys 0m0.820s
real 0m3.737s
user 0m0.000s
sys 0m0.790s
The SCSI disk averaged 7.592 seconds to do the copy and sync. The IDE
disk(s) averaged 4.382 seconds. Remember that for the latter operation, the
OS has to write to *both* disks which form the mirror. This comes out
to 8.43 MB/sec for my SCSI disk and 14.61 MB/sec for the IDE disk
array. (I should try to find a way to write to just one disk to see
how much performance the RAID-1 is costing me.)
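One hypothetical way to measure that: repeat the same copy-and-sync timing
against a filesystem that lives on only one of the IDE drives, e.g. a
spare, non-mirrored partition. Something like this, where /dev/hde7 is a
made-up spare partition:

mke2fs /dev/hde7                   # scratch filesystem on one drive only
mount /dev/hde7 /mnt/single
cd /mnt/single
rm -f bar; sync; time cp /home/kev/tst-scsi/foo bar; time sync

Comparing that against the RAID-1 numbers would show what the second write
costs.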
There is also a read operation (cp has to read foo), but I don't think
it's hitting any of the disks. The 64MB test file easily fits in the
disk cache of this machine, so we're essentially reading it out of
memory. Thus the above test is really a measure of how fast we can
copy a file out of cache (i.e., memory) and write it to disk.
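To make the read side actually hit the platters, I'd have to defeat the
cache somehow. Two approaches I haven't tried yet: use a test file
comfortably bigger than the 1GB of RAM, or unmount and remount the
filesystem between writing the file and reading it back. Roughly:

dd if=/dev/zero of=big bs=64k count=24576   # ~1.5GB test file, bigger than RAM
sync
time cat big > /dev/null                    # this read can't all come from memory

or, for the 64MB file:

umount /saguaro1; mount /saguaro1           # remount to drop the cached copy
time cat /saguaro1/tst-ide/foo > /dev/null

Either way the read would measure the disks rather than the buffer cache.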
At this point, you're probably thinking my SCSI disk is misconfigured;
I've wondered about this myself. (And am still wondering...) However,
in the boot messages (from /var/log/messages), I see:
SCSI subsystem driver Revision: 1.00
(scsi0) <Adaptec AIC-7890/1 Ultra2 SCSI host adapter> found at PCI 0/14/0
(scsi0) Wide Channel, SCSI ID=7, 32/255 SCBs
(scsi0) Downloading sequencer code... 392 instructions downloaded
scsi0 : Adaptec AHA274x/284x/294x (EISA/VLB/PCI-Fast SCSI) 5.2.1/5.2.0
<Adaptec AIC-7890/1 Ultra2 SCSI host adapter>
Vendor: HP Model: C6270A Rev: 3846
Type: Processor ANSI SCSI revision: 02
(scsi0:0:6:0) Synchronous at 80.0 Mbyte/sec, offset 31.
Vendor: QUANTUM Model: QM318000TD-SW Rev: N491
Type: Direct-Access ANSI SCSI revision: 02
Note the 80.0 Mbyte/sec line. I suppose it's possible that something
is still screwed up, but if so, it's also screwed up for my other pure
SCSI machine from Dell because I'm seeing similar results (slightly
better, but not by much) there as well.
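For anyone who wants to sanity-check the negotiation, the aic7xxx driver
exposes what it actually agreed on with each device under /proc (the exact
path may vary with driver version):

cat /proc/scsi/scsi                # list the attached SCSI devices
cat /proc/scsi/aic7xxx/0           # negotiated transfer rates per device

That should agree with the 80.0 Mbyte/sec line above.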
Comments? In particular, I'd like those SCSI advocates to speak up
and let me know what I'm doing wrong with my SCSI drive. (I'd hate to
think that I've been paying more money all of these years for less
performance.)