IDE vs SCSI drives

Kevin Buettner kev@primenet.com
Wed, 18 Oct 2000 09:51:26 -0700


On Oct 18,  9:11am, sinck@ugive.com wrote:

> \_ [root@saguaro kev]# /sbin/hdparm -tT /dev/sda /dev/hde /dev/md0
> \_ 
> \_ /dev/sda:
> \_  Timing buffer-cache reads:   128 MB in  1.02 seconds =125.49 MB/sec
> \_  Timing buffered disk reads:  64 MB in  5.05 seconds = 12.67 MB/sec
> \_ 
> \_ /dev/hde:
> \_  Timing buffer-cache reads:   128 MB in  1.03 seconds =124.27 MB/sec
> \_  Timing buffered disk reads:  64 MB in  2.64 seconds = 24.24 MB/sec
> \_ 
> \_ /dev/md0:
> \_  Timing buffer-cache reads:   128 MB in  0.90 seconds =142.22 MB/sec
> \_  Timing buffered disk reads:  64 MB in  2.69 seconds = 23.79 MB/sec
> \_ 
> \_ (I've run this test a number of times and there's not much variation
> \_ in the results.)
> \_ 
> \_ According to the hdparm manpage, the buffer-cache read numbers are
> \_ "essentially an indication of the throughput of the processor, cache,
> \_ and memory of the system under test."  I don't know why the number was
> \_ so much higher for the md device.  (This is the RAID-1 device which
> \_ represents the mirrored /dev/hde5 and /dev/hdg5 partitions.)
> 
> IIRC, the RAID howto mentions that READs can be faster on a RAID1
> because it just has to wait for the first disk to respond.  So,
> depending on magic hardware differences, you'll get slightly better
> performance from that.

That doesn't explain why reading out of the kernel's buffer cache is
faster, though.  Also, this is what the HOWTO says about performance
for RAID-1:

    Write performance is slightly worse than on a single device,
    because identical copies of the data written must be sent to every
    disk in the array.  Read performance is usually pretty bad because
    of an oversimplified read-balancing strategy in the RAID code. 
    However, there has been implemented a much improved read-balancing
    strategy, which might be available for the Linux-2.2 RAID patches
    (ask on the linux-kernel list), and which will most likely be in
    the standard 2.4 kernel RAID support.

I'm running 2.4.0-test9, BTW.
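
For what it's worth, the obvious places to look for what raid1 code is
actually in use are these (just the commands; I haven't checked whether
the boot messages say anything about read balancing):

    cat /proc/mdstat
    dmesg | grep -i raid1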

It seems to me that large reads from mirrored disks could be up to
twice as fast with RAID-1, since the data is redundant: you could
request one suitably large block from one disk and the next block from
the other.  That still doesn't account for the gap between the SCSI
and IDE numbers in my hdparm measurements, though.
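
A crude way to test the parallel-read idea is to read straight from the
two underlying partitions, bypassing md entirely (reads only, so it
should be harmless; adjust the sizes to taste):

    # pull 256MB off each half of the mirror at the same time
    time sh -c 'dd if=/dev/hde5 of=/dev/null bs=1024k count=256 &
                dd if=/dev/hdg5 of=/dev/null bs=1024k count=256 &
                wait'

If the two disks really are independent, that should finish in not much
more time than a single 256MB read from one of them.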

> \_ array.  (I should try to find a way to write to just one disk to see
> \_ how much performance the RAID-1 is costing me.)
> 
> De-RAID one of the partitions...

Actually, I have some spare partitions on these disks that I can try it
on...
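
Something along these lines, probably (the spare-partition name and the
mount point below are placeholders for whatever I end up using, and the
first command wipes that partition):

    # raw write to a spare partition on one of the IDE disks
    time sh -c 'dd if=/dev/zero of=/dev/hde6 bs=1024k count=256; sync'

    # write a file of the same size onto the mirrored md0 filesystem
    time sh -c 'dd if=/dev/zero of=/raid/testfile bs=1024k count=256; sync'

Not quite apples-to-apples (raw device vs. filesystem), but it should
give a rough idea of what the mirroring costs on writes.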

> \_ (scsi0:0:6:0) Synchronous at 80.0 Mbyte/sec, offset 31.
> \_   Vendor: QUANTUM   Model: QM318000TD-SW     Rev: N491
> \_   Type:   Direct-Access                      ANSI SCSI revision: 02
> \_ 
> \_ Note the 80.0 Mbyte/sec line.  
> 
> Perhaps the synchronous part?  It's been a while, but isn't there an
> async mode?  Would that be wise?

Thanks for pointing this out.  I'll check...
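
In the meantime, these should show what the driver actually negotiated
(the second path assumes an aic7xxx controller, which is what those
boot messages look like to me -- substitute the right driver name and
host number):

    cat /proc/scsi/scsi
    cat /proc/scsi/aic7xxx/0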

> \_ Comments?  In particular, I'd like those SCSI advocates to speak up
> \_ and let me know what I'm doing wrong with my SCSI drive.  (I'd hate to
> \_ think that I've been paying more money all of these years for less
> \_ performance.)
> 
> Could it be that your SCSI drive itself isn't pumping at full rate?
> Like a 5400 RPM drive is prolly going to be slower than a 7200 or 10k
> drive...?

I should have mentioned this before.  The SCSI drive is *supposed* to
be the faster drive.  It's a 7200 RPM drive whilst the IDE drives are
only 5400 RPM.  When I bought the IDE drives, I *really* did not
expect them to outperform the SCSI drive.
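
(If anyone wants to look up the exact specs, the model and firmware the
IDE drives report can be pulled like this; it doesn't cover the SCSI
disk:)

    /sbin/hdparm -i /dev/hde /dev/hdg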

> I also think part of the advantage of SCSI is multiple devices on the
> same controller, not a 1-1 pairing.

I don't understand what you mean by the 1-1 pairing bit.  At the moment,
there's only one disk on the SCSI bus.  It's the IDE drives which are
mirrored.

Kevin