On Mon, 2005-01-17 at 21:57 -0700, brien dieterle wrote:
> > I need your opinion on Linux software RAID. I mean its performance
> > (in terms of drive read/write time). I've configured software RAID
> > on two SATA drives. I figured out that the onboard SATA RAID
> > controller this server has is a fakeraid controller (it works only
> > with the manufacturer's drivers on M$ Windoze). So I resorted to
> > Linux software RAID. I've had bad experiences in the past with
> > software RAID on a 2.4-kernel-based Linux server, so I'd like to
> > know if anyone here has experience with software RAID on the 2.6
> > kernel (the default FC3 kernel).
> >
> > And by accident, I created three RAID devices (namely md0, md1 and
> > md2 for /boot, / and swap respectively). Do I need to create RAID
> > for the swap partition? Now I'm worried that it's going to affect
> > performance. As I write, it's still going through the FC3
> > installation process. I'll probably perform some hard drive
> > read/write tests after installation. I'd really appreciate it if
> > you could share your opinions and experiences with software RAID on
> > the Linux 2.6 kernel (2.6.9-smp, to be specific).
> >
> > Thanks,
> > Sanjay.
>
> I've been running software raid5 on / (with 3 IDE drives, not SATA)
> for many months now with no problems whatsoever. In fact, I moved the
> array to a new machine and rearranged the drives onto different
> controllers without a hiccup.
>
> Swap on RAID is probably recommended (if the swap device croaks while
> the system is using it, bad things can happen). But you are more than
> welcome to add the partitions in fstab, and the kernel will stripe
> swap across all of them automatically - so no, RAID is not required.
>
> Your RAID 1 (you didn't make RAID 0, did you?) device will probably
> be slightly slower than a single disk for reads, and a bit slower
> still for writes.
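The fstab trick brien describes can be sketched like this (a minimal example; the device names are assumptions for illustration). Giving both swap partitions the same pri= value makes the kernel round-robin swap pages across them, with no md device involved:

```
# /etc/fstab - two swap partitions with equal priority;
# the kernel stripes swap across them automatically (no RAID needed)
/dev/sda2   none   swap   sw,pri=1   0 0
/dev/sdb2   none   swap   sw,pri=1   0 0
```

The trade-off is exactly the one discussed below: striped swap is faster, but losing either drive loses the swap on it.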
> I've done some benchmarking of several software and one hardware RAID
> system, with less-than-amazing results.

----

I have only used Linux software RAID thus far on 2.4 kernels (I don't consider the 2.6 stuff stable enough yet for server use). I have never had a problem with software RAID, but I would prefer hardware RAID where available. By the way, Dell sells its PowerEdge 1800 with a 'CERC' SATA RAID controller which is supported by RHEL-3-U3 (aacraid).

RAID 1 (mirror) disk writes are going to be slower than a single disk, even with hardware RAID; there is going to be overhead and bus activity.

Whether you put swap on RAID depends upon what you want to achieve. It would make sense to me to make swap a RAID 0 (stripe) between two identically sized partitions on opposite drives to speed up read/write activity - but know that if you lose one of those drives, you will have to deal with the missing swap even before you can add another drive and rebuild the array.

RAID 1 is not about speed - it is about fault tolerance. RAID 5 (three drives minimum) gives you the fault tolerance of RAID 1, and some of the lost speed is returned because of striping (RAID 0).

Doing this on SATA: motherboard SATA is somewhat of a low-performance system at present. The good things about SATA are decent speed and a good megabytes-per-dollar ratio. The bad thing is that most of these SATA drives are not designed for continuous use the way SCSI-320 drives are.

Rather than belabor this, you might want to read this article I was reading in Network Magazine. Formatted to HTML to keep long lines from wrapping.

Craig
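The capacity and fault-tolerance trade-offs above can be summarized with some quick arithmetic. A small sketch (the function name and drive sizes are mine, for illustration) of usable space versus how many drive failures each simple level survives:

```python
def raid_summary(level: int, drives: int, size_gb: float):
    """Return (usable_gb, failures_survived) for a simple array.

    Illustrative only: RAID 0 stripes with no redundancy, RAID 1
    mirrors one drive's worth of data, RAID 5 spends one drive's
    capacity on distributed parity.
    """
    if level == 0:
        # Fast, but any single drive failure loses the whole array.
        return drives * size_gb, 0
    if level == 1:
        # Full mirror: capacity of one drive, survives n-1 failures.
        return size_gb, drives - 1
    if level == 5:
        if drives < 3:
            raise ValueError("RAID 5 needs at least three drives")
        # One drive's worth of parity; survives exactly one failure.
        return (drives - 1) * size_gb, 1
    raise ValueError("unsupported RAID level")

# Three hypothetical 200 GB drives, as in a 3-drive raid5 on /:
print(raid_summary(5, 3, 200.0))  # (400.0, 1)
```

This is why RAID 5 is often the sweet spot brien and Craig describe: it gets some striping speed back while keeping the single-failure tolerance that RAID 0 gives up.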