I have built and maintained Linux production servers under both hardware RAID 5 on HP MSAs,
and hardware RAID 1+0 on both HP ProLiants and Dell 2950s/1950s (with a variety of software disk management, depending on the server farm standards).
  
I have also configured LVM and md on SATA and iSCSI systems in both RAID 5 and RAID 10.
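For reference, a minimal sketch of that kind of md + RAID 10 + LVM stack on SATA disks (the device names /dev/sdb-/dev/sde, the volume group name, and the sizes are assumptions for illustration, not details from this thread):

```shell
# Create a 4-disk software RAID 10 array (assumed devices /dev/sdb..sde)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Layer LVM on top of the md device
pvcreate /dev/md0
vgcreate backupvg /dev/md0
lvcreate -L 400G -n backuplv backupvg

# Make a filesystem and mount it (ext3 was the common choice at the time)
mkfs.ext3 /dev/backupvg/backuplv
mount /dev/backupvg/backuplv /var/lib/backuppc
```

Putting LVM on top of md keeps the redundancy layer and the volume-management layer separate, so you can grow or snapshot logical volumes without touching the array itself.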

[I have also built Solaris servers under RAID 5/RAID 10 using SVM, later replaced with N1 over multipath I/O on Sun 2450s (ZFS), and have worked with NetApp/Red Hat (XFS).]

I am a proponent of md/LVM over hardware RAID, even though Linux md does not handle bad block relocation; I love being able to simply rebuild the array, especially since drives heat up, get damaged by power surges, and are simply not built with much QA these days.
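That rebuild really is this simple with md; a sketch, assuming the failed member is /dev/sdc and the array is /dev/md0 (both names are assumptions):

```shell
# Mark the bad drive as failed and pull it out of the array
mdadm --manage /dev/md0 --fail /dev/sdc
mdadm --manage /dev/md0 --remove /dev/sdc

# Physically swap in the replacement drive, partition it to match
# if needed, then add it back; md resyncs in the background
mdadm --manage /dev/md0 --add /dev/sdc

# Watch the rebuild progress
cat /proc/mdstat
```

The array stays online and usable (degraded) the whole time; only the resync runs in the background.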

Since disk is so incredibly cheap these days (especially at the lower-level, non-enterprise end), the space efficiency that makes RAID 5 popular is offset by the full mirroring and simpler rebuild protection of RAID 1+0.

Here are the technical descriptions of each level:



RAID 1+0 combines striping with duplication, as striped arrays of mirrored arrays, which gives very high transfer rates and fast seeks as well as redundancy. The disadvantages are high disk consumption and the complexity mentioned above.


When you are trading disk space against money, you may be sure that two of your disks will never go down at once. But believe me, THEY DO, and then it's over! RAID 1+0 also simply outperforms RAID 5. What use is a huge, slow system?

A full discussion is available here, including a complete analysis of md systems and the types of data loss that generally occur.


www.Obnosis.com |  http://wiki.obnosis.com | http://hackfest.obnosis.com (503)754-4452
PLUG HACKFESTS - http://uat.edu Second Saturday of Each Month Noon - 3PM


Date: Mon, 19 Jan 2009 07:30:08 -0700
From: mark@phillipsmarketing.biz
To: plug-discuss@lists.plug.phoenix.az.us
Subject: Re: Looking For RAID Hardware/Software Advice

Eric,
Thanks for the summary, and thank you to everyone for their ideas.
Based on NewEgg prices, here is some more information:

Option A
Single Disk IDE Drive - 500 GB and backups, keep OS on existing drive = $69.99
Use existing controller and just add another drive. No redundancy

Option B
RAID10 with 500 GB backup capacity and redundancy, keep OS on existing drive = $179.97
2 500 GB SATA2 Drives, new SATA2 controller

Option C
RAID10 with 750 GB backup capacity and redundancy, keep OS on existing drive = $239.97
Two 750 GB SATA2 Drives, new SATA2 controller

Option D
RAID5 with 1,000 GB backup capacity and redundancy, keep OS on existing drive = $239.97
Three 500 GB SATA2 Drives, new SATA2 controller

I am leaning towards Option C, based on the lower power consumption of fewer drives. However, I have to rethink my budget...
After some more reading, I am a little confused about the debate between RAID5 and RAID10. I am interested in the group's opinions on which is better - RAID 5 or RAID 10 - and why? What experiences have you had regarding installation, maintenance, and fixing problems? I am running Debian testing.
Thanks!
Mark
On Sat, Jan 17, 2009 at 6:35 AM, Eric Shubert <ejs@shubes.net> wrote:
Mark Phillips wrote:
> I am running out of room for my backups. I use backuppc and I have
> almost filled a 150GB drive with backups from 7 computers, and I need to
> add another 2 computers to the set. I have an old Dell Poweredge 1300
> server (Pentium III 550 MHz, 500 MB RAM, PCI 33.3 MHz) that I could turn
> into a backup server. I am looking for suggestions/thoughts on how to
> set this up. I need to keep the cost down as much as possible; under $150.
>
> My initial thoughts:
>
> * Keep current 72 GB drive for OS (debian testing, about 68% full)
> * Add two 500 GB SATA drives and a PCI SATA controller ~$130
> * Software RAID and LVM for the two drives
> * Move current 150 GB of backups to the RAID
> * Backuppc now runs on this machine and slowly fills up the RAID
>
> My questions:
>
> 1. Should I keep the 72 GB drive for OS, or put it on the RAID?
>
> 2. I can add another CPU (P III 550 MHz) processor to the box - is it
> worth the effort to find one? I found one source for $5/CPU, I just need
> to find the heat sink and mounting hardware. Will this improve performance?
>
> 3. The box has a built-in SCSI 68-pin Ultra2/wide bus/controller, but
> SCSI drives are more expensive, at least from a cursory google search.
> Is this correct? I don't think I can use SCSI drives within my budget
> constraint.
>
> 4. Would upgrading the memory to 1GB improve performance - top shows:
> Mem: 646676k total, 639300k used, 7376k free, 64548k buffers
> This would add another ~$60 to my cost.
>
> 5. Should I look at hardware RAID cards - they seem very cheap, so
> perhaps software is better?
>
> 6. Does this plan make sense, or is there a better way to proceed for
> about the same cost?
>
> Thanks!
>
> Mark
>

Good replies, all. To sum things up, I think a SATAII PCI card (2 or 4
port) and 2 drives is all the HW you need to add to the backup box you
currently have. Set up the drives with SW RAID-1 (mirrored) and you're
good to go. Migrate the data to the raid device, and keep the OS on the
existing drive.

With KeepItSimpleStupid in mind, I recommend using RAID-1 as opposed to
RAID-5. With the price of drives these days, the additional space you
get with RAID-5 isn't worth the headache you'll get when there's a
problem. With RAID-1, each drive can be mounted (and used) individually
if necessary. Not so with RAID-5.
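Eric's point about RAID-1 members being individually usable can be sketched like this (device and mount-point names are assumptions; mounting a bare member directly works when the md superblock sits at the end of the device, as with the then-default 0.90 metadata format):

```shell
# Two-disk software mirror for the backup data
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext3 /dev/md0
mount /dev/md0 /backups

# In a pinch, one half of the mirror can be mounted on its own
# (read-only, even on another box) because each member holds a
# complete copy of the filesystem:
mount -o ro /dev/sdb /mnt/rescue
```

With RAID-5 there is no equivalent escape hatch: no single member contains a complete filesystem, so a broken array must be reassembled before any data is readable.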

--
-Eric 'shubes'

---------------------------------------------------
PLUG-discuss mailing list - PLUG-discuss@lists.plug.phoenix.az.us
To subscribe, unsubscribe, or to change your mail settings:
http://lists.PLUG.phoenix.az.us/mailman/listinfo/plug-discuss


