Matt, Eric, thanks for your thoughts. Sorry for the confusion: my mdadm code reference was wrong, and the device names really did change between the first install and the reboot. The boot drive is the smaller 500GB HDD (not part of the RAID); Ubuntu renamed it from sda to sde after the reboot. I found an ARRAY entry for md3 in the mdadm.conf file. Once I renamed the mdadm.conf file and rebooted, cat /proc/mdstat showed no RAID devices active. I then zeroed the drives with dd and started over, building a new RAID10 as md4 (just to be sure). So far /proc/mdstat recognizes the array and --examine shows no errors in it.
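For the archives, the sequence I ended up running looked roughly like this (device names and exact invocations are from memory, so treat it as a sketch rather than a transcript):

    mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.old   # stale ARRAY line for md3 lived here
    reboot
    cat /proc/mdstat                                      # no arrays assembled any more
    dd if=/dev/zero of=/dev/sda bs=1M                     # same for sdb, sdc, sdd; takes a while
    # (re-partitioned the four drives, then:)
    mdadm --create /dev/md4 --level=10 --raid-devices=4 /dev/sd[abcd]1
    cat /proc/mdstat                                      # new array shows up and starts syncing
    mdadm --examine /dev/sda1                             # and the other members; no errors reported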
Thanks again for your thoughts.
From: Eric Shubert <ejs@shubes.net>
> On 10/19/2011 01:06 AM, James Dugger wrote:
>> fdisk -l gives the following:
>> /dev/sda1 1 121601 976760001 83 Linux
>> /dev/sdb1 1 121601 976760001 83 Linux
>> /dev/sdc1 1 121601 976760001 83 Linux
>> /dev/sdd1 1 121601 976760001 83 Linux
>> /dev/sde1 * 1 32 248832 83 Linux
>> /dev/sde2 32 60802 488134657 5 Extended
>> /dev/sde5 32 60802 488134656 8e Linux LVM

So sde contains /boot and /, while you'd like to have sda..sdd contain the
RAID. This should work, but for some reason mdadm says:

>> 0 0 8 49 0 active sync /dev/sdd1
>> 1 1 8 33 1 active sync /dev/sdc1
>> 2 2 8 17 2 active sync /dev/sdb1
>> 3 3 8 65 3 active sync /dev/sde1

I'm very surprised anything's working at all if it's trying to use sde1 as a
component of the RAID.

>> Notice the md3 device at the bottom of the fdisk print out.

md devices can contain partition tables if you really want them to. Some
people do this; I wouldn't.

> Looks like from the --examine that the device assignments (/dev/sd?)
> have moved around since the array was created (sda belongs to an array
> consisting of d,c,b,e).

Hm. I thought mdadm went by UUIDs within the RAID superblocks, not partition
names.

> Have a look at:
> # ls -l /dev/disk/by-id
> and it'll show which drives are assigned to which /dev/sd? letter.
>
> Then (w/out rebooting) take another crack at clearing things out, with:
> # mdadm --zero-superblock ...
> Then re-create/build the array.

This may not work properly if mdadm has the superblocks tangled up. You could
zorch the superblocks yourself with dd, something like "dd if=/dev/zero
of=/dev/sda bs=1M seek=1013760", repeated for each of the disks. (That's
"seek 990G out and start writing zeroes", it'll take a lot less time than
dding /dev/zero over the entire 1T disk.) Then stop md3 (if possible without
rebooting), then recreate it, using the right options for RAID10 and the right
disk names. I'd change the partition types of the softRAID components to 0xfd
too, just because that makes it a little clearer as to what's going on, but
that may be old-school or deprecated now or something.
Have a rescue CD handy if your /boot has been eaten by this mdadm
misadventure. That's workaroundable, just not usually all that fun.
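Spelled out, that recovery recipe looks roughly like this (untested; the --create options and member names are guesses from the fdisk output above, and from those numbers the drives look like roughly 931GiB, so double-check the seek offset against the real device size before writing zeroes anywhere):

    mdadm --stop /dev/md3                                 # if it's assembled at all
    mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1   # the clean way, per Eric
    # or, if that doesn't take, zero the tail of each disk by hand:
    dd if=/dev/zero of=/dev/sda bs=1M seek=NNN            # NNN = a MiB count just short of the end of the disk
    mdadm --create /dev/md3 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # then flip the member partitions to type 0xfd with fdisk's 't' command on each of sda..sdd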
--
Matt G / Dances With Crows
The Crow202 Blog: http://crow202.org/wordpress/
There is no Darkness in Eternity/But only Light too dim for us to see