Brian has good advice here, but in case it wasn't obvious: you'll need a Linux system
up and running where you can mount the disk on its own. I'd also suggest scanning /dev
for partitions on your disk.

If your 3TB disk is /dev/sdb, look for /dev/sdb1, /dev/sdb2, etc. If those don't show up,
then the RAID metadata may be occupying the first part of the disk, or the partition table
could be corrupted.
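
A quick, non-destructive way to see what the kernel currently knows about:

cat /proc/partitions
ls -l /dev/sdb*

If the table is actually intact and just wasn't re-read, partprobe /dev/sdb (or a
reboot) may be enough to make the partitions appear.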

I'm mostly a FreeBSD guy, and there I'd suggest scan_ffs to recover the slice table,
but I think gpart can do the same job on Linux:

http://www.brzitwa.de/mb/gpart/

It can scan the disk for partition signatures and help you rebuild the table. I'd work on
one disk at a time, so that if something goes wrong you don't jeopardize the data on both.
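
If I remember the syntax right, a plain scan is read-only and just prints the guessed
table:

gpart /dev/sdb

and -W writes the guessed table back out; I'd hold off on that until the dry run looks
sane:

gpart -W /dev/sdb /dev/sdb

If gpart doesn't pan out, TestDisk does a similar guided scan and understands GPT,
which 3TB disks will be using.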

If you have another system with 3TB+ of free space, the safest way to play with it is to
image the disk and set the image up as a loop device. Something like:

dd if=/dev/sdb of=data.img bs=128k
losetup /dev/loop0 data.img

You can mess with /dev/loop0 all you want without changing any data
on the original disks, which matters should you end up deciding to send
them to a commercial recovery company.
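
Since the image carries the whole disk (partition table included), kpartx can expose
the partitions inside the loop device, and -r on losetup keeps the whole thing
read-only. Something like:

losetup -r /dev/loop0 data.img
kpartx -av /dev/loop0
mount -o ro /dev/mapper/loop0p3 /mnt

kpartx creates /dev/mapper/loop0p1, loop0p2, and so on; p3 here assumes the RAID
partition is the third one, as in your layout. kpartx ships with multipath-tools /
device-mapper on most distros.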


On Sun, Feb 2, 2014 at 10:47 AM, Brian Cluff <brian@snaptek.com> wrote:
If it's a RAID 1, you shouldn't need to assemble it to get your data; just mount the RAID partition directly, read-only, and copy your data off to somewhere else.

You should be able to do something like:

mount --read-only /dev/sdb1 /mnt
or if the above one doesn't work:
mount --read-only /dev/sdc1 /mnt

The other possibility you could try sounds terrifying but it works... Just create a new array:
mdadm --create /dev/md0 -n2 -l1 /dev/sdb1 /dev/sdc1

When you create an array, all it does, for the most part, is write a superblock to the partition so that it can later identify the associated drives and put them back together automatically (with the old 0.90/1.0 metadata formats that superblock sits at the end of the partition; the newer 1.2 default puts it near the start). The data area itself is unaffected, so it should be safe to just create a new array (just don't mkfs it afterwards).  Creating a new array will change the RAID's UUID and such, so you won't be able to just put it back into service without first creating a new mdadm.conf and running mkinitrd, but otherwise it should just mount up and go... as long as the data isn't completely corrupted.
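
If you do try this, it's worth running mdadm --examine on one of the partitions first to
see what metadata version the original array used, and passing the same version to
--create; adding --assume-clean also skips the initial resync entirely. Something like:

mdadm --examine /dev/sdb1
mdadm --create /dev/md0 -n2 -l1 --metadata=0.90 --assume-clean /dev/sdb1 /dev/sdc1

(--metadata=0.90 is only an example; use whatever --examine reports.)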

Triple-check that the partitions are absolutely correct, or it will destroy your data when it starts to resync the array upon creation.

You could also give yourself two chances to get your data back by making two degraded RAID 1 arrays, one out of each drive, like this:
mdadm --create /dev/md0 -n2 -l1 /dev/sdb1 missing
mdadm --create /dev/md1 -n2 -l1 /dev/sdc1 missing

That will give you /dev/md0 and /dev/md1, which you can then mount and hopefully copy all your data off.
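
Both arrays should start (degraded) as soon as they're created, so copying off would
look something like this, with the destination path just a placeholder:

mount --read-only /dev/md0 /mnt
cp -a /mnt/. /somewhere/safe/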

I hope this helps,
Brian Cluff

On 02/02/2014 09:25 AM, George Toft wrote:
I've spent over 15 hours on this (google . . . head . . .desk . . .
repeat).

I need to recover the data off of one of these hard drives.

Background
Two 3TB hard drives in a RAID 1 mirror, working fine for months. OS:
CentOS 6.5.
Woke up a few days ago to a dead system - looks like the motherboard
failed.  And when it failed, it appears to have corrupted the RAID
partition (supposition - see problems below).  I moved the drives to
another system; it starts to boot, then the kernel panics.

Partitions
part 1 - /boot
part 2 - swap
part 3 - RAID

I think the RAID partition has just one filesystem (/).


What I've done:
Rescue mode: Boots, unable to assemble raid set:

# fdisk -l | egrep "GPT|dev"
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util
fdisk doesn't support GPT. Use GNU Parted.

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util
fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sda: 80.0 GB, 80025280000
Disk /dev/sdb: 3000.6 GB, 3000591900160 bytes
/dev/sdb1                          1           267350  2147483647+ ee  GPT
Disk /dev/sdc: 3000.6 GB, 3000591900160 bytes
/dev/sdc1                          1           267350  2147483647+ ee  GPT

# mdadm --assemble --run /dev/md0 /dev/sdb
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: /dev/sdb has no superblock - assembly aborted

# mdadm --assemble --run /dev/md0 /dev/sdb1
mdadm: cannot open device /dev/sdb1: No such file or directory
mdadm: /dev/sdb has no superblock - assembly aborted


parted tells me I've found a bug and gives me directions to report it.

-----------

Booted Knoppix and ran disktest.  I can copy the RAID partition to
another drive as a disk image, ending up with image.dd.  When I try to
build an array out of it, I get an error: Not a block device.

Tried commercial RAID recovery software (Disk Internals) - it hung after
identifying 2.445 million files.


-------------

Ideas on what to do next?

Is anyone here up for a challenge?  Anyone need beer money? I need the
data recovered, and will pay :)

All help is appreciated :)





--
- Brian