That was the conclusion I came to. Regarding calling Buffalo, the phone wait time is usually estimated at 8 hours.
Thanks for the tips.
Eric

On Tue, Nov 17, 2009 at 1:23 PM, Lisa Kachold <lisakachold@obnosis.com> wrote:


On Mon, Nov 16, 2009 at 10:10 PM, Eric Cope <eric.cope@gmail.com> wrote:
We think the drives are good. We want to mount the four drives to extract the data (it was RAID 5). We only have one SATA-to-USB adapter. If I am able to read each whole partition (using dd), how can I assemble the four images as a RAID and mount it to get at the data?
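Roughly what I have in mind, untested -- device names are placeholders and the RAID member partition number will depend on the Buffalo's layout:

# 1. Image each drive's data partition, one at a time, through the single SATA-to-USB adapter.
dd if=/dev/sdbX of=/data/disk1.img bs=1M conv=noerror,sync
# ...repeat for disk2.img through disk4.img, noting which bay each drive came from.

# 2. Attach the images to loop devices.
losetup /dev/loop0 /data/disk1.img
losetup /dev/loop1 /data/disk2.img
losetup /dev/loop2 /data/disk3.img
losetup /dev/loop3 /data/disk4.img

# 3. Assemble the RAID 5 set from the loop devices and mount it read-only.
mdadm --assemble /dev/md0 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
mkdir -p /mnt/recovery
mount -o ro /dev/md0 /mnt/recovery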

On Mon, Nov 16, 2009 at 7:20 PM, Lisa Kachold <lisakachold@obnosis.com> wrote:
Hi Eric,

On Mon, Nov 16, 2009 at 6:19 PM, Eric Cope <eric.cope@gmail.com> wrote:

Some light Googling revealed the Buffalo presents a UFS partition spread over four drives. Has anyone recovered a software RAID 5 configuration? Can I dd each drive to an image, set the images up as loop devices, assemble the RAID from them, extract the data, and move forward?

You might have a different superblock on the disk from Buffalo. You might check your exact model (is this a LinkStation?) against the HowTos from Buffalo.

Knoppix works really well for restoring disks (every systems admin needs a Knoppix USB flash drive).

Page 218 covers dd_rescue (in case there are bad blocks):
http://www.scribd.com/doc/15490322/Knoppix-Hacks-by-OReilly-Media
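If any of the drives has bad sectors, plain dd gives up on the first read error unless you pass conv=noerror; dd_rescue is built for this and keeps going. A minimal invocation looks something like this (the source partition name is just an example -- check the man page for logging and retry options):

# Copy the partition to an image file, continuing past unreadable sectors.
# (Note: dd_rescue and GNU ddrescue are different tools with different syntax.)
dd_rescue /dev/sdbX /data/disk1.img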

Buffalo's Wiki shows a procedure on how to restore data to a LinkStation using Knoppix:
http://buffalo.nas-central.org/wiki/Upgrade_%28or_replace%29_the_existing_LinkStation_hard_drive

And:
http://buffalo.nas-central.org/wiki/Upgrade_%28or_replace%29_the_existing_LinkStation_hard_drive#Backup_your_entire_hard_drive.2C_partition_a_blank_drive_on_a_Knoppix_workstation

Full Knoppix Hacks (ripped to Scribd):
http://www.scribd.com/doc/15490322/Knoppix-Hacks-by-OReilly-Media

Hi Eric

According to Buffalo's wiki:

With RAID 5 the data is distributed across all disks and one disk stores an XOR checksum of the remaining (n-1) disks. So you can lose one disk.
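My own toy illustration of that XOR property, with one byte of bash arithmetic standing in for each disk:

# parity = d1 ^ d2 ^ d3; any one lost byte is rebuilt by XOR-ing the survivors.
d1=165; d2=60; d3=126
p=$(( d1 ^ d2 ^ d3 ))
echo $(( d2 ^ d3 ^ p ))   # prints 165 -- the "lost" d1 comes back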

In theory you should be able to connect all disks to a workstation and repair it there. In practice this does not work, as the byte order of a regular workstation (i386) is different and the MD driver is not endian agnostic. An Apple Mac is required for this.

See Moving disks, or Terastation crash after power failure.

xfs_repair on the Terastation worked for me. See reb's post: Raid 5 '4 red lights' issue.

Reference: http://buffalo.nas-central.org/wiki/Terastation_Data_Recovery#RAID5
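If the array assembles but the filesystem will not mount cleanly, the xfs_repair route from that post looks roughly like this (assuming the assembled array shows up as /dev/md0; do the dry run first):

# -n = no-modify dry run; review its output before letting it write anything.
xfs_repair -n /dev/md0
xfs_repair /dev/md0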

For superblock identification and data repair on one disk see http://buffalo.nas-central.org/wiki/Moving_disks
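For the byte-order problem mentioned above, it is worth examining what md actually sees on each member first. Recent mdadm versions also have an --update=byteorder assemble option for version 0.90 superblocks written on a big-endian box -- untested here, so treat it as a pointer rather than a recipe:

# Print the md superblock from one member (loop device or partition);
# a byte-swapped superblock usually shows up as garbage or "no superblock".
mdadm --examine /dev/loop0

# Byte-swap v0.90 superblocks while assembling (only try this on images, not the originals).
mdadm --assemble --update=byteorder /dev/md0 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3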

Buffalo uses Linux for RAID 5 on a Terastation (what is your model? that's important), but the data is distributed across the drives.

The easiest approach would be to purchase a replacement Buffalo, use it to restore your data (being careful to place the drives in exactly the same configuration, marking each one as you take it out), and then return it once you have finished and determined you no longer need it.
You will pay a restocking fee, at most.

This UFS Explorer (tool) link might help you see the drive structure:
http://www.ufsexplorer.com/inf_terastation.php

And don't ask us, CALL or email Buffalo and look at their wiki!


--
Skype: (623)239-3392
AT&T: (503)754-4452
www.it-clowns.com

---------------------------------------------------
PLUG-discuss mailing list - PLUG-discuss@lists.plug.phoenix.az.us
To subscribe, unsubscribe, or to change your mail settings:
http://lists.PLUG.phoenix.az.us/mailman/listinfo/plug-discuss



--
Eric Cope
http://cope-et-al.com