Logs?  Can we digest the details too? 

www.Obnosis.com |  http://wiki.obnosis.com | http://hackfest.obnosis.com (503)754-4452

January PLUG HackFest = Kristy Westphal, AZ Department of Economic Security Forensics @ UAT 1/10/09 12-3PM

Date: Fri, 9 Jan 2009 15:17:12 -0700
From: joe@selectitaly.com
To: plug-discuss@lists.plug.phoenix.az.us
Subject: Re: Softraid Multi-drive Failure

That's exactly what I want to do here: just bring up one of the drives long enough to get the data off it. I suspect one of the drives really did fail; I've been waiting for it to happen, in fact. But since the other drive claims to have failed at the EXACT same time, I really don't think that it did.
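One way to check whether that second drive really died (a sketch only; the device names below are placeholders for your actual array members) is to compare the md superblocks with mdadm --examine and look at the drive's own SMART data:

```shell
# Inspect the RAID superblock on each member (device names are examples).
# The "Events" counter and "Update Time" fields show when md last touched
# the metadata; two members kicked out at the exact same instant usually
# points to a controller/cable/bus glitch, not two simultaneous disk deaths.
mdadm --examine /dev/sda1
mdadm --examine /dev/sdb1

# SMART data shows whether a drive itself is actually failing
# (reallocated sectors, pending sectors, failed self-tests).
smartctl -a /dev/sda
smartctl -a /dev/sdb
```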

I saw the --force option, but there's no indication of whether it would rebuild the array. The --assemble option might simply imply that, though... the man page does say "This usage assembles one or more RAID arrays from pre-existing components," which sounds promising enough.

I think you've described exactly what I was trying to do; assemble (NOT rebuild) and copy. Thanks!

-Joe
I've had luck in the past recovering from a multi-drive failure where the other "failed" drive was not truly dead but rather was dropped because of an I/O error caused by a thermal recalibration or something similar.  The trick is to re-add the drive to the array using the option that forces it NOT to try to rebuild the array.  This used to require several options like --really-force and --really-dangerous, but now I think it's just something like --assemble --force /dev/md0. This forces the array to come back up in its degraded (still down one disk) state.  If possible, replace the failed disk or copy your data off before the other flaky drive fails.
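For the archives, the recovery described above might look something like this (a sketch only; /dev/md0, the member devices, and the mount/destination paths are assumptions for illustration, and you should image the disks first if you possibly can):

```shell
# Stop the array if the kernel still has it half-assembled in a failed state.
mdadm --stop /dev/md0

# Force-assemble from the existing members without triggering a rebuild.
# With --force, mdadm will accept a member whose event counter is slightly
# stale, so the array comes up degraded instead of refusing or resyncing.
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1

# Mount read-only and copy the data off before anything else fails.
mount -o ro /dev/md0 /mnt/rescue
rsync -a /mnt/rescue/ /path/to/safe/storage/
```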


---------------------------------------------------
PLUG-discuss mailing list - PLUG-discuss@lists.plug.phoenix.az.us
To subscribe, unsubscribe, or to change your mail settings:
http://lists.PLUG.phoenix.az.us/mailman/listinfo/plug-discuss

