"RAID" for remote filesystems
Matt Alexander
lowbassman at gmail.com
Mon Oct 10 17:18:13 MST 2005
I've basically accomplished what I wanted by using the loop device. Here's a
brief tutorial:
I have 4 remote filesystems mounted. One is using sshfs, one is using Samba
from a Windows server, and 2 are using NFS from two different NetApp filers.
I then create a 1G file on each mount point and also one on my local file
system...
dd if=/dev/zero of=/sshfs/matt0 bs=1M count=1024
dd if=/dev/zero of=/samba/matt1 bs=1M count=1024
dd if=/dev/zero of=/nfs1/matt2 bs=1M count=1024
dd if=/dev/zero of=/nfs2/matt3 bs=1M count=1024
dd if=/dev/zero of=/home/matt/matt4 bs=1M count=1024
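(If writing a full gig of zeros over each remote link is too slow, a
sparse file should work just as well as a backing store on filesystems
that support it -- untested here, but something like:)
dd if=/dev/zero of=/nfs1/matt2 bs=1M count=0 seek=1024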
I then create a loop device for each file...
losetup /dev/loop0 /sshfs/matt0
losetup /dev/loop1 /samba/matt1
losetup /dev/loop2 /nfs1/matt2
losetup /dev/loop3 /nfs2/matt3
losetup /dev/loop4 /home/matt/matt4
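(A quick "losetup -a" lists each loop device and its backing file if you
want to double-check the mappings.)
losetup -a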
I then create a RAID5 array from the loop devices...
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
    /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
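(md kicks off an initial parity sync in the background; you can watch
its progress and the state of each loop device with:)
cat /proc/mdstat
mdadm --detail /dev/md0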
I then create a filesystem and mount it...
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/matt
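(To tear it back down cleanly later, reverse the stack: unmount first,
then stop the array, then detach the loops. Roughly:)
umount /mnt/matt
mdadm --stop /dev/md0
for i in 0 1 2 3 4; do losetup -d /dev/loop$i; done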
The performance actually isn't that bad. Copying a 2G file to /mnt/matt took
about 6 minutes.
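That works out to roughly 2048 MB / 360 s, or around 5-6 MB/s, which
seems reasonable given that every write also pushes parity out over the
remote mounts.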
My next step is to try it over the 'net using gmailfs to create my giant
filesystem in the sky.
~M
On 10/10/05, Dan Lund <situationalawareness at gmail.com> wrote:
>
> The RAID subsystem takes care of all of the backend issues like
> latency to disk and so forth.
> nbd is merely a block device, and in my experience if a disk has even
> a single timeout, it will be marked as faulty by the md subsystem and
> shown as down ([_] in /proc/mdstat).
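> To bring a member back by hand after that, mdadm's manage mode is
> enough -- something like this (device names are just an example):
> mdadm /dev/md0 --remove /dev/nbd/0
> mdadm /dev/md0 --add /dev/nbd/0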
> raid6 is: http://www.synetic.net/Tech-Support/Education/RAID6.htm
> Essentially it's data striped across the array with two independent
> parity blocks per stripe.
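> In capacity terms that's two disks' worth of parity per stripe: with n
> members you get n-2 disks of usable space, so five 1G devices give
> roughly 3G usable under raid6 versus 4G under raid5.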
>
> You can lose up to two disks with a raid6 setup and not lose data. I
> use it and I absolutely love it (mine is raid6+1).
>
> On 10/10/05, Joseph Sinclair <plug-discuss at stcaz.net> wrote:
> > Two questions:
> > 1) How does nbd deal with the differential latency issue? If latency
> > differs by too much, a RAID system will end up with stripes on
> > different "disks" out of order, and things get REALLY messed up at
> > that point.
> > 2) What is RAID 6?
> >
> > Dan Lund wrote:
> > > I've done work like this with the network block device as an
> > > experiment in several different ways.
> > > To put it in a nutshell, I had one machine exporting a couple of
> > > nbds (network block devices) and imported them on another. They
> > > showed up as /dev/nbd/0, /dev/nbd/1, etc.
> > > I then wrote a raidtab that assembled them into a RAID5 array with
> > > a hot spare.
> > > I've tested it with RAID1/5/5+1/6/6+1, made it fail over, hot-added
> > > "drives", etc.
> > >
> > > It was pretty decent in throughput, and I was about ready to put
> > > together a turnkey solution for my work as an expandable disk
> > > subsystem. I made sure it was on its own gig backplane because the
> > > nbd devices are solely dependent on the network: if it so much as
> > > blips, your disks go away.
> > > RAID, as far as I know, only works on block devices.
> > > You could always check out PVFS or Coda if you're looking for
> > > something at the filesystem layer. I have far more faith in nbd,
> > > though.
> > >
> > > --Dan
> > >
> > >
> > > On 10/9/05, Matt Alexander <lowbassman at gmail.com> wrote:
> > >
> > >>I'm wondering if anyone knows if this is possible...
> > >>
> > >> Take multiple remote filesystems such as NFS, gmailfs, Samba,
> > >>sshfs, and layer a filesystem over the top to create one namespace.
> > >>Ideally it would provide some fault tolerance/redundancy and
> > >>improved performance by using the concept of RAID over the multiple
> > >>connections.
> > >>
> > >> In reality, this new filesystem layer wouldn't care if the
> > >>filesystems are remote or not. You could have...
> > >>
> > >> /mynfsmount
> > >> /mygmailfsmount
> > >> /myothergmailfsmount
> > >> /mysshfsmount
> > >>
> > >> ...and then a new mount point of...
> > >>
> > >> /myreallycoolmount
> > >>
> > >> ...and when you put files here, they're striped/mirrored over all the
> > >>previous mounts.
> > >>
> > >> Is this currently possible? If not, then perhaps I'll see if I
> > >>can make it happen in my minuscule free time. I know there are a
> > >>ton of potential problems with this, but it'd be a fun project
> > >>nonetheless.
> > >> Thanks,
> > >> ~M