I have a pair of USB drives that I've run as a raid-1 pair for quite a
while (a year or so) attached to a backup server. A couple of months
ago, I got the idea of making the backup server virtual, and indeed it's
working: the raid-1 pair of external USB drives is attached to a
virtual guest, and mounted and accessed by that guest machine.
Now the problem. When the backup server ran on bare iron, I got a pretty
consistent 20 MB/s throughput (about the practical USB 2.0 limit) on
each drive (easy to see while the array is resyncing), with fairly low
CPU use (15-20% iirc). On the virtual/guest backup server, I get a
pretty consistent 15 MB/s, with 45% CPU. It used to be worse, like 100%
CPU and 12 MB/s, before I "tuned" vmware memory and i/o a bit.
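For reference, the resync throughput figures above come from the md
driver's status output. A minimal sketch of pulling the speed figure out
of /proc/mdstat (the status line and numbers below are illustrative, not
from my box):

```shell
#!/bin/sh
# During a resync, /proc/mdstat contains a status line like this one
# (illustrative sample; on a live system read /proc/mdstat directly):
line='      [==>..........]  resync = 12.6% (123456/978560) finish=12.3min speed=20480K/sec'

# Extract just the K/sec figure after "speed=":
speed_k=$(printf '%s\n' "$line" | awk -F'speed=' '/speed=/{sub(/K\/sec.*/,"",$2); print $2}')
echo "resync speed: ${speed_k} K/sec"

# On a real system you would run the same awk against the live file:
#   awk -F'speed=' '/speed=/{sub(/K\/sec.*/,"",$2); print $2}' /proc/mdstat
```

Watching that figure while the array rebuilds is how I compared the
bare-iron and virtualized numbers.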
The guest uses elevator=noop for its virtual drive. I've also tried the
deadline scheduler (what the VM host uses for its software raid-1
devices) on the USB drives, with no difference.
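In case it helps anyone reproduce this, here's a sketch of how I
checked and switched schedulers per disk. The device names sdb/sdc are
illustrative, not necessarily what the USB drives enumerate as:

```shell
#!/bin/sh
# On a live system, the active scheduler is shown in brackets:
#   cat /sys/block/sdb/queue/scheduler     # e.g. "noop [deadline] cfq"
#   echo deadline > /sys/block/sdb/queue/scheduler
#   echo deadline > /sys/block/sdc/queue/scheduler
#
# Sample of what the scheduler file looks like (illustrative):
sched_line='noop [deadline] cfq'

# Pull the bracketed (active) scheduler name out of that line:
active=$(printf '%s\n' "$sched_line" | sed 's/.*\[\([^]]*\)\].*/\1/')
echo "active scheduler: $active"
```

The change takes effect immediately, so it was easy to flip schedulers
mid-resync and watch for any throughput difference (there wasn't one).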
Does anyone have an idea what might be the bottleneck?
Thanks for any input. I'm about to post to the VMware community, but I
wanted to run it by this group first.
--
-Eric 'shubes'
---------------------------------------------------
PLUG-discuss mailing list -
PLUG-discuss@lists.plug.phoenix.az.us
To subscribe, unsubscribe, or to change your mail settings:
http://lists.PLUG.phoenix.az.us/mailman/listinfo/plug-discuss