Re: VMware performance: physical partitions vs images?

Attachments: bonnie++.html (text/html)
Author: Kurt Granroth
Date:  
To: Main PLUG discussion list
Subject: Re: VMware performance: physical partitions vs images?
Technomage wrote:
> I am an experienced user of VMware products (both the server and workstation
> lines). Using real partitions definitely gives better performance over
> using "loop back image" partitions.
>
> This situation is most noticeable when using only 128 MB of RAM for the VMware
> instance and reserving the other 128 for the host system (which is what I run
> Mac OS Darwin under here). Running the guest system "natively" is definitely
> far superior, but failing that, running a VMware guest OS using real hardware
> is 30-50% faster than via loopback.


That's what I would have guessed, too. I just did some random
benchmarks, though, and now I'm not so sure. I ran 'bonnie++',
'dbench', and then did a few things with tar files.

'dbench' measures throughput for multiple clients. Bonnie++ measures
quite a few things. The tar tests each exercised something different,
as noted below. I attached the Bonnie++ HTML output, which is easier to
look at.

I ran the tests within VMware on a (pre-allocated) VMware image, within
VMware on a physical partition, and then on the host system on that same
physical partition. The most notable result of the tests is how similar
they all are! I was expecting the physical partition to run away with the
tests, but that wasn't the case at all.

dbench: pretty much the same
bonnie++: images are FASTER for writing and roughly the same otherwise
tar: extracting the tar file is as fast on the image as on the host
system, with weird results for raw access. Images were notably slower
while creating a tar file made up of 45,000 files. In all other cases,
they were close enough not to notice in "real life".

So all in all, my tests *seem* to indicate that images are far faster
than we all thought!
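In case anyone wants to repeat them, the tar/cat tests below boil down to
something like this. This is just a sketch on a small synthetic tree (the
file and directory names are placeholders); the real runs used the 883MB
tarball with its 45,386 files:

```shell
#!/bin/sh
# Sketch of the timing methodology on a tiny synthetic tree.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Build a small tree of files to stand in for the real data
mkdir tree
for i in $(seq 1 100); do echo "file $i" > "tree/f$i"; done

# Create: read many small files, write one large file
time tar cf big.tar tree

# Extract: read one large file, write many small files
mkdir extracted
time tar xf big.tar -C extracted

# Read the large file by itself
time cat big.tar > /dev/null

# Read the large file, write a file twice its size
time cat big.tar big.tar > double.bin
```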


System
======================================================================
Host: 2.8GHz P4, SATA WDC WD1600JD-75H UDMA/133, 1GB RAM
      VMware Server 1.01
      SuSE 9.3
      /dev/sda10 -> 12G; WIN32; 11G Pre-allocated VMware image
      /dev/sda11 -> 12G; Reiser; Physical VMware access
Guest: 384MB RAM
       SUSE 10.1
       /dev/sda -> 8G; reiserfs; vmware image
       /dev/sdb -> 11G; reiserfs; image (host: /dev/sda10)
       /dev/sdc -> 12G; reiserfs; raw partition (host: /dev/sda11)


dbench 5
======================================================================
image: 133.9 MB/sec
raw:   130.1 MB/sec
host:  134.2 MB/sec

bonnie++ -u 0:0
======================================================================
image,1G,35118,58,32685,18,11361,9,16324,25,13846,6,75.0,0,16,17522,99,+++++,+++,15117,97,17406,99,+++++,+++,15233,100
raw,1G,21187,55,27171,14,10113,6,16965,23,18265,6,100.1,1,16,17487,99,+++++,+++,15975,99,16970,99,+++++,+++,14689,99
host,2G,25702,31,24746,10,10937,3,17834,17,17309,2,104.5,0,16,22203,95,+++++,+++,19243,99,22791,99,+++++,+++,18199,99
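For anyone who'd rather read those CSV lines programmatically, here's a
quick sketch. The field labels assume the bonnie++ 1.x CSV column order,
and "+++++" means the test finished too fast for bonnie++ to report a figure:

```python
# Decode one of the bonnie++ CSV lines above into labeled fields.
# Labels assume the bonnie++ 1.x CSV column order (27 fields).
FIELDS = [
    "name", "size",
    "out_char_kps", "out_char_cpu", "out_block_kps", "out_block_cpu",
    "rewrite_kps", "rewrite_cpu",
    "in_char_kps", "in_char_cpu", "in_block_kps", "in_block_cpu",
    "seeks_ps", "seeks_cpu",
    "num_files",
    "seq_create_ps", "seq_create_cpu", "seq_read_ps", "seq_read_cpu",
    "seq_delete_ps", "seq_delete_cpu",
    "ran_create_ps", "ran_create_cpu", "ran_read_ps", "ran_read_cpu",
    "ran_delete_ps", "ran_delete_cpu",
]

def parse_bonnie(line):
    """Split one bonnie++ CSV line and pair each value with its label."""
    return dict(zip(FIELDS, line.strip().split(",")))

row = parse_bonnie(
    "image,1G,35118,58,32685,18,11361,9,16324,25,13846,6,75.0,0,"
    "16,17522,99,+++++,+++,15117,97,17406,99,+++++,+++,15233,100"
)
print(row["out_block_kps"])  # block-write throughput in K/sec
```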

Extract tar file (883MB; 45,386 files)
Purpose: Read large file; write thousands of little files
======================================================================
image: 1m32s
raw:   2m06s
host:  1m29s

Create tar file (883MB; 45,386 files)
Purpose: Read thousands of little files; write one large file
======================================================================
image: 2m25s
raw:   1m42s
host:  1m39s

cat 883MB > /dev/null
Purpose: Read large file by itself
======================================================================
image: 55.5s
raw:   48.8s
host:  29.8s

cat 883MB 883MB > 1.7GB
Purpose: Read large file; write to very large file
======================================================================
image: 2m19s
raw:   2m50s
host:  2m02s


Bonnie++ V1.01d Benchmark results
======================================================================
Sequential Output / Sequential Input / Random Seeks
(Per-Char, Block, Rewrite in K/sec; Seeks in /sec)

       Size  Per-Chr %CPU  Block  %CPU  Rewrite %CPU  Per-Chr %CPU  Block  %CPU  Seeks  %CPU
image  1G    35118   58    32685  18    11361   9     16324   25    13846  6     75.0   0
raw    1G    21187   55    27171  14    10113   6     16965   23    18265  6     100.1  1
host   2G    25702   31    24746  10    10937   3     17834   17    17309  2     104.5  0

Sequential Create / Random Create
(/sec; 16 files; "+++++" = finished too fast to measure)

       Create %CPU  Read   %CPU  Delete %CPU  Create %CPU  Read   %CPU  Delete %CPU
image  17522  99    +++++  +++   15117  97    17406  99    +++++  +++   15233  100
raw    17487  99    +++++  +++   15975  99    16970  99    +++++  +++   14689  99
host   22203  95    +++++  +++   19243  99    22791  99    +++++  +++   18199  99



---------------------------------------------------
PLUG-discuss mailing list -
To subscribe, unsubscribe, or to change your mail settings:
http://lists.PLUG.phoenix.az.us/mailman/listinfo/plug-discuss