On Sun, April 21, 2013 17:09, Tanstaafl wrote:
> On 2013-04-21 5:47 AM, J. Roeleveld <[email protected]> wrote:
>> On Sat, April 20, 2013 17:38, Jarry wrote:
>>> Problem of virtualized filesystem is not that it is virtualized,
>>> but that it is located on datastore with more virtual systems,
>>> all of them competing for the same i/o. *That* is the bottleneck.
>>> If you switch reiser for xfs or btrfs, you might win (or lose)
>>> a few %. If you optimize your esxi-datastore design, you might
>>> win much more than what you have ever dreamed of.
>>
>> If the underlying I/O is fast enough, with low seek times and high
>> throughput, then handling multiple VMs doing a lot of disk I/O
>> simultaneously isn't a problem, provided the Host has sufficient
>> resources (think memory and dedicated CPU) to handle it.
>
> My host specs:
>
> Dual AMD Opteron 4180 (6-core, 2.6GHz)
> 128GB RAM
> 2x internal SSDs in RAID1 for Host OS
> 6x 300GB SAS 6Gb/s 15k hard drives in RAID10 for Guest OSs

Sounds like a nice machine for testing :)

> I allocate each Guest 1 virtual CPU with 2 cores

Do you pin each Guest to 2 specific physical cores, or are you giving
2 vCPUs to each Guest with no affinity set?
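
Just for illustration, on a KVM/libvirt host pinning looks like the
sketch below (ESXi has an equivalent "Scheduling Affinity" setting in
the VM options instead). The guest name "guest1" and the core numbers
are made up for the example:

    import libvirt

    # Pin a guest's 2 vCPUs to host cores 2 and 3 via the libvirt
    # Python bindings. Assumes a KVM/libvirt host and a guest named
    # "guest1" (both hypothetical here).
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("guest1")

    ncpus = conn.getInfo()[2]   # number of physical CPUs on the host

    def pin_to(core):
        # boolean map over all host CPUs: True only for the chosen core
        return tuple(i == core for i in range(ncpus))

    dom.pinVcpu(0, pin_to(2))   # vCPU 0 -> host core 2
    dom.pinVcpu(1, pin_to(3))   # vCPU 1 -> host core 3
    conn.close()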

>> A decent hardware raid-controller with multiple disks running in a
>> higher raid level is cheaper than the same storage capacity in SSDs.
>
> Yep... I toyed with the idea of SSDs, but the cost was considerably
> higher than even these SAS drives...

I am planning on using SSDs when I get new desktops, but for servers I
prefer spinning disks: they're higher capacity and cheaper.
For speed, I just put a bunch of them together with hardware RAID.
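
As a rough back-of-envelope for the array above (rule-of-thumb
numbers, not measurements from this thread):

    # Approximate IOPS for 6x 15k SAS drives in RAID10.
    # ~175 IOPS per 15k spindle is a common rule of thumb.
    drives = 6
    iops_per_drive = 175
    write_penalty = 2    # RAID10: every write goes to both mirrors

    read_iops = drives * iops_per_drive     # ~1050
    write_iops = read_iops / write_penalty  # ~525
    print(f"approx read IOPS:  {read_iops}")
    print(f"approx write IOPS: {write_iops:.0f}")

That's usually plenty for a handful of busy guests, as long as they
aren't all seek-bound at the same time.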

--
Joost

