On Sat, Oct 02, 2010 at 03:53:54PM -0400, Chris Buechler wrote:
> That's not the normal experience from what I've seen, sounds specific
> to something in particular you're doing. I believe every environment
> I've seen that routes between VLANs within ESX handles the VLANs
> entirely at the ESX le
I gave mine a 10 GB disk, 512 MB of RAM, and two vCPUs on a 4-core ESXi box
(3.2 GHz Xeon). I also ran Squid on it. I find that VMware tends to give a
perceived performance hit when you assign only a single vCPU to a VM on a
multi-core, multi-processor host. I'm not sure why this is.
On Thu, Oct 7, 2010 at 3:43 PM, Eugen Leitl wrote:
> On Sat, Oct 02, 2010 at 03:53:54PM -0400, Chris Buechler wrote:
>
>> That's not the normal experience from what I've seen, sounds specific
>> to something in particular you're doing. I believe every environment
>> I've seen that routes between VLANs within ESX handles the VLANs
>> entirely at the ESX level.
If I may add one thought to this,
Check Point have recently announced a virtual version of their 'blade' product,
which uses the VMsafe API to enable more efficient inspection of traffic
travelling between virtual machines and the outside world.
http://www.networkworld.com/news/2010/090110-chec
As a note on CPU allocations for VMware VMs (this also applies to Xen and
Hyper-V).
Example:
VM host: 2 quad-core CPUs (8 physical cores)
vm1: 8 vCPUs
vm2: 1 vCPU
vm3: 1 vCPU
vm4: 1 vCPU
vm5: 1 vCPU
Say vm1 is running something like Active Directory, or any single-threaded
app. VMware will queue up all 8 vCPUs for scheduling together, so vm1 cannot
be dispatched until enough physical cores are free at once, even though the
workload only keeps one of them busy.