If I may suggest, we should start a page on the wiki with hardware
configurations people have successfully deployed on. This is a very
daunting area for someone who is contemplating an installation.

There are already a few hints on the wiki about NC State's hardware
(though a bit more detail would be useful). It would be good if more
people listed their diverse configurations in detail. Right now I have
an installation on a modest desktop system (mostly to cement my
understanding after last summer's bootcamp); I can put that up. I will
also be doing a real deployment soon and can put that up too once I
have completed it.
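Aaron mentions below that the upcoming release replaces the full vmdk copy for long-term reservations with snapshots. For anyone curious what that difference looks like at the ESXi command line, here is a rough sketch. This is only an illustration, not what vcld actually runs; all paths and the VM id are hypothetical, and the commands must be run on the ESXi host itself.

```shell
# Old-style full copy: clone the entire vmdk for the reservation.
# (Hypothetical datastore paths; this is the expensive operation that
# can overload the host when many subimages are copied at once.)
vmkfstools -i /vmfs/volumes/datastore1/golden/golden.vmdk \
           /vmfs/volumes/datastore1/rsv1234/rsv1234.vmdk -d thin

# Snapshot-based approach: take a snapshot so only deltas are written,
# avoiding the full copy at reservation time.
# "42" is a hypothetical VM id from `vim-cmd vmsvc/getallvms`;
# arguments are: vmid, name, description, includeMemory, quiesced.
vim-cmd vmsvc/snapshot.create 42 "vcl-longterm" "long-term reservation" 0 0
```

The snapshot leaves the base disk untouched and writes changes to a delta file, which is why provisioning and boot are faster than copying the whole disk.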


On Fri, Apr 27, 2012 at 8:42 AM, Aaron Peeler <fapee...@ncsu.edu> wrote:
> Hi Emir,
> I'll try to answer, but hopefully Andy will chime in to confirm.
> In the upcoming release, the vmdk copying for long-term reservations
> has been fixed. It now uses snapshots instead, resulting in faster
> boot times.
> On the VMs-per-host question: this is a very good one. So far, your
> 100 VMs per host is the highest I've heard of. As you're aware, the
> number of VMs and end-user performance will depend on the underlying
> hardware (host memory & CPU, network, and storage).
> It would be good for us as a community to share hardware
> recommendations on what is working well at each site. We have a mix
> of hardware at NCSU; I'll write up some details and send them out in
> a separate thread soon.
> Aaron
> On Thu, Apr 26, 2012 at 3:56 PM, Emir Imamagic <eimam...@srce.hr> wrote:
>> Hello,
>> We've noticed that, in the case of long-term reservations, VCL copies
>> the image's virtual disk on the datastore. With images that have many
>> subimages (>20), we ran into problems with this copying: VCL would
>> initiate multiple vmkfstools commands, the ESXi server would become
>> overloaded and start killing vmkfstools processes (the messages
>> indicated a lack of memory). Is there any way to bypass this behavior?
>> Is this copying really needed? Is it possible to switch it off in a
>> clean manner?
>> Another question: how many VMs can vcld handle per VM host (VMware
>> ESXi 4.1)?
>> On our setup we managed to start 100 VMs on a single VMware host and
>> it was still working fine. The VM host has 24 cores and 256 GB of RAM.
>> Thanks in advance
>> --
>> Emir Imamagic
>> SRCE - University of Zagreb University Computing Centre, www.srce.unizg.hr
>> emir.imama...@srce.hr, tel: +385 1 616 5809, fax: +385 1 616 5559
> --
> Aaron Peeler
> Program Manager
> Virtual Computing Lab
> NC State University
> All electronic mail messages in connection with State business which
> are sent to or received by this account are subject to the NC Public
> Records Law and may be disclosed to third parties.

Mark Gardner
