Thanks, that does help, but still doesn't reflect the behavior I was seeing.  
I'll go do some more testing to make sure I'm not up in the night :)


On Oct 22, 2012, at 11:22 PM, Prachi Damle <prachi.da...@citrix.com> wrote:

> All of these are heuristics applied by the deployment planner and 
> host/storage pool allocators to decide the order in which resources 
> (pods, clusters, hosts, storage pools) will be considered for VM 
> deployment.
> 
> random: This just shuffles the list of clusters/hosts/pools that is returned 
> by the DB lookup. Random does not mean round-robin, so if you are expecting a 
> new host to be picked on every deployment, that may not happen.
> firstfit: This orders clusters by available capacity, and within a cluster 
> the first host/pool with enough capacity is chosen.
> userdispersing: For a given account, this disperses VMs: the clusters/hosts 
> with the minimum number of running VMs for that account are chosen first. 
> Likewise, the storage pool with the minimum number of Ready volumes for the 
> account is chosen first.
> userconcentratedpod_random: Always chooses the pod/cluster with the maximum 
> number of VMs for the account, concentrating that account's VMs in one pod. 
> Hosts and storage pools are chosen randomly.
> userconcentratedpod_firstfit: Always chooses the pod/cluster with the maximum 
> number of VMs for the account, concentrating that account's VMs in one pod. 
> Hosts and storage pools are chosen by firstfit.
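As a rough illustration, the ordering heuristics above can be sketched as follows. This is a minimal sketch in Python with hypothetical host records, not the actual CloudStack allocator code (which works against DB-backed capacity and VM counts):

```python
import random

# Hypothetical candidate hosts as the allocator might see them:
# available capacity plus the count of this account's running VMs.
hosts = [
    {"id": "h1", "free_mb": 4096, "account_vms": 3},
    {"id": "h2", "free_mb": 8192, "account_vms": 0},
    {"id": "h3", "free_mb": 2048, "account_vms": 1},
]

def order_random(candidates):
    # 'random': shuffle the candidate list; there is no round-robin
    # memory between deployments.
    shuffled = candidates[:]
    random.shuffle(shuffled)
    return shuffled

def order_firstfit(candidates, required_mb):
    # 'firstfit': keep only candidates with enough capacity, ordered
    # by available capacity; the first entry is the one chosen.
    fit = [h for h in candidates if h["free_mb"] >= required_mb]
    return sorted(fit, key=lambda h: h["free_mb"], reverse=True)

def order_userdispersing(candidates):
    # 'userdispersing': fewest running VMs for this account first.
    return sorted(candidates, key=lambda h: h["account_vms"])
```

For a 3000 MB request, order_firstfit drops h3 and puts h2 (most free capacity) first, while order_userdispersing puts h2 first because the account has no VMs there yet.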
> 
> Hope this helps.
> 
> -Prachi
> 
> -----Original Message-----
> From: Caleb Call [mailto:calebc...@me.com] 
> Sent: Monday, October 22, 2012 10:00 PM
> To: cloudstack-users@incubator.apache.org
> Subject: Re: About Allocator algorithm of creating VM on Host
> 
> Can anyone give a definition of each of the models?  I have noticed that my 
> VMs are always created on one particular node in the cluster, even though I 
> have 6 other nodes with identical specs.  I have tried firstfit (it makes 
> sense that it would go to the same one until it was full), random (I would 
> expect it NOT to always go to the same one) and userdispersing (not sure what 
> to expect with this one, but I tried it anyway).  In the logs, when it's 
> deciding which node to use, it always finds all the nodes and declares them 
> all fit, but it always places the VM on the same node.  It seems the 
> algorithms don't work as well as they should.  I did restart after each 
> change and could see it was using the new method.
> 
> 
> On Oct 22, 2012, at 4:28 AM, Tamas Monos <tam...@veber.co.uk> wrote:
> 
>> Hi,
>> 
>> CloudStack 3.0.2 supports the following VM allocation algorithms:
>> 
>> 'random', 'firstfit', 'userdispersing', 'userconcentratedpod_random', 
>> 'userconcentratedpod_firstfit'
>> 
>> You can configure this via the global configuration options. The default is 
>> random, I believe.
>> I don't think CloudStack will detect the conditions you mention; however, I 
>> think the hypervisors should.
>> Your hypervisor cluster (I use ESX) will detect issues and send alerts, and 
>> I'm sure there is something similar for Xen/KVM as well.
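As a sketch of changing the setting from the command line, assuming the cloudmonkey CLI is installed and pointed at your management server (the global setting involved is vm.allocation.algorithm; the restart command shown is the typical one for 3.x installs and may differ on your distribution):

```shell
# Set the allocation heuristic globally. Valid values: random, firstfit,
# userdispersing, userconcentratedpod_random, userconcentratedpod_firstfit
cloudmonkey update configuration name=vm.allocation.algorithm value=userdispersing

# Global configuration changes generally take effect only after a
# management-server restart.
service cloud-management restart
```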
>> 
>> Regards
>> 
>> Tamas Monos                      DDI    +44(0)2034687012
>> Chief Technical Officer          Office +44(0)2034687000
>> Veber: The Hosting Specialists   Fax    +44(0)871 522 7057
>> http://www.veber.co.uk
>> 
>> Follow us on Twitter: www.twitter.com/veberhost Follow us on Facebook: 
>> www.facebook.com/veberhost
>> 
>> -----Original Message-----
>> From: Lucy [mailto:no1l...@gmail.com]
>> Sent: 22 October 2012 10:45
>> To: cloudstack-users@incubator.apache.org
>> Subject: About Allocator algorithm of creating VM on Host
>> 
>> Dear all,
>> 
>> I have a question about the allocator algorithm used when creating VMs on a 
>> host.
>> 
>> What is the default allocator algorithm for placing a VM on a host in 
>> CloudStack?
>> 
>> And are there any other choices?
>> 
>> Can CloudStack detect heavy load, for example when one host's CPU or memory 
>> usage is greater than 80%?
>> 
>> Thanks,
>> 
>> Lucy
>> 
> 
