Hi Somesh, 
Thank you for the reply; yes, I found the issue in Jira. 

The affected host was running about 31 VMs, and we don't use memory 
over-provisioning at all. Unfortunately CloudStack fails to restart the VM, 
and this is a big issue for us, as the restart is part of a nightly job and 
we only pick it up in the morning. The only way for us to address the issue 
is to remove the 'active' tag from the host in question to stop it from 
being used for starting new VMs. 
I also noticed that the available memory is reported correctly in the 
XenServer console, but not by CloudStack in the management log. 
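For what it's worth, plugging the numbers from the management log and the XenServer errorInfo below into a quick sketch makes the mismatch obvious (this is just arithmetic on the logged values, not CloudStack code):

```python
# Arithmetic on the values from the CapacityManagerImpl log line and the
# XenServer errorInfo [HOST_NOT_ENOUGH_FREE_MEMORY, needed, available].
# A sketch of the mismatch only, not CloudStack source code.

# CloudStack's view ("RAM STATS" log line):
cs_total = 63280689152        # host total RAM in bytes
cs_used = 61220061184         # used RAM before this allocation
cs_free = cs_total - cs_used  # 2060627968, matching "Free RAM" in the log
cs_requested = 1572864000     # RAM requested for the VM

# XenServer's view at start time:
xs_needed = 1587544064        # memory XS says the VM needs (incl. overhead)
xs_free = 1446559744          # memory XS says is actually available

print(cs_free >= cs_requested)   # True  -> CloudStack thinks the VM fits
print(xs_free >= xs_needed)      # False -> XenServer refuses to start it

# The two views disagree by roughly Dom0 memory plus per-VM overhead:
print((cs_free - xs_free) // 2**20)        # ~585 MiB CloudStack doesn't see
print((xs_needed - cs_requested) // 2**20) # 14 MiB overhead added by XS
```

So CloudStack believes the host has about 586 MiB more free memory than XenServer does, which is consistent with the Dom0/overhead miscalculation mentioned below.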


Best regards 
Yuri 

----------------------------------------
> From: somesh.na...@citrix.com
> To: users@cloudstack.apache.org
> Subject: RE: Unable to start VM due to the HOST_NOT_ENOUGH_FREE_MEMORY error 
> on XenServer
> Date: Mon, 26 Jan 2015 15:56:01 +0000
>
> Yes, seen this quite a few times. I believe you already found CLOUDSTACK-2344.
>
> Basically, the issue happens when cloudstack's view of available memory is 
> out of sync with (higher than) that of XS. This could happen due to incorrect 
> calculation of memory overhead and Dom0 memory. It is also possible that your 
> memory overprovisioning value is set too high, so please verify that. You 
> might also want to check how many VMs were running on that particular host 
> when this error was thrown. When a host is running too many VMs (in excess of 
> 60), there is a possibility for such issues.
>
> Having said that, I believe there are subsequent attempts by cloudstack to 
> start the VM on other hosts and the VM eventually starts. If not then we may 
> be looking at a potential defect.
>
> Regards,
> Somesh
>
>
> -----Original Message-----
> From: Yuri Kogun [mailto:yko...@outlook.com]
> Sent: Monday, January 26, 2015 10:26 AM
> To: users@cloudstack.apache.org
> Subject: Unable to start VM due to the HOST_NOT_ENOUGH_FREE_MEMORY error on 
> XenServer
>
> Hi,
> I wonder if somebody has experienced a similar issue. We have a very busy dev 
> cloudstack installation with 10 hosts, 3 clusters, and more than 300 user VMs 
> running across the clusters. The CPU over-provisioning factor is set to 3. 
> From time to time we get the following error when starting a VM.
>
> 2015-01-26 01:45:25,547 WARN [c.c.h.x.r.CitrixResourceBase] 
> (DirectAgent-364:ctx-578a4e5d) Task failed! Task record: uuid: 
> ff81a41e-3340-e7d5-6f8c-c99d4a910bb0
> nameLabel: Async.VM.start_on
> nameDescription:
> allowedOperations: []
> currentOperations: {}
> created: Mon Jan 26 01:45:24 GMT 2015
> finished: Mon Jan 26 01:45:24 GMT 2015
> status: failure
> residentOn: com.xensource.xenapi.Host@fcaebca8
> progress: 1.0
> type: <none/>
> result:
> errorInfo: [HOST_NOT_ENOUGH_FREE_MEMORY, 1587544064, 1446559744]
> otherConfig: {}
> subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
>
>
> I traced the job executor for the VM, and it looks like the process reported 
> that the host had enough RAM (2060627968) to start the VM, which requested 
> 1572864000, but for some reason the command failed on the XenServer.
>
> 2015-01-26 01:45:20,005 DEBUG [c.c.v.VirtualMachineManagerImpl] 
> (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) 
> Deployment found - P0=VM[User|i-3-160816-VM], 
> P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))]
>  : 
> Dest[Zone(1)-Pod(1)-Cluster(5)-Host(35)-Storage(Volume(158708|ROOT-->Pool(30))]
> 2015-01-26 01:45:20,124 DEBUG [c.c.c.CapacityManagerImpl] 
> (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) VM 
> state transitted from :Starting to Starting with event: OperationRetryvm's 
> original host id: null new host id: 35 host id before state transition: null
> 2015-01-26 01:45:20,138 DEBUG [c.c.c.CapacityManagerImpl] 
> (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) Hosts's 
> actual total CPU: 44688 and CPU after applying overprovisioning: 58094
> 2015-01-26 01:45:20,138 DEBUG [c.c.c.CapacityManagerImpl] 
> (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) We are 
> allocating VM, increasing the used capacity of this host:35
> 2015-01-26 01:45:20,138 DEBUG [c.c.c.CapacityManagerImpl] 
> (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) Current 
> Used CPU: 52500 , Free CPU:5594 ,Requested CPU: 1500
> 2015-01-26 01:45:20,138 DEBUG [c.c.c.CapacityManagerImpl] 
> (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) Current 
> Used RAM: 61220061184 , Free RAM:2060627968 ,Requested RAM: 1572864000
> 2015-01-26 01:45:20,138 DEBUG [c.c.c.CapacityManagerImpl] 
> (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) CPU 
> STATS after allocation: for host: 35, old used: 52500, old reserved: 0, 
> actual total: 44688, total with overprovisioning: 58094; new used:54000, 
> reserved:0; requested cpu:1500,alloc_from_last:false
> 2015-01-26 01:45:20,138 DEBUG [c.c.c.CapacityManagerImpl] 
> (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) RAM 
> STATS after allocation: for host: 35, old used: 61220061184, old reserved: 0, 
> total: 63280689152; new used: 62792925184, reserved: 0; requested mem: 
> 1572864000,alloc_from_last:false
>
>
> Please let me know if somebody has had a similar problem and managed to fix 
> it. We are running XenServer 6.2 with CloudStack 4.3.0.1.
>
> Best regards
> Yuri