[jira] [Commented] (CLOUDSTACK-8923) Create storage network IP range failed, Unknown parameters : zoneid

2015-09-30 Thread Nux (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14938171#comment-14938171
 ] 

Nux commented on CLOUDSTACK-8923:
-

Cheers

Just to be sure, I followed the exact same steps on 4.5.2 and it worked great. 
So... it's not me, it's it. :)

> Create storage network IP range failed, Unknown parameters : zoneid
> ---
>
> Key: CLOUDSTACK-8923
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8923
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage
>Affects Versions: 4.6.0
> Environment: CentOS 6 HVs and MGMT
>Reporter: Nux
>Priority: Blocker
>
> I am installing ACS from today's master (3ded3e9 
> http://tmp.nux.ro/acs460snap/ ). 
> Adding an initial zone via the web UI wizard fails at the secondary storage 
> setup stage:
> 2015-09-29 14:07:40,319 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27) Add job-27 into job monitoring
> 2015-09-29 14:07:40,322 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-5:ctx-314bbaae ctx-2db63923) ===END===  85.13.192.198 -- GET  
> command=createStorageNetworkIpRange&response=json&gateway=192.168.200.67&netmask=255.255.255.0&vlan=123&startip=192.168.200.200&endip=192.168.200.222&zoneid=2f0efdcf-adf6-4373-858e-87de6af4cc08&podid=eb7814d2-9a22-4ca4-93af-4a6b8abac67c&_=1443532060283
> 2015-09-29 14:07:40,327 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27) Executing AsyncJobVO {id:27, 
> userId: 2, accountId: 2, instanceType: None, instanceId: null, cmd: 
> org.apache.cloudstack.api.command.admin.network.CreateStorageNetworkIpRangeCmd,
>  cmdInfo: {"response":"json","ctxDetails":"{\"interface 
> com.cloud.dc.Pod\":\"eb7814d2-9a22-4ca4-93af-4a6b8abac67c\"}","cmdEventType":"STORAGE.IP.RANGE.CREATE","ctxUserId":"2","gateway":"192.168.200.67","podid":"eb7814d2-9a22-4ca4-93af-4a6b8abac67c","zoneid":"2f0efdcf-adf6-4373-858e-87de6af4cc08","startip":"192.168.200.200","vlan":"123","httpmethod":"GET","_":"1443532060283","ctxAccountId":"2","ctxStartEventId":"68","netmask":"255.255.255.0","endip":"192.168.200.222"},
>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
> null, initMsid: 266785867798693, completeMsid: null, lastUpdated: null, 
> lastPolled: null, created: null}
> 2015-09-29 14:07:40,330 WARN  [c.c.a.d.ParamGenericValidationWorker] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27 ctx-1fa03c4a) Received unknown 
> parameters for command createStorageNetworkIpRange. Unknown parameters : 
> zoneid
> 2015-09-29 14:07:40,391 WARN  [o.a.c.a.c.a.n.CreateStorageNetworkIpRangeCmd] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27 ctx-1fa03c4a) Create storage network 
> IP range failed
> com.cloud.utils.exception.CloudRuntimeException: Unable to commit or close 
> the connection. 
>   at 
> com.cloud.utils.db.TransactionLegacy.commit(TransactionLegacy.java:730)
>   at com.cloud.utils.db.Transaction.execute(Transaction.java:46)
>   at 
> com.cloud.network.StorageNetworkManagerImpl.createIpRange(StorageNetworkManagerImpl.java:229)
>   at 
> org.apache.cloudstack.api.command.admin.network.CreateStorageNetworkIpRangeCmd.execute(CreateStorageNetworkIpRangeCmd.java:118)
>   at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:150)
>   at 
> com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
>   at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>   at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: Connection is closed.
>   at 
> 
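For context, the "Unknown parameters : zoneid" warning above comes from a generic check that compares the request's parameter names against the parameters the command actually declares. A rough Python sketch of that kind of check (illustrative only, not CloudStack's actual Java implementation; the declared set below assumes the parameters listed in the API reference for createStorageNetworkIpRange):

```python
# Illustrative sketch (not CloudStack's actual code): a generic validator
# that flags request parameters a command does not declare.

# Parameters createStorageNetworkIpRange declares (assumed from the API docs);
# control keys like 'command' and 'response' are always accepted.
DECLARED = {"podid", "startip", "endip", "gateway", "netmask", "vlan"}
CONTROL = {"command", "response", "httpmethod", "_", "signature", "apiKey"}

def unknown_params(request_params, declared=DECLARED, control=CONTROL):
    """Return the request keys the command does not know about."""
    return sorted(set(request_params) - declared - control)

# The failing request from the log sends 'zoneid', which the command
# does not declare -- hence "Unknown parameters : zoneid".
request = {"command": "createStorageNetworkIpRange", "response": "json",
           "gateway": "192.168.200.67", "netmask": "255.255.255.0",
           "vlan": "123", "startip": "192.168.200.200",
           "endip": "192.168.200.222",
           "zoneid": "2f0efdcf-adf6-4373-858e-87de6af4cc08",
           "podid": "eb7814d2-9a22-4ca4-93af-4a6b8abac67c",
           "_": "1443532060283"}
print(unknown_params(request))  # ['zoneid']
```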

[jira] [Commented] (CLOUDSTACK-8888) Xenserver 6.0.2 host stuck in disconnected state after upgrade to master

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14938234#comment-14938234
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8888:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/861#issuecomment-144500362
  
@koushik-das Thanks!

Another benefit: this is an easier change that doesn't require much testing 
(all it does is execute a SQL query at upgrade time).
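The upgrade-time fix described above could look something like the following. This is a hypothetical sketch, not the actual migration: the choice of replacement class (here `XenServer610Resource`) and the exact schema details would need to match what the real upgrade script does.

```sql
-- Hypothetical sketch of the upgrade-time fix: remap hosts whose stored
-- resource class was removed to a surviving XenServer resource class,
-- so reloadResource no longer hits ClassNotFoundException.
UPDATE `cloud`.`host`
SET resource = 'com.cloud.hypervisor.xenserver.resource.XenServer610Resource'
WHERE resource = 'com.cloud.hypervisor.xenserver.resource.XenServer602Resource';
```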


> Xenserver 6.0.2 host stuck in disconnected state after upgrade to master
> 
>
> Key: CLOUDSTACK-8888
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8888
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, XenServer
>Reporter: Harikrishna Patnala
>Assignee: Harikrishna Patnala
> Fix For: 4.6.0
>
>
> Hosts running XenServer 6.0.2 are stuck in the disconnected state after 
> CloudStack was upgraded to master. I have upgraded the XenServer host to 
> v6.2 but it still shows as disconnected.
> It seems the XenServer602Resource class was removed without handling the 
> existing XenServer 6.0.2 hosts.
> Found the below exception while reloading the resource.
> found the below exception during reloading resource.
> 2015-09-21 15:29:19,423 WARN  [c.c.r.DiscovererBase] (ClusteredAgentManager 
> Timer:ctx-d6747f5a) Unable to find class 
> com.cloud.hypervisor.xenserver.resource.XenServer602Resource
> java.lang.ClassNotFoundException: 
> com.cloud.hypervisor.xenserver.resource.XenServer602Resource
>   at 
> org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:50)
>   at 
> org.codehaus.plexus.classworlds.realm.ClassRealm.unsynchronizedLoadClass(ClassRealm.java:259)
>   at 
> org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:235)
>   at 
> org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:227)
>   at 
> org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:401)
>   at 
> org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:363)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:190)
>   at com.cloud.resource.DiscovererBase.getResource(DiscovererBase.java:89)
>   at 
> com.cloud.resource.DiscovererBase.reloadResource(DiscovererBase.java:150)
>   at 
> com.cloud.agent.manager.AgentManagerImpl.loadDirectlyConnectedHost(AgentManagerImpl.java:697)
>   at 
> com.cloud.agent.manager.ClusteredAgentManagerImpl.scanDirectAgentToLoad(ClusteredAgentManagerImpl.java:220)
>   at 
> com.cloud.agent.manager.ClusteredAgentManagerImpl.runDirectAgentScanTimerTask(ClusteredAgentManagerImpl.java:185)
>   at 
> com.cloud.agent.manager.ClusteredAgentManagerImpl.access$100(ClusteredAgentManagerImpl.java:99)
>   at 
> com.cloud.agent.manager.ClusteredAgentManagerImpl$DirectAgentScanTimerTask.runInContext(ClusteredAgentManagerImpl.java:236)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextTimerTask$1.runInContext(ManagedContextTimerTask.java:30)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextTimerTask.run(ManagedContextTimerTask.java:27)
>   at java.util.TimerThread.mainLoop(Timer.java:555)
>   at java.util.TimerThread.run(Timer.java:505)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8924) [Blocker] test duplicated in test_scale_vm.py

2015-09-30 Thread Raja Pullela (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raja Pullela updated CLOUDSTACK-8924:
-
Summary: [Blocker] test duplicated in test_scale_vm.py  (was: [Blocker] 
test duplicated in test_scale.vm.py)

> [Blocker] test duplicated in test_scale_vm.py
> -
>
> Key: CLOUDSTACK-8924
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8924
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Raja Pullela
>Priority: Blocker
> Fix For: 4.6.0
>
>
> This is a blocker because BVTs for XS and Simulator are failing.
> Simulator zone - this is a genuine failure: the setup didn't have Dynamic 
> Scaling enabled as part of global settings. Once it was enabled, the tests 
> ran fine.
> XS basic/Adv zone - it is failing because the methods
> test_01_scale_vm(self):
> test_02_scale_vm_without_hypervisor_specifics(self):
> are essentially the same, with the exception of tags -
> the first one - test_01_scale_vm - had "required_hardware=true"
> the second - test_02_scale_vm_without_hypervisor_specifics - had 
> "required_hardware=false"
> Essentially we can get this test to run on both Simulator and XenServer by 
> changing it to "required_hardware=false",
> and test_02_scale_vm_without_hypervisor_specifics can then be deleted.
> The reason for failure on the XS is due to the following - "Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering"
> Following are the logs:
> Test scale virtual machine ... === TestName: test_01_scale_vm | Status : 
> SUCCESS ===
> ok
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm) ... === TestName: 
> test_02_scale_vm_without_hypervisor_specifics | Status : EXCEPTION ===
> ERROR
> ==
> ERROR: test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm)
> --
> Traceback (most recent call last):
>   File "/root/cloudstack/test/integration/smoke/test_scale_vm.py", line 234, 
> in test_02_scale_vm_without_hypervisor_specifics
> self.apiclient.scaleVirtualMachine(cmd)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackAPI/cloudstackAPIClient.py",
>  line 797, in scaleVirtualMachine
> response = self.connection.marvinRequest(command, response_type=response, 
> method=method)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackConnection.py", line 
> 379, in marvinRequest
> raise e
> Exception: Job failed: {jobprocstatus : 0, created : 
> u'2015-09-30T01:16:45+', cmd : 
> u'org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin', userid : 
> u'd46c0476-670a-11e5-8245-96e5a2a4ae9a', jobstatus : 2, jobid : 
> u'ad32dee5-da3c-42c3-bdc3-35928b47697f', jobresultcode : 530, jobresulttype : 
> u'object', jobresult : {errorcode : 431, errortext : u'Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering 
> (BigInstance)'}, accountid : u'd46bf47c-670a-11e5-8245-96e5a2a4ae9a'}
>  >> begin captured stdout << -
> === TestName: test_02_scale_vm_without_hypervisor_specifics | Status : 
> EXCEPTION ===
> - >> end captured stdout << --
>  >> begin captured logging << 
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: STARTED : 
> TC: test_02_scale_vm_without_hypervisor_specifics :::
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Payload: 
> {'isdynamicallyscalable': 'true', 'apiKey': 
> u'FI3p7aHiRMfWK_oV_T9_i8uY-YegVuIR3mvV7pS3w7s_2-krRV-GMGXoBoVm0454fiZt6FgwOH86gEPenLox0w',
>  'response': 'json', 'command': 'updateVirtualMachine', 'signature': 
> '4dANF6uDGtaOk6jIDb901ES+Oq8=', 'id': u'38c1ced0-693f-4e31-b976-9f4161ac57bb'}
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Sending GET Cmd 
> : updateVirtualMachine===
> urllib3.connectionpool: INFO: Starting new HTTP connection (1): 10.220.135.73
> urllib3.connectionpool: DEBUG: "GET 
> /client/api?isdynamicallyscalable=true&apiKey=FI3p7aHiRMfWK_oV_T9_i8uY-YegVuIR3mvV7pS3w7s_2-krRV-GMGXoBoVm0454fiZt6FgwOH86gEPenLox0w&response=json&command=updateVirtualMachine&signature=4dANF6uDGtaOk6jIDb901ES%2BOq8%3D&id=38c1ced0-693f-4e31-b976-9f4161ac57bb
>  HTTP/1.1" 200 1703
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Response 

[jira] [Commented] (CLOUDSTACK-8888) Xenserver 6.0.2 host stuck in disconnected state after upgrade to master

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14936519#comment-14936519
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8888:


Github user runseb commented on the pull request:

https://github.com/apache/cloudstack/pull/861#issuecomment-144320963
  
@harikrishna-patnala and @koushik-das I agree with @remibergsma please 
advise on how you want to proceed, considering #883 reimplements the 602 
resource.


> Xenserver 6.0.2 host stuck in disconnected state after upgrade to master
> 
>
> Key: CLOUDSTACK-8888
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8888
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, XenServer
>Reporter: Harikrishna Patnala
>Assignee: Harikrishna Patnala
> Fix For: 4.6.0
>
>
> Hosts running XenServer 6.0.2 are stuck in the disconnected state after 
> CloudStack was upgraded to master. I have upgraded the XenServer host to 
> v6.2 but it still shows as disconnected.
> It seems the XenServer602Resource class was removed without handling the 
> existing XenServer 6.0.2 hosts.
> Found the below exception while reloading the resource.
> 2015-09-21 15:29:19,423 WARN  [c.c.r.DiscovererBase] (ClusteredAgentManager 
> Timer:ctx-d6747f5a) Unable to find class 
> com.cloud.hypervisor.xenserver.resource.XenServer602Resource
> java.lang.ClassNotFoundException: 
> com.cloud.hypervisor.xenserver.resource.XenServer602Resource
>   at 
> org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:50)
>   at 
> org.codehaus.plexus.classworlds.realm.ClassRealm.unsynchronizedLoadClass(ClassRealm.java:259)
>   at 
> org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:235)
>   at 
> org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:227)
>   at 
> org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:401)
>   at 
> org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:363)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:190)
>   at com.cloud.resource.DiscovererBase.getResource(DiscovererBase.java:89)
>   at 
> com.cloud.resource.DiscovererBase.reloadResource(DiscovererBase.java:150)
>   at 
> com.cloud.agent.manager.AgentManagerImpl.loadDirectlyConnectedHost(AgentManagerImpl.java:697)
>   at 
> com.cloud.agent.manager.ClusteredAgentManagerImpl.scanDirectAgentToLoad(ClusteredAgentManagerImpl.java:220)
>   at 
> com.cloud.agent.manager.ClusteredAgentManagerImpl.runDirectAgentScanTimerTask(ClusteredAgentManagerImpl.java:185)
>   at 
> com.cloud.agent.manager.ClusteredAgentManagerImpl.access$100(ClusteredAgentManagerImpl.java:99)
>   at 
> com.cloud.agent.manager.ClusteredAgentManagerImpl$DirectAgentScanTimerTask.runInContext(ClusteredAgentManagerImpl.java:236)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextTimerTask$1.runInContext(ManagedContextTimerTask.java:30)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextTimerTask.run(ManagedContextTimerTask.java:27)
>   at java.util.TimerThread.mainLoop(Timer.java:555)
>   at java.util.TimerThread.run(Timer.java:505)





[jira] [Commented] (CLOUDSTACK-8893) test_vm_snapshots.py requires modification since we support volume snapshot on a vm with vm snapshot

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14936526#comment-14936526
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8893:


Github user runseb commented on the pull request:

https://github.com/apache/cloudstack/pull/871#issuecomment-144322464
  
@pavanb018 it always helps other folks reviewing to know what you did. For 
example: did you just check the Travis green light? Did you just review the 
code on GitHub? Did you pull the patch, apply it and compile? Did you do the 
build and then run the tests? It may sound obvious to you what you did when 
pasting the output of a test, but writing a small friendly sentence for the 
next reviewer will go a long way to help us merge these fixes as a group.

thanks


> test_vm_snapshots.py requires modification since we support volume snapshot 
> on a vm with vm snapshot
> 
>
> Key: CLOUDSTACK-8893
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8893
> Project: CloudStack
>  Issue Type: Test
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation
>Affects Versions: 4.6.0
>Reporter: Sanjeev N
>Assignee: Sanjeev N
>
> test_vm_snapshots.py requires modification since we support volume snapshots 
> on a VM with VM snapshots.
> The test_01_test_vm_volume_snapshot test from test_vm_snapshots.py expects an 
> exception when we try to create a volume snapshot on a VM with a VM snapshot.
> We need to modify this since we now support volume snapshots on a VM with 
> VM snapshots.





[jira] [Comment Edited] (CLOUDSTACK-8924) [Blocker] test duplicated in test_scale_vm.py

2015-09-30 Thread Raja Pullela (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14936533#comment-14936533
 ] 

Raja Pullela edited comment on CLOUDSTACK-8924 at 9/30/15 8:21 AM:
---

modified the test "test_01_scale_vm" with required_hardware=false on simulator 
setup and it works.  So, I think we can let the second method 
"test_02_scale_vm_without_hypervisor_specifics" go.  I will also test this on 
XS.

root@localhost:~/cloudstack# nosetests --with-marvin 
--marvin-config=/root/cloudstack/setup/dev/local1.cfg --zone=Sandbox-simulator 
--hypervisor=simulator -a tags=basic,required_hardware=false 
/root/cloudstack/test/integration/smoke/test_scale_vm.py

 Marvin Init Started 

=== Marvin Parse Config Successful ===

=== Marvin Setting TestData Successful===

 Log Folder Path: /tmp//MarvinLogs//Sep_30_2015_08_06_32_AWNF1O. All logs 
will be available here 

=== Marvin Init Logging Successful===

 Marvin Init Successful 
===final results are now copied to: /tmp//MarvinLogs/test_scale_vm_OPS7AD===
root@localhost:~/cloudstack# cd /tmp//MarvinLogs/test_scale_vm_OPS7AD
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# ls 
failed_plus_exceptions.txt  results.txt  runinfo.txt 
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# vi  results.txt 
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# ls -al
total 48
drwxr-xr-x 2 root root  4096 Sep 30 08:07 .
drwxr-xr-x 8 root root  4096 Sep 30 08:06 ..
-rw-r--r-- 1 root root     0 Sep 30 08:06 failed_plus_exceptions.txt
-rw-r--r-- 1 root root   186 Sep 30 08:06 results.txt
-rw-r--r-- 1 root root 36164 Sep 30 08:06 runinfo.txt
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD#
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# cat results.txt
Test scale virtual machine ... === TestName: test_01_scale_vm | Status : SUCCESS === ok

--
Ran 1 test in 23.455s

OK



was (Author: rajapu):
modified the test "test_01_scale_vm" with required_hardware=false on simulator 
setup and it works.  So, I think we can let the second method go.  I will also 
test this on XS.

root@localhost:~/cloudstack# nosetests --with-marvin 
--marvin-config=/root/cloudstack/setup/dev/local1.cfg --zone=Sandbox-simulator 
--hypervisor=simulator -a tags=basic,required_hardware=false 
/root/cloudstack/test/integration/smoke/test_scale_vm.py

 Marvin Init Started 

=== Marvin Parse Config Successful ===

=== Marvin Setting TestData Successful===

 Log Folder Path: /tmp//MarvinLogs//Sep_30_2015_08_06_32_AWNF1O. All logs 
will be available here 

=== Marvin Init Logging Successful===

 Marvin Init Successful 
===final results are now copied to: /tmp//MarvinLogs/test_scale_vm_OPS7AD===
root@localhost:~/cloudstack# cd /tmp//MarvinLogs/test_scale_vm_OPS7AD
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# ls 
failed_plus_exceptions.txt  results.txt  runinfo.txt 
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# vi  results.txt 
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# ls -al
total 48
drwxr-xr-x 2 root root  4096 Sep 30 08:07 .
drwxr-xr-x 8 root root  4096 Sep 30 08:06 ..
-rw-r--r-- 1 root root     0 Sep 30 08:06 failed_plus_exceptions.txt
-rw-r--r-- 1 root root   186 Sep 30 08:06 results.txt
-rw-r--r-- 1 root root 36164 Sep 30 08:06 runinfo.txt
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD#
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# cat results.txt
Test scale virtual machine ... === TestName: test_01_scale_vm | Status : SUCCESS === ok

--
Ran 1 test in 23.455s

OK


> [Blocker] test duplicated in test_scale_vm.py
> -
>
> Key: CLOUDSTACK-8924
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8924
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Raja Pullela
>Priority: Blocker
> Fix For: 4.6.0
>
>
> This is a blocker because BVTs for XS and Simulator are failing.
> Simulator zone - this is a genuine failure: the setup didn't have Dynamic 
> Scaling enabled as part of global settings. Once it was enabled, the tests 
> ran fine.
> XS basic/Adv zone - it is failing because the methods
> test_01_scale_vm(self):
> test_02_scale_vm_without_hypervisor_specifics(self):
> are essentially the same, with the exception of tags -
> the first one - test_01_scale_vm - had "required_hardware=true"
> the second - test_02_scale_vm_without_hypervisor_specifics - had 
> "required_hardware=false"
> Essentially we can get this test to run on both Simulator and XenServer by 
> changing it to "required_hardware=false",
> and 

[jira] [Commented] (CLOUDSTACK-8924) [Blocker] test duplicated in test_scale_vm.py

2015-09-30 Thread Miguel Ferreira (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14936581#comment-14936581
 ] 

Miguel Ferreira commented on CLOUDSTACK-8924:
-

I can confirm that test_01_scale_vm runs on the simulator
{{nosetests --with-marvin --marvin-config=setup/dev/advanced.cfg   
test/integration/smoke/test_scale_vm.py -s -a 
tags=advanced,required_hardware=false   --zone=Sandbox-simulator 
--hypervisor=simulator

 Marvin Init Started 

=== Marvin Parse Config Successful ===

=== Marvin Setting TestData Successful===

 Log Folder Path: /tmp//MarvinLogs//Sep_30_2015_08_46_16_U8QJEI. All logs 
will be available here 

=== Marvin Init Logging Successful===

 Marvin Init Successful 
=== TestName: test_01_scale_vm | Status : SUCCESS ===

=== TestName: test_02_scale_vm_without_hypervisor_specifics | Status : 
EXCEPTION ===

===final results are now copied to: /tmp//MarvinLogs/test_scale_vm_4DVZLG===}} 

test_02_scale_vm_without_hypervisor_specifics fails because cleanup between 
tests is not happening
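For anyone unfamiliar with how the duplicated tests differ, the only distinction is the attribute tags that nose's `-a` selector matches on. A minimal sketch below uses a stand-in for nose's `attr` decorator (not Marvin's actual code; the test bodies are elided) to show why flipping test_01 to `required_hardware=false` makes test_02 redundant:

```python
# Minimal stand-in for nose.plugins.attrib.attr, to show how the -a
# (attribute) selector distinguishes the two otherwise-identical tests.
def attr(*args, **kwargs):
    def decorate(func):
        for name in args:
            setattr(func, name, True)
        for name, value in kwargs.items():
            setattr(func, name, value)
        return func
    return decorate

@attr(tags=["advanced", "basic"], required_hardware="true")
def test_01_scale_vm():
    pass  # scales a running VM to a bigger service offering

@attr(tags=["advanced", "basic"], required_hardware="false")
def test_02_scale_vm_without_hypervisor_specifics():
    pass  # same body as test_01 -- only the attribute value differs

# nosetests -a tags=basic,required_hardware=false selects tests whose
# attributes match. With test_01 tagged required_hardware="true", only
# test_02 runs on the simulator; retagging test_01 as "false" would let
# one test cover both Simulator and XenServer.
selected = [f for f in (test_01_scale_vm,
                        test_02_scale_vm_without_hypervisor_specifics)
            if getattr(f, "required_hardware", None) == "false"]
print([f.__name__ for f in selected])
```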

> [Blocker] test duplicated in test_scale_vm.py
> -
>
> Key: CLOUDSTACK-8924
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8924
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Raja Pullela
>Priority: Blocker
> Fix For: 4.6.0
>
>
> This is a blocker because BVTs for XS and Simulator are failing.
> Simulator zone - this is a genuine failure: the setup didn't have Dynamic 
> Scaling enabled as part of global settings. Once it was enabled, the tests 
> ran fine.
> XS basic/Adv zone - it is failing because the methods
> test_01_scale_vm(self):
> test_02_scale_vm_without_hypervisor_specifics(self):
> are essentially the same, with the exception of tags -
> the first one - test_01_scale_vm - had "required_hardware=true"
> the second - test_02_scale_vm_without_hypervisor_specifics - had 
> "required_hardware=false"
> Essentially we can get this test to run on both Simulator and XenServer by 
> changing it to "required_hardware=false",
> and test_02_scale_vm_without_hypervisor_specifics can then be deleted.
> The reason for failure on the XS is due to the following - "Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering"
> Following are the logs:
> Test scale virtual machine ... === TestName: test_01_scale_vm | Status : 
> SUCCESS ===
> ok
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm) ... === TestName: 
> test_02_scale_vm_without_hypervisor_specifics | Status : EXCEPTION ===
> ERROR
> ==
> ERROR: test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm)
> --
> Traceback (most recent call last):
>   File "/root/cloudstack/test/integration/smoke/test_scale_vm.py", line 234, 
> in test_02_scale_vm_without_hypervisor_specifics
> self.apiclient.scaleVirtualMachine(cmd)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackAPI/cloudstackAPIClient.py",
>  line 797, in scaleVirtualMachine
> response = self.connection.marvinRequest(command, response_type=response, 
> method=method)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackConnection.py", line 
> 379, in marvinRequest
> raise e
> Exception: Job failed: {jobprocstatus : 0, created : 
> u'2015-09-30T01:16:45+', cmd : 
> u'org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin', userid : 
> u'd46c0476-670a-11e5-8245-96e5a2a4ae9a', jobstatus : 2, jobid : 
> u'ad32dee5-da3c-42c3-bdc3-35928b47697f', jobresultcode : 530, jobresulttype : 
> u'object', jobresult : {errorcode : 431, errortext : u'Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering 
> (BigInstance)'}, accountid : u'd46bf47c-670a-11e5-8245-96e5a2a4ae9a'}
>  >> begin captured stdout << -
> === TestName: test_02_scale_vm_without_hypervisor_specifics | Status : 
> EXCEPTION ===
> - >> end captured stdout << --
>  >> begin captured logging << 
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: STARTED : 
> TC: test_02_scale_vm_without_hypervisor_specifics :::
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Payload: 
> {'isdynamicallyscalable': 'true', 'apiKey': 
> 

[jira] [Commented] (CLOUDSTACK-8882) Network offering usage is sometimes greater than aggregation range

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14936503#comment-14936503
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8882:


Github user runseb commented on the pull request:

https://github.com/apache/cloudstack/pull/859#issuecomment-144316351
  
Hi @kishankavala, can we run some simulator tests to check this?


> Network offering usage is sometimes greater than aggregation range
> --
>
> Key: CLOUDSTACK-8882
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8882
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Usage
>Reporter: Kishan Kavala
>Assignee: Kishan Kavala
>
> Create a VM with multiple NICs:
> - If 2 networks use the same network offering, network offering usage will be 
> 48 hrs (assuming 24 hrs aggregation)
> - Usage should be reported per NIC instead of per network offering
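The arithmetic behind the report: with usage keyed by network offering, two NICs on networks that share one offering record 2 × 24 h = 48 h inside a single 24 h aggregation window; keying by NIC keeps each record within range. A small sketch of the two aggregation keys (field names are illustrative, not CloudStack's usage schema):

```python
from collections import defaultdict

# Per-network usage records for one 24 h aggregation period (illustrative):
# two NICs on the same VM whose networks both use offering 'off-1'.
records = [
    {"nic": "nic-1", "offering": "off-1", "hours": 24},
    {"nic": "nic-2", "offering": "off-1", "hours": 24},
]

def aggregate(records, key):
    """Sum usage hours grouped by the given record field."""
    totals = defaultdict(int)
    for r in records:
        totals[r[key]] += r["hours"]
    return dict(totals)

# Keyed by offering, usage exceeds the 24 h aggregation range:
print(aggregate(records, "offering"))  # {'off-1': 48}
# Keyed by NIC (the fix the report proposes), each entry stays in range:
print(aggregate(records, "nic"))       # {'nic-1': 24, 'nic-2': 24}
```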





[jira] [Commented] (CLOUDSTACK-8894) Dynamic scaling is not restricted when destination offering has changes in the vGPU type

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14936571#comment-14936571
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8894:


Github user anshul1886 commented on the pull request:

https://github.com/apache/cloudstack/pull/868#issuecomment-144328947
  
@runseb Updated the bug description. I am looking into the Travis tests. This 
test will require vGPU-enabled hosts with different types of GPU cards.


> Dynamic scaling is not restricted when destination offering has changes in 
> the vGPU type
> 
>
> Key: CLOUDSTACK-8894
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8894
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
>
> Steps:
> 1. Install and configure XenServer 6.5 with vGPU enabled. Enable dynamic 
> scaling.
> 2. Deploy a VM using a K160Q-type Windows 7 template with PV tools installed 
> and dynamic scaling enabled.
> 3. Try dynamic scaling to an offering which has K180Q defined.
> Observation: 
> 1. Currently vGPU resource dynamic scaling is not supported, but CloudStack 
> returns success and updates the VM details with the new offering details, 
> including the new vGPU type. 
> 2. But in XenServer there is no change to the vGPU type; it remains the old 
> vGPU type. This is not correct.
> Expected Result:
> Dynamic scaling should be restricted when the source/destination offering 
> differs in vGPU type on a vGPU-enabled VM
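The expected behaviour amounts to a guard that compares the vGPU type of the current and requested service offerings before allowing a dynamic scale. A hedged sketch of such a check (field names and offerings below are illustrative, not CloudStack's actual data model):

```python
# Illustrative guard (not CloudStack's actual code): refuse dynamic scaling
# when the current and requested service offerings differ in vGPU type,
# instead of reporting success while the hypervisor keeps the old vGPU.
def can_dynamic_scale(current_offering, new_offering):
    cur_gpu = current_offering.get("vgpu_type")
    new_gpu = new_offering.get("vgpu_type")
    if cur_gpu != new_gpu:
        # e.g. K160Q -> K180Q, as in the reported steps: reject up front
        return False, "dynamic scaling across vGPU types is not supported"
    return True, "ok"

k160q = {"name": "Instance-K160Q", "cpu": 2, "vgpu_type": "K160Q"}
k180q = {"name": "Instance-K180Q", "cpu": 2, "vgpu_type": "K180Q"}

print(can_dynamic_scale(k160q, k180q))
# -> (False, 'dynamic scaling across vGPU types is not supported')
```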





[jira] [Updated] (CLOUDSTACK-8924) [Blocker] test duplicated in test_scale.vm.py

2015-09-30 Thread Raja Pullela (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raja Pullela updated CLOUDSTACK-8924:
-
Affects Version/s: 4.6.0

> [Blocker] test duplicated in test_scale.vm.py
> -
>
> Key: CLOUDSTACK-8924
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8924
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Raja Pullela
>Priority: Blocker
> Fix For: 4.6.0
>
>
> This is a blocker because BVTs for XS and Simulator are failing.
> Simulator zone - it is failing because
> the setup didn't have Dynamic Scaling enabled in the global settings. This 
> is a genuine failure; once it was enabled, the tests ran fine.
> XS basic/Adv zone - it is failing because
> the methods 
> test_01_scale_vm(self):
> test_02_scale_vm_without_hypervisor_specifics(self):
> are essentially the same, with the exception of tags -
> the first one, test_01_scale_vm, had "required_hardware=true";
> the second, test_02_scale_vm_without_hypervisor_specifics, had 
> "required_hardware=false".
> Essentially, we can get this test to run on both Simulator and XenServer by 
> changing test_01_scale_vm to "required_hardware=false",
> and test_02_scale_vm_without_hypervisor_specifics can then be deleted.
> The reason for the failure on XS is the following - "Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering"
> Following are the logs:
> Test scale virtual machine ... === TestName: test_01_scale_vm | Status : 
> SUCCESS ===
> ok
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm) ... === TestName: 
> test_02_scale_vm_without_hypervisor_specifics | Status : EXCEPTION ===
> ERROR
> ==
> ERROR: test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm)
> --
> Traceback (most recent call last):
>   File "/root/cloudstack/test/integration/smoke/test_scale_vm.py", line 234, 
> in test_02_scale_vm_without_hypervisor_specifics
> self.apiclient.scaleVirtualMachine(cmd)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackAPI/cloudstackAPIClient.py",
>  line 797, in scaleVirtualMachine
> response = self.connection.marvinRequest(command, response_type=response, 
> method=method)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackConnection.py", line 
> 379, in marvinRequest
> raise e
> Exception: Job failed: {jobprocstatus : 0, created : 
> u'2015-09-30T01:16:45+', cmd : 
> u'org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin', userid : 
> u'd46c0476-670a-11e5-8245-96e5a2a4ae9a', jobstatus : 2, jobid : 
> u'ad32dee5-da3c-42c3-bdc3-35928b47697f', jobresultcode : 530, jobresulttype : 
> u'object', jobresult : {errorcode : 431, errortext : u'Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering 
> (BigInstance)'}, accountid : u'd46bf47c-670a-11e5-8245-96e5a2a4ae9a'}
>  >> begin captured stdout << -
> === TestName: test_02_scale_vm_without_hypervisor_specifics | Status : 
> EXCEPTION ===
> - >> end captured stdout << --
>  >> begin captured logging << 
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: STARTED : 
> TC: test_02_scale_vm_without_hypervisor_specifics :::
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Payload: 
> {'isdynamicallyscalable': 'true', 'apiKey': 
> u'FI3p7aHiRMfWK_oV_T9_i8uY-YegVuIR3mvV7pS3w7s_2-krRV-GMGXoBoVm0454fiZt6FgwOH86gEPenLox0w',
>  'response': 'json', 'command': 'updateVirtualMachine', 'signature': 
> '4dANF6uDGtaOk6jIDb901ES+Oq8=', 'id': u'38c1ced0-693f-4e31-b976-9f4161ac57bb'}
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Sending GET Cmd 
> : updateVirtualMachine===
> urllib3.connectionpool: INFO: Starting new HTTP connection (1): 10.220.135.73
> urllib3.connectionpool: DEBUG: "GET 
> /client/api?isdynamicallyscalable=true&apiKey=FI3p7aHiRMfWK_oV_T9_i8uY-YegVuIR3mvV7pS3w7s_2-krRV-GMGXoBoVm0454fiZt6FgwOH86gEPenLox0w&response=json&command=updateVirtualMachine&signature=4dANF6uDGtaOk6jIDb901ES%2BOq8%3D&id=38c1ced0-693f-4e31-b976-9f4161ac57bb
>  HTTP/1.1" 200 1703
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Response : {domain : 
> u'ROOT', domainid : u'a6d8fc3a-670a-11e5-8245-96e5a2a4ae9a', haenable 

[jira] [Created] (CLOUDSTACK-8924) [Blocker] test duplicated in test_scale.vm.py

2015-09-30 Thread Raja Pullela (JIRA)
Raja Pullela created CLOUDSTACK-8924:


 Summary: [Blocker] test duplicated in test_scale.vm.py
 Key: CLOUDSTACK-8924
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8924
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Raja Pullela
Priority: Blocker


This is a blocker because BVTs for XS and Simulator are failing.

Simulator zone - it is failing because
the setup didn't have Dynamic Scaling enabled in the global settings. This is 
a genuine failure; once it was enabled, the tests ran fine.

XS basic/Adv zone - it is failing because
the methods 
test_01_scale_vm(self):
test_02_scale_vm_without_hypervisor_specifics(self):

are essentially the same, with the exception of tags -
the first one, test_01_scale_vm, had "required_hardware=true";
the second, test_02_scale_vm_without_hypervisor_specifics, had 
"required_hardware=false".

Essentially, we can get this test to run on both Simulator and XenServer by 
changing test_01_scale_vm to "required_hardware=false",

and test_02_scale_vm_without_hypervisor_specifics can then be deleted.

The reason for the failure on XS is the following - "Not upgrading vm 
VM[User|i-23-28-VM] since it already has the requested service offering"
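
The deduplication described above can be sketched as follows. This is a hedged 
illustration: the real smoke tests use nose's `attrib` plugin, but a minimal 
stand-in `attr` decorator is defined here so the snippet stays self-contained:

```python
# Minimal stand-in for nose's @attr decorator, as used by the Marvin smoke
# tests, to illustrate the proposed fix: keep a single test tagged
# required_hardware="false" so a run filtered with
#   -a tags=basic,required_hardware=false
# selects it on both the Simulator and XenServer. The duplicate test_02
# method is simply deleted.
def attr(**kwargs):
    def decorate(func):
        for key, value in kwargs.items():
            setattr(func, key, value)  # nose filters on these attributes
        return func
    return decorate

class TestScaleVm:
    @attr(tags=["basic", "advanced"], required_hardware="false")
    def test_01_scale_vm(self):
        """Scale the VM via the API and verify the new service offering."""
    # test_02_scale_vm_without_hypervisor_specifics is removed: its body was
    # identical; only the required_hardware tag differed.
```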

Following are the logs:
Test scale virtual machine ... === TestName: test_01_scale_vm | Status : 
SUCCESS ===
ok
test_02_scale_vm_without_hypervisor_specifics 
(integration.smoke.test_scale_vm.TestScaleVm) ... === TestName: 
test_02_scale_vm_without_hypervisor_specifics | Status : EXCEPTION ===
ERROR

==
ERROR: test_02_scale_vm_without_hypervisor_specifics 
(integration.smoke.test_scale_vm.TestScaleVm)
--
Traceback (most recent call last):
  File "/root/cloudstack/test/integration/smoke/test_scale_vm.py", line 234, in 
test_02_scale_vm_without_hypervisor_specifics
self.apiclient.scaleVirtualMachine(cmd)
  File 
"/usr/local/lib/python2.7/dist-packages/marvin/cloudstackAPI/cloudstackAPIClient.py",
 line 797, in scaleVirtualMachine
response = self.connection.marvinRequest(command, response_type=response, 
method=method)
  File "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackConnection.py", 
line 379, in marvinRequest
raise e
Exception: Job failed: {jobprocstatus : 0, created : 
u'2015-09-30T01:16:45+', cmd : 
u'org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin', userid : 
u'd46c0476-670a-11e5-8245-96e5a2a4ae9a', jobstatus : 2, jobid : 
u'ad32dee5-da3c-42c3-bdc3-35928b47697f', jobresultcode : 530, jobresulttype : 
u'object', jobresult : {errorcode : 431, errortext : u'Not upgrading vm 
VM[User|i-23-28-VM] since it already has the requested service offering 
(BigInstance)'}, accountid : u'd46bf47c-670a-11e5-8245-96e5a2a4ae9a'}
 >> begin captured stdout << -
=== TestName: test_02_scale_vm_without_hypervisor_specifics | Status : 
EXCEPTION ===


- >> end captured stdout << --
 >> begin captured logging << 
test_02_scale_vm_without_hypervisor_specifics 
(integration.smoke.test_scale_vm.TestScaleVm): DEBUG: STARTED : TC: 
test_02_scale_vm_without_hypervisor_specifics :::
test_02_scale_vm_without_hypervisor_specifics 
(integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Payload: 
{'isdynamicallyscalable': 'true', 'apiKey': 
u'FI3p7aHiRMfWK_oV_T9_i8uY-YegVuIR3mvV7pS3w7s_2-krRV-GMGXoBoVm0454fiZt6FgwOH86gEPenLox0w',
 'response': 'json', 'command': 'updateVirtualMachine', 'signature': 
'4dANF6uDGtaOk6jIDb901ES+Oq8=', 'id': u'38c1ced0-693f-4e31-b976-9f4161ac57bb'}
test_02_scale_vm_without_hypervisor_specifics 
(integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Sending GET Cmd : 
updateVirtualMachine===
urllib3.connectionpool: INFO: Starting new HTTP connection (1): 10.220.135.73
urllib3.connectionpool: DEBUG: "GET 
/client/api?isdynamicallyscalable=true&apiKey=FI3p7aHiRMfWK_oV_T9_i8uY-YegVuIR3mvV7pS3w7s_2-krRV-GMGXoBoVm0454fiZt6FgwOH86gEPenLox0w&response=json&command=updateVirtualMachine&signature=4dANF6uDGtaOk6jIDb901ES%2BOq8%3D&id=38c1ced0-693f-4e31-b976-9f4161ac57bb
 HTTP/1.1" 200 1703
test_02_scale_vm_without_hypervisor_specifics 
(integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Response : {domain : 
u'ROOT', domainid : u'a6d8fc3a-670a-11e5-8245-96e5a2a4ae9a', haenable : False, 
templatename : u'CentOS 5.6(64-bit) no GUI (XenServer)', securitygroup : 
[{egressrule : [], account : u'test-a-TestScaleVm-D0XS7G', name : 
u'basic_sec_grp-9HYGF7', virtualmachineids : [], tags : [], ingressrule : [], 
id : u'544c2c1e-5487-4ed2-8a5a-716477c915c8'}], zoneid : 
u'ca2f3c87-c0ed-44c5-99ad-911287e8ef1d', cpunumber : 1, ostypeid : 142, 
passwordenabled : False, instancename 

[jira] [Updated] (CLOUDSTACK-8924) [Blocker] test duplicated in test_scale.vm.py

2015-09-30 Thread Raja Pullela (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raja Pullela updated CLOUDSTACK-8924:
-
Fix Version/s: 4.6.0

> [Blocker] test duplicated in test_scale.vm.py
> -
>
> Key: CLOUDSTACK-8924
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8924
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Raja Pullela
>Priority: Blocker
> Fix For: 4.6.0
>
>
> This is a blocker because BVTs for XS and Simulator are failing.
> Simulator zone - it is failing because
> the setup didn't have Dynamic Scaling enabled in the global settings. This 
> is a genuine failure; once it was enabled, the tests ran fine.
> XS basic/Adv zone - it is failing because
> the methods 
> test_01_scale_vm(self):
> test_02_scale_vm_without_hypervisor_specifics(self):
> are essentially the same, with the exception of tags -
> the first one, test_01_scale_vm, had "required_hardware=true";
> the second, test_02_scale_vm_without_hypervisor_specifics, had 
> "required_hardware=false".
> Essentially, we can get this test to run on both Simulator and XenServer by 
> changing test_01_scale_vm to "required_hardware=false",
> and test_02_scale_vm_without_hypervisor_specifics can then be deleted.
> The reason for the failure on XS is the following - "Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering"
> Following are the logs:
> Test scale virtual machine ... === TestName: test_01_scale_vm | Status : 
> SUCCESS ===
> ok
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm) ... === TestName: 
> test_02_scale_vm_without_hypervisor_specifics | Status : EXCEPTION ===
> ERROR
> ==
> ERROR: test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm)
> --
> Traceback (most recent call last):
>   File "/root/cloudstack/test/integration/smoke/test_scale_vm.py", line 234, 
> in test_02_scale_vm_without_hypervisor_specifics
> self.apiclient.scaleVirtualMachine(cmd)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackAPI/cloudstackAPIClient.py",
>  line 797, in scaleVirtualMachine
> response = self.connection.marvinRequest(command, response_type=response, 
> method=method)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackConnection.py", line 
> 379, in marvinRequest
> raise e
> Exception: Job failed: {jobprocstatus : 0, created : 
> u'2015-09-30T01:16:45+', cmd : 
> u'org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin', userid : 
> u'd46c0476-670a-11e5-8245-96e5a2a4ae9a', jobstatus : 2, jobid : 
> u'ad32dee5-da3c-42c3-bdc3-35928b47697f', jobresultcode : 530, jobresulttype : 
> u'object', jobresult : {errorcode : 431, errortext : u'Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering 
> (BigInstance)'}, accountid : u'd46bf47c-670a-11e5-8245-96e5a2a4ae9a'}
>  >> begin captured stdout << -
> === TestName: test_02_scale_vm_without_hypervisor_specifics | Status : 
> EXCEPTION ===
> - >> end captured stdout << --
>  >> begin captured logging << 
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: STARTED : 
> TC: test_02_scale_vm_without_hypervisor_specifics :::
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Payload: 
> {'isdynamicallyscalable': 'true', 'apiKey': 
> u'FI3p7aHiRMfWK_oV_T9_i8uY-YegVuIR3mvV7pS3w7s_2-krRV-GMGXoBoVm0454fiZt6FgwOH86gEPenLox0w',
>  'response': 'json', 'command': 'updateVirtualMachine', 'signature': 
> '4dANF6uDGtaOk6jIDb901ES+Oq8=', 'id': u'38c1ced0-693f-4e31-b976-9f4161ac57bb'}
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Sending GET Cmd 
> : updateVirtualMachine===
> urllib3.connectionpool: INFO: Starting new HTTP connection (1): 10.220.135.73
> urllib3.connectionpool: DEBUG: "GET 
> /client/api?isdynamicallyscalable=true&apiKey=FI3p7aHiRMfWK_oV_T9_i8uY-YegVuIR3mvV7pS3w7s_2-krRV-GMGXoBoVm0454fiZt6FgwOH86gEPenLox0w&response=json&command=updateVirtualMachine&signature=4dANF6uDGtaOk6jIDb901ES%2BOq8%3D&id=38c1ced0-693f-4e31-b976-9f4161ac57bb
>  HTTP/1.1" 200 1703
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Response : {domain : 
> u'ROOT', domainid : u'a6d8fc3a-670a-11e5-8245-96e5a2a4ae9a', haenable : 
> 

[jira] [Commented] (CLOUDSTACK-8880) Allocated memory more than total memory on a KVM host

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936500#comment-14936500
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8880:


Github user runseb commented on the pull request:

https://github.com/apache/cloudstack/pull/847#issuecomment-144315690
  
Hi @kishankavala if you can answer @remibergsma and @borisroman questions, 
we can move forward with your PR.


> Allocated memory more than total memory on a KVM host
> -
>
> Key: CLOUDSTACK-8880
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8880
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Reporter: Kishan Kavala
>Assignee: Kishan Kavala
>
> With memory over-provisioning set to 1, when the management server starts 
> VMs in parallel on one host, the memory allocated on that KVM host can be 
> larger than its actual physical memory.
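
The hazard can be illustrated with a minimal sketch (plain Python, not 
CloudStack code): parallel starts can each pass the capacity check before 
either reservation lands, unless the check-and-reserve step is atomic per host:

```python
import threading

# Minimal sketch (not CloudStack code) of the over-allocation hazard: with an
# over-provisioning factor of 1, reservations must never exceed physical
# memory, so the capacity check and the reservation happen under one lock.
class HostCapacity:
    def __init__(self, total_mb):
        self.total_mb = total_mb
        self.allocated_mb = 0
        self._lock = threading.Lock()

    def try_reserve(self, ram_mb, overprovision=1.0):
        # check and reserve atomically, so two parallel VM starts cannot
        # both pass the check against the same free capacity
        with self._lock:
            if self.allocated_mb + ram_mb <= self.total_mb * overprovision:
                self.allocated_mb += ram_mb
                return True
            return False

host = HostCapacity(total_mb=8192)
results = [host.try_reserve(4096) for _ in range(3)]
# only two 4 GiB reservations fit; the third is rejected
```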



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8879) Depend on rados-java 0.2.0

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936524#comment-14936524
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8879:


Github user wido commented on the pull request:

https://github.com/apache/cloudstack/pull/889#issuecomment-144322421
  
@runseb No, not really. All my unit tests are succeeding on rados-java 
itself. It should work as expected. But I have no in-CloudStack test.


> Depend on rados-java 0.2.0
> --
>
> Key: CLOUDSTACK-8879
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8879
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.2
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
>Priority: Critical
> Fix For: 4.5.3, 4.6.0
>
>
> Need to depend on rados-java 0.2.0 due to a couple of crashes which have 
> occurred.
> Will need some new imports in LibvirtComputingResource, but no major code 
> changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8879) Depend on rados-java 0.2.0

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936498#comment-14936498
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8879:


Github user runseb commented on the pull request:

https://github.com/apache/cloudstack/pull/889#issuecomment-144315290
  
LGTM, small change.
@wido, what tests can be run to check for this?


> Depend on rados-java 0.2.0
> --
>
> Key: CLOUDSTACK-8879
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8879
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.2
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
>Priority: Critical
> Fix For: 4.5.3, 4.6.0
>
>
> Need to depend on rados-java 0.2.0 due to a couple of crashes which have 
> occurred.
> Will need some new imports in LibvirtComputingResource, but no major code 
> changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8924) [Blocker] test duplicated in test_scale_vm.py

2015-09-30 Thread Raja Pullela (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936533#comment-14936533
 ] 

Raja Pullela commented on CLOUDSTACK-8924:
--

modified the test "test_01_scale_vm" with required_hardware=false on simulator 
setup and it works.  So, I think we can let the second method go.  I will also 
test this on XS.

root@localhost:~/cloudstack# nosetests --with-marvin 
--marvin-config=/root/cloudstack/setup/dev/local1.cfg --zone=Sandbox-simulator 
--hypervisor=simulator -a tags=basic,required_hardware=false 
/root/cloudstack/test/integration/smoke/test_scale_vm.py

 Marvin Init Started 

=== Marvin Parse Config Successful ===

=== Marvin Setting TestData Successful===

 Log Folder Path: /tmp//MarvinLogs//Sep_30_2015_08_06_32_AWNF1O. All logs 
will be available here 

=== Marvin Init Logging Successful===

 Marvin Init Successful 
===final results are now copied to: /tmp//MarvinLogs/test_scale_vm_OPS7AD===
root@localhost:~/cloudstack# cd /tmp//MarvinLogs/test_scale_vm_OPS7AD
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# ls 
failed_plus_exceptions.txt  results.txt  runinfo.txt 
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# vi  results.txt 
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# ls -al
total 48
drwxr-xr-x 2 root root  4096 Sep 30 08:07 .
drwxr-xr-x 8 root root  4096 Sep 30 08:06 ..
-rw-r--r-- 1 root root 0 Sep 30 08:06 failed_plus_exceptions.txt
-rw-r--r-- 1 root root   186 Sep 30 08:06 results.txt
-rw-r--r-- 1 root root 36164 Sep 30 08:06 runinfo.txt 
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD#
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# cat results.txt
Test scale virtual machine ... === TestName: test_01_scale_vm | Status : SUCCESS ===
ok

--
Ran 1 test in 23.455s

OK


> [Blocker] test duplicated in test_scale_vm.py
> -
>
> Key: CLOUDSTACK-8924
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8924
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Raja Pullela
>Priority: Blocker
> Fix For: 4.6.0
>
>
> This is a blocker because BVTs for XS and Simulator are failing.
> Simulator zone - it is failing because
> the setup didn't have Dynamic Scaling enabled in the global settings. This 
> is a genuine failure; once it was enabled, the tests ran fine.
> XS basic/Adv zone - it is failing because
> the methods 
> test_01_scale_vm(self):
> test_02_scale_vm_without_hypervisor_specifics(self):
> are essentially the same, with the exception of tags -
> the first one, test_01_scale_vm, had "required_hardware=true";
> the second, test_02_scale_vm_without_hypervisor_specifics, had 
> "required_hardware=false".
> Essentially, we can get this test to run on both Simulator and XenServer by 
> changing test_01_scale_vm to "required_hardware=false",
> and test_02_scale_vm_without_hypervisor_specifics can then be deleted.
> The reason for the failure on XS is the following - "Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering"
> Following are the logs:
> Test scale virtual machine ... === TestName: test_01_scale_vm | Status : 
> SUCCESS ===
> ok
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm) ... === TestName: 
> test_02_scale_vm_without_hypervisor_specifics | Status : EXCEPTION ===
> ERROR
> ==
> ERROR: test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm)
> --
> Traceback (most recent call last):
>   File "/root/cloudstack/test/integration/smoke/test_scale_vm.py", line 234, 
> in test_02_scale_vm_without_hypervisor_specifics
> self.apiclient.scaleVirtualMachine(cmd)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackAPI/cloudstackAPIClient.py",
>  line 797, in scaleVirtualMachine
> response = self.connection.marvinRequest(command, response_type=response, 
> method=method)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackConnection.py", line 
> 379, in marvinRequest
> raise e
> Exception: Job failed: {jobprocstatus : 0, created : 
> u'2015-09-30T01:16:45+', cmd : 
> u'org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin', userid : 
> u'd46c0476-670a-11e5-8245-96e5a2a4ae9a', jobstatus : 2, jobid : 
> u'ad32dee5-da3c-42c3-bdc3-35928b47697f', jobresultcode : 530, jobresulttype : 
> u'object', jobresult : {errorcode : 431, errortext : u'Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service 

[jira] [Commented] (CLOUDSTACK-8894) Dynamic scaling is not restricted when destination offering has changes in the vGPU type

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936530#comment-14936530
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8894:


Github user runseb commented on the pull request:

https://github.com/apache/cloudstack/pull/868#issuecomment-144322859
  
@anshul1886 Can you add a bit of a description here?
Is this fixing a bug, or is it a new feature? Can you add Travis tests to 
check this?
Many thanks.


> Dynamic scaling is not restricted when destination offering has changes in 
> the vGPU type
> 
>
> Key: CLOUDSTACK-8894
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8894
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
>
> Steps:
> 1. Install and configure XenServer 6.5 with vGPU enabled. Enable dynamic 
> scaling.
> 2. Deploy a VM using a K160Q-type Windows 7 template with PV tools 
> installed and dynamic scaling enabled.
> 3. Try dynamic scaling with an offering which has K180Q defined.
> Observation: 
> 1. vGPU resource dynamic scaling is currently not supported, but CloudStack 
> returns success and updates the VM details with the new offering details, 
> including the new vGPU type. 
> 2. On XenServer, however, there is no change: the VM keeps the old vGPU 
> type. This is not correct.
> Expected Result:
> Dynamic scaling should be restricted when the source/destination offering 
> has a vGPU type on a vGPU-enabled VM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8896) Allocated percentage of storage can go beyond 100%

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936586#comment-14936586
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8896:


Github user runseb commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/873#discussion_r40771703
  
--- Diff: server/src/com/cloud/storage/StorageManagerImpl.java ---
@@ -1736,7 +1737,10 @@ public boolean 
storagePoolHasEnoughSpace(List volumes, StoragePool pool)
 allocatedSizeWithtemplate = 
_capacityMgr.getAllocatedPoolCapacity(poolVO, tmpl);
 }
 }
-if (volumeVO.getState() != Volume.State.Ready) {
+// A ready state volume is already allocated in a pool. so the 
asking size is zero for it.
+// In case the volume is moving across pools or is not ready 
yet, the asking size has to be computed
+s_logger.debug("pool id for the volume with id: " + 
volumeVO.getId() + " is: " + volumeVO.getPoolId());
--- End diff --

@karuturi waiting for your reply on wido's question :)


> Allocated percentage of storage can go beyond 100%
> --
>
> Key: CLOUDSTACK-8896
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8896
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2, 4.6.0
>Reporter: Rajani Karuturi
>Assignee: Rajani Karuturi
>
> This issue occurs when a volume in Ready state is moved across storage pools.
> Let us say there is a data volume, volume0 in Ready state in a cluster scope 
> primary storage primary0.
> Now, when an operation is attempted to attach this volume to a vm in another 
> cluster, the volume is moved to the new cluster and the asking size is zero 
> at this time.
> you can observe logs like below with asking size 0 in the management server 
> logs.
> 2015-09-22 08:49:02,754 DEBUG [c.c.s.StorageManagerImpl] 
> (Work-Job-Executor-6:ctx-27e0990a job-37/job-38 ctx-985e5ad0) 
> (logid:a0a97129) Checking pool: 1 for volume allocation 
> [Vol[8|vm=null|DATADISK]], maxSize : 3298534883328, totalAllocatedSize : 
> 24096276480, askingSize : 0, allocated disable threshold: 0.85
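
The logic under review in StorageManagerImpl.storagePoolHasEnoughSpace can be 
sketched as follows. This is a hedged illustration; `asking_size` and its 
parameters are made-up names for the sketch, not CloudStack's:

```python
# Hedged sketch of the fix discussed in the diff: a Ready-state volume is
# already counted against its current pool, so its asking size is zero only
# when it stays on that pool; when it is moving across pools (or is not
# Ready yet), the full size must be asked for. Names are illustrative.
def asking_size(volume_state, volume_size, volume_pool_id, target_pool_id):
    if volume_state == "Ready" and volume_pool_id == target_pool_id:
        return 0  # already allocated in this pool, nothing extra to ask
    return volume_size  # cross-pool move or fresh allocation

# a Ready volume moving from pool 1 to pool 2 must count its full size
print(asking_size("Ready", 24096276480, volume_pool_id=1, target_pool_id=2))
```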



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8924) [Blocker] test duplicated in test_scale_vm.py

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936656#comment-14936656
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8924:


GitHub user sanju1010 opened a pull request:

https://github.com/apache/cloudstack/pull/900

CLOUDSTACK-8924: Removed duplicate test from test_scale_vm.py

Please go through CS-8924 for more details.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sanju1010/cloudstack scale_vm

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/900.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #900


commit 33bdfc773a39d7ff245fb4f50299a7b6cc0391ef
Author: sanjeev 
Date:   2015-09-30T09:53:30Z

CLOUDSTACK-8924: Removed duplicate test from test_scale_vm.py




> [Blocker] test duplicated in test_scale_vm.py
> -
>
> Key: CLOUDSTACK-8924
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8924
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Raja Pullela
>Priority: Blocker
> Fix For: 4.6.0
>
>
> This is a blocker because BVTs for XS and Simulator are failing.
> Simulator zone - it is failing because
> the setup didn't have Dynamic Scaling enabled in the global settings. This 
> is a genuine failure; once it was enabled, the tests ran fine.
> XS basic/Adv zone - it is failing because
> the methods 
> test_01_scale_vm(self):
> test_02_scale_vm_without_hypervisor_specifics(self):
> are essentially the same, with the exception of tags -
> the first one, test_01_scale_vm, had "required_hardware=true";
> the second, test_02_scale_vm_without_hypervisor_specifics, had 
> "required_hardware=false".
> Essentially, we can get this test to run on both Simulator and XenServer by 
> changing test_01_scale_vm to "required_hardware=false",
> and test_02_scale_vm_without_hypervisor_specifics can then be deleted.
> The reason for the failure on XS is the following - "Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering"
> Following are the logs:
> Test scale virtual machine ... === TestName: test_01_scale_vm | Status : 
> SUCCESS ===
> ok
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm) ... === TestName: 
> test_02_scale_vm_without_hypervisor_specifics | Status : EXCEPTION ===
> ERROR
> ==
> ERROR: test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm)
> --
> Traceback (most recent call last):
>   File "/root/cloudstack/test/integration/smoke/test_scale_vm.py", line 234, 
> in test_02_scale_vm_without_hypervisor_specifics
> self.apiclient.scaleVirtualMachine(cmd)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackAPI/cloudstackAPIClient.py",
>  line 797, in scaleVirtualMachine
> response = self.connection.marvinRequest(command, response_type=response, 
> method=method)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackConnection.py", line 
> 379, in marvinRequest
> raise e
> Exception: Job failed: {jobprocstatus : 0, created : 
> u'2015-09-30T01:16:45+', cmd : 
> u'org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin', userid : 
> u'd46c0476-670a-11e5-8245-96e5a2a4ae9a', jobstatus : 2, jobid : 
> u'ad32dee5-da3c-42c3-bdc3-35928b47697f', jobresultcode : 530, jobresulttype : 
> u'object', jobresult : {errorcode : 431, errortext : u'Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering 
> (BigInstance)'}, accountid : u'd46bf47c-670a-11e5-8245-96e5a2a4ae9a'}
>  >> begin captured stdout << -
> === TestName: test_02_scale_vm_without_hypervisor_specifics | Status : 
> EXCEPTION ===
> - >> end captured stdout << --
>  >> begin captured logging << 
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: STARTED : 
> TC: test_02_scale_vm_without_hypervisor_specifics :::
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Payload: 
> {'isdynamicallyscalable': 'true', 'apiKey': 
> u'FI3p7aHiRMfWK_oV_T9_i8uY-YegVuIR3mvV7pS3w7s_2-krRV-GMGXoBoVm0454fiZt6FgwOH86gEPenLox0w',
>  'response': 'json', 'command': 

[jira] [Commented] (CLOUDSTACK-8906) /var/log/cloud/ doesn't get logrotated on xenserver

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936675#comment-14936675
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8906:


Github user koushik-das commented on the pull request:

https://github.com/apache/cloudstack/pull/883#issuecomment-144347154
  
@SudharmaJain Why can't the XenServer600Resource be used to handle XS 6.0.2 
hosts? The only change I see is the patch script 
(scripts/vm/hypervisor/xenserver/xenserver602/patch). If the content of the 
patch file is the same as that of scripts/vm/hypervisor/xenserver/xenserver60/patch, 
then there is no need for a separate 6.0.2 resource; PR #861 should be used 
instead.


> /var/log/cloud/ doesn't get logrotated on xenserver 
> 
>
> Key: CLOUDSTACK-8906
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8906
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: sudharma jain
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8913) Search box in Templates tab out of alignment

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936712#comment-14936712
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8913:


Github user nitin-maharana commented on the pull request:

https://github.com/apache/cloudstack/pull/891#issuecomment-144361240
  
Hi @runseb, I added two screenshots, one before and one after the change.

Before Change:

![pr_891_before_change](https://cloud.githubusercontent.com/assets/12583725/10191319/669fd8ce-6790-11e5-9a01-cccb96b7a500.png)

After Change:

![pr_891_after_change](https://cloud.githubusercontent.com/assets/12583725/10191325/7597eb1e-6790-11e5-93fb-bc9cdac2e15b.png)



> Search box in Templates tab out of alignment
> 
>
> Key: CLOUDSTACK-8913
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8913
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.5.1
>Reporter: Nitin Kumar Maharana
>
> CURRENT BEHAVIOUR
> 
> The search box in the Templates tab is not aligned with the other buttons in 
> Firefox, Chrome, or Safari.
> EXPECTED BEHAVIOUR
> 
> The search box in the Templates tab should be aligned with the other buttons 
> in all browsers.





[jira] [Commented] (CLOUDSTACK-8911) VM start job got stuck in loop looking for suitable host

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936646#comment-14936646
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8911:


Github user koushik-das commented on the pull request:

https://github.com/apache/cloudstack/pull/895#issuecomment-144343011
  
LGTM. Have verified the scenario for max. guest limit.


> VM start job got stuck in loop looking for suitable host
> 
>
> Key: CLOUDSTACK-8911
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8911
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.3.0
>Reporter: sudharma jain
>






[jira] [Commented] (CLOUDSTACK-8924) [Blocker] test duplicated in test_scale_vm.py

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936659#comment-14936659
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8924:


Github user sanju1010 commented on the pull request:

https://github.com/apache/cloudstack/pull/900#issuecomment-144344567
  
root@localhost:~/cloudstack# nosetests --with-marvin 
--marvin-config=/root/cloudstack/setup/dev/local1.cfg --zone=Sandbox-simulator 
--hypervisor=simulator -a tags=basic,required_hardware=false 
/root/cloudstack/test/integration/smoke/test_scale_vm.py

 Marvin Init Started 

=== Marvin Parse Config Successful ===

=== Marvin Setting TestData Successful===

 Log Folder Path: /tmp//MarvinLogs//Sep_30_2015_08_06_32_AWNF1O. All 
logs will be available here 

=== Marvin Init Logging Successful===

 Marvin Init Successful 
===final results are now copied to: /tmp//MarvinLogs/test_scale_vm_OPS7AD===
root@localhost:~/cloudstack# cd /tmp//MarvinLogs/test_scale_vm_OPS7AD
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# ls
failed_plus_exceptions.txt  results.txt  runinfo.txt
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# vi  results.txt
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# ls -al
total 48
drwxr-xr-x 2 root root  4096 Sep 30 08:07 .
drwxr-xr-x 8 root root  4096 Sep 30 08:06 ..
-rw-r--r-- 1 root root 0 Sep 30 08:06 failed_plus_exceptions.txt
-rw-r--r-- 1 root root   186 Sep 30 08:06 results.txt
-rw-r--r-- 1 root root 36164 Sep 30 08:06 runinfo.txt
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD#
root@localhost:/tmp/MarvinLogs/test_scale_vm_OPS7AD# cat results.txt
Test scale virtual machine ... === TestName: test_01_scale_vm | Status : 
SUCCESS ===
ok

--
Ran 1 test in 23.455s

OK


> [Blocker] test duplicated in test_scale_vm.py
> -
>
> Key: CLOUDSTACK-8924
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8924
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Raja Pullela
>Priority: Blocker
> Fix For: 4.6.0
>
>
> This is a blocker because BVTs for XS and Simulator are failing.
> Simulator zone - it is failing because
> this is a genuine failure: the setup didn't have Dynamic Scaling 
> enabled in the global settings. Once it was enabled, the tests ran fine.
> XS basic/Adv zone - it is failing because the methods 
> test_01_scale_vm(self):
> test_02_scale_vm_without_hypervisor_specifics(self):
> are essentially the same except for their tags:
> the first one, test_01_scale_vm, had "required_hardware=true";
> the second, test_02_scale_vm_without_hypervisor_specifics, had 
> "required_hardware=false".
> Essentially we can get this test to run on both Simulator and XenServer by 
> changing the tag to "required_hardware=false", 
> and test_02_scale_vm_without_hypervisor_specifics can then be deleted.
> The reason for failure on the XS is due to the following - "Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering"
> Following are the logs:
> Test scale virtual machine ... === TestName: test_01_scale_vm | Status : 
> SUCCESS ===
> ok
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm) ... === TestName: 
> test_02_scale_vm_without_hypervisor_specifics | Status : EXCEPTION ===
> ERROR
> ==
> ERROR: test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm)
> --
> Traceback (most recent call last):
>   File "/root/cloudstack/test/integration/smoke/test_scale_vm.py", line 234, 
> in test_02_scale_vm_without_hypervisor_specifics
> self.apiclient.scaleVirtualMachine(cmd)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackAPI/cloudstackAPIClient.py",
>  line 797, in scaleVirtualMachine
> response = self.connection.marvinRequest(command, response_type=response, 
> method=method)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackConnection.py", line 
> 379, in marvinRequest
> raise e
> Exception: Job failed: {jobprocstatus : 0, created : 
> u'2015-09-30T01:16:45+', cmd : 
> u'org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin', userid : 
> u'd46c0476-670a-11e5-8245-96e5a2a4ae9a', jobstatus : 2, jobid : 
> u'ad32dee5-da3c-42c3-bdc3-35928b47697f', jobresultcode : 530, jobresulttype : 
> u'object', jobresult : {errorcode : 431, errortext : 

[jira] [Commented] (CLOUDSTACK-8848) Unexpected VR reboot after out-of-band migration

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936671#comment-14936671
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8848:


Github user resmo commented on the pull request:

https://github.com/apache/cloudstack/pull/885#issuecomment-144346880
  
Well, I need it in 4.5.3. I would suggest we take this fix for now, for 4.6 
as well, and do a proper refactor for 4.7/5.0.


> Unexpected VR reboot after out-of-band migration
> 
>
> Key: CLOUDSTACK-8848
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8848
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.5.2, 4.6.0
>Reporter: René Moser
>Assignee: René Moser
>Priority: Blocker
> Fix For: 4.5.3, 4.6.0
>
>
> In some conditions (a race condition), the VR gets rebooted after an 
> out-of-band migration was done on vCenter. 
> {panel:bgColor=#CE}
> Note: the new global setting in 4.5.2, "VR reboot after out of band migration", 
> is set to *false*, so this looks more like a bug.
> {panel}
> After a VR migration to a host, _and_ while the VM power state report gathering 
> is running, the VR (and any user VM as well) will get into the 
> "PowerReportMissing" state.
> If the VM is a VR, it will be powered off and started again on vCenter. That 
> is what we see. It cannot be reproduced on every migration, but 
> the problem seems related to "PowerReportMissing".
> I grepped the source and found this related line:
> https://github.com/apache/cloudstack/blob/4.5.2/engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java#L3616
> and it seems the graceful period might also be involved:
> https://github.com/apache/cloudstack/blob/4.5.2/engine/orchestration/src/com/cloud/vm/VirtualMachinePowerStateSyncImpl.java#L110
> If it is a user VM, we see in the logs that its state is set to 
> powered-off, but the VM keeps running. After a while a new VM power state 
> report runs and the state for the user VM is updated to Running 
> again. Below are the logs.
> h2. VR  r-342-VM reboot log
> {code:none}
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) Run missing VM report. current time: 
> 1442302626508
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) Detected missing VM. host: 19, vm id: 
> 342, power state: PowerReportMissing, last state update: 1442302506000
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) vm id: 342 - time since last state 
> update(120508ms) has passed graceful period
> 2015-09-15 09:37:06,517 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) VM state report is updated. host: 19, 
> vm id: 342, power state: PowerReportMissing 
> 2015-09-15 09:37:06,525 INFO  [c.c.v.VirtualMachineManagerImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) VM r-342-VM is at Running and we 
> received a power-off report while there is no pending jobs on it
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.t.Request] 
> (DirectAgentCronJob-253:ctx-c4f59216) Seq 19-4511199451741686482: Sending  { 
> Cmd , MgmtId: 345051122106, via: 19(cu01-testpod01-esx03.stxt.media.int), 
> Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}}]
>  }
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.t.Request] 
> (DirectAgentCronJob-253:ctx-c4f59216) Seq 19-4511199451741686482: Executing:  
> { Cmd , MgmtId: 345051122106, via: 19(cu01-testpod01-esx03.stxt.media.int), 
> Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}}]
>  }
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-136:ctx-9bc0a401) Seq 19-4511199451741686482: Executing request
> 2015-09-15 09:37:06,532 INFO  [c.c.h.v.r.VmwareResource] 
> (DirectAgent-136:ctx-9bc0a401 cu01-testpod01-esx03.stxt.media.int, cmd: 
> StopCommand) Executing resource StopCommand: 
> {"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}
> 2015-09-15 09:37:06,551 DEBUG [c.c.h.v.m.HostMO] 
> (DirectAgent-136:ctx-9bc0a401 cu01-testpod01-esx03.stxt.media.int, cmd: 
> StopCommand) find VM r-342-VM on host
> 2015-09-15 09:37:06,551 INFO  [c.c.h.v.m.HostMO] 
> (DirectAgent-136:ctx-9bc0a401 cu01-testpod01-esx03.stxt.media.int, cmd: 
> StopCommand) VM r-342-VM not found in host cache
> 2015-09-15 09:37:06,551 DEBUG 

[jira] [Commented] (CLOUDSTACK-8879) Depend on rados-java 0.2.0

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936672#comment-14936672
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8879:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/889#issuecomment-144346920
  
@borisroman thank you!


> Depend on rados-java 0.2.0
> --
>
> Key: CLOUDSTACK-8879
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8879
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.2
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
>Priority: Critical
> Fix For: 4.5.3, 4.6.0
>
>
> Need to depend on rados-java 0.2.0 due to a couple of crashes which have 
> occurred.
> Will need some new imports in LibvirtComputingResource, but no major code 
> changes.





[jira] [Commented] (CLOUDSTACK-8808) Successfully registered VHD template is downloaded again due to missing virtualsize property in template.properties

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936744#comment-14936744
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8808:


Github user borisroman commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/901#discussion_r40784661
  
--- Diff: core/src/com/cloud/storage/template/QCOW2Processor.java ---
@@ -75,6 +76,16 @@ public FormatInfo process(String templatePath, 
ImageFormat format, String templa
 
 @Override
 public long getVirtualSize(File file) throws IOException {
+try {
+long size = getTemplateVirtualSize(file);
--- End diff --

@karuturi Could you please use the QCOW2Utils.getVirtualSize() from the 
com.cloud.utils.storage package instead of using the function defined in the 
QCOW2Processor? :)
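
For context, a QCOW2 file's virtual size lives in its header: per the QCOW2 specification it is a big-endian 64-bit integer at byte offset 24. A minimal standalone sketch of reading it (an illustration only, not CloudStack's QCOW2Utils):

```python
import struct

QCOW2_MAGIC = b"QFI\xfb"   # magic bytes at offset 0
VIRTUAL_SIZE_OFFSET = 24   # per the QCOW2 header layout

def qcow2_virtual_size(header: bytes) -> int:
    """Parse the virtual size from the first 32 bytes of a QCOW2 header."""
    if header[:4] != QCOW2_MAGIC:
        raise ValueError("not a QCOW2 image")
    # Big-endian unsigned 64-bit integer at offset 24.
    return struct.unpack(">Q", header[VIRTUAL_SIZE_OFFSET:VIRTUAL_SIZE_OFFSET + 8])[0]

# Build a fake header claiming a 1 GiB virtual size:
# magic (4 bytes) + version 3 (4 bytes) + 16 bytes of padding fields + size.
fake = QCOW2_MAGIC + struct.pack(">I", 3) + b"\x00" * 16 + struct.pack(">Q", 1 << 30)
print(qcow2_virtual_size(fake))  # 1073741824
```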


> Successfully registered VHD template is downloaded again due to missing 
> virtualsize property in template.properties
> ---
>
> Key: CLOUDSTACK-8808
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8808
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage
>Affects Versions: 4.4.4, 4.6.0
> Environment: Seen on NFS as sec storage
>Reporter: Remi Bergsma
>Assignee: Rajani Karuturi
>Priority: Blocker
>
> We noticed all of our templates are downloaded again as soon as we restart 
> SSVM, its Cloud service or the management server it connects to.
> A scan done by the SSVM (listvmtmplt.sh) returns the template, but it is 
> later rejected ("Post download installation was not completed") because of an 
> invalid format ("Format is invalid") caused by the missing virtualSize 
> property in template.properties.
> The initial registration did succeed, however. I'd want the registration 
> either to fail or to succeed consistently, not succeed at first (and spin up 
> VMs without a problem) and then fail unexpectedly later.
> This is the script processing the download:
> services/secondary-storage/server/src/org/apache/cloudstack/storage/template/DownloadManagerImpl.java
> private List<String> listTemplates(String rootdir) {
>     List<String> result = new ArrayList<String>();
>     Script script = new Script(listTmpltScr, s_logger);
>     script.add("-r", rootdir);
> For example this becomes:
> ==> /usr/local/cloud/systemvm/scripts/storage/secondary/listvmtmplt.sh -r 
> /mnt/SecStorage/ee8633dd-5dbd-39a3-b3ea-801ca0a20da0
> In this log file, it processes the output:
> less /var/log/cloud/cloud.out
> 2015-09-04 08:39:54,622 WARN  [storage.template.DownloadManagerImpl] 
> (agentRequest-Handler-1:null) Post download installation was not completed 
> for /mnt/SecStorage/ee8633dd-5dbd-39a3-b3ea-801ca0a20da0/template/tmpl/2/1607
> This error message is generated here:
> services/secondary-storage/server/src/org/apache/cloudstack/storage/template/DownloadManagerImpl.java
>  
> List<String> publicTmplts = listTemplates(templateDir);
> for (String tmplt : publicTmplts) {
>     String path = tmplt.substring(0, tmplt.lastIndexOf(File.separator));
>     TemplateLocation loc = new TemplateLocation(_storage, path);
>     try {
>         if (!loc.load()) {
>             s_logger.warn("Post download installation was not completed for " + path);
>             // loc.purge();
>             _storage.cleanup(path, templateDir);
>             continue;
>         }
>     } catch (IOException e) {
>         s_logger.warn("Unable to load template location " + path, e);
>         continue;
>     }
> In the logs this message is also seen:
> MCCP-ADMIN-1|s-32436-VM CLOUDSTACK: 10:09:17,333  WARN TemplateLocation:196 - 
> Format is invalid 
> It is generated here:
> .//core/src/com/cloud/storage/template/TemplateLocation.java
> public boolean addFormat(FormatInfo
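
The post-download validation described above can be sketched as follows (a simplified illustration with a hypothetical properties parser and an illustrative required-key set; the real TemplateLocation/Processor classes do much more):

```python
# Sketch: a template.properties file without a virtualsize entry fails
# post-download validation, even though the original registration succeeded.
REQUIRED_KEYS = {"filename", "size", "virtualsize"}  # illustrative set

def parse_properties(text):
    """Parse simple key=value lines into a dict."""
    props = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def is_valid_template(props):
    """A template is only usable if every required property is present."""
    return REQUIRED_KEYS.issubset(props)

broken = parse_properties("filename=routing.vhd\nsize=2101252608\n")
print(is_valid_template(broken))  # False - missing virtualsize -> re-download
```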

[jira] [Commented] (CLOUDSTACK-8899) baremetal VM deployment via service offering with host tag fail

2015-09-30 Thread sebastien goasguen (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936594#comment-14936594
 ] 

sebastien goasguen commented on CLOUDSTACK-8899:


How does this relate to #8897?

> baremetal VM deployment via service offering with host tag fail
> ---
>
> Key: CLOUDSTACK-8899
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8899
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Baremetal, Management Server
>Reporter: Harikrishna Patnala
>Assignee: Harikrishna Patnala
> Fix For: 4.6.0
>
>
> 1. BM zone.
> 2. create service offerings:
> Service offerings hosttag
> --
> bm512SO1 large
> bm512SO2 large2
> bm512SO3 large3
> 3. add 3 hosts with hosttag:
> host IPMI hosttag
> 
> Host1 -> large
> Host2 -> large2
> Host3 -> large3
> 4. Deploying S03V33 using service offering bm512SO3 (host tag 'large3') results 
> in a host with hosttag 'large' - BUG
> 5. Deploying S02V32 using service offering bm512SO2 (host tag 'large2') results 
> in a host with hosttag 'large2' - correct
> 6. Deploying S02secondV34 using service offering bm512SO2 (host tag 'large2') 
> results in a host with hosttag 'large3' - BUG
> 7. Destroy S02V32; the host with hosttag 'large2' becomes an available resource.
> Deploying S01V31 using service offering bm512SO1 (host tag 'large') results in 
> a host with hosttag 'large2' - BUG
> Conclusion: VM deployment using host tags FAILS; deployment simply uses any 
> available host.
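
The host-tag matching the reporter expects can be sketched like this (a generic illustration with invented names, not CloudStack's actual allocator code):

```python
# Hypothetical sketch of tag-based host allocation: a host is only a
# candidate for an offering if its tag set contains the offering's host tag.
def candidate_hosts(hosts, offering_tag):
    """Return hosts whose tag set contains the offering's host tag."""
    return [h for h, tags in hosts.items() if offering_tag in tags]

hosts = {
    "Host1": {"large"},
    "Host2": {"large2"},
    "Host3": {"large3"},
}

# Per the report, bm512SO3 (tag 'large3') must only ever land on Host3.
print(candidate_hosts(hosts, "large3"))  # ['Host3']
```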





[jira] [Commented] (CLOUDSTACK-8848) Unexpected VR reboot after out-of-band migration

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936607#comment-14936607
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8848:


Github user DaanHoogland commented on the pull request:

https://github.com/apache/cloudstack/pull/885#issuecomment-144334476
  
I think this will work, but it is a fix on top of a broken state machine. The 
state machine expects a power report at regular intervals, and when one doesn't 
arrive, that is simply treated as a PowerOff event even though it is not. I 
don't think a fix for that belongs in this PR. We could let this go in for now 
and make a better fix later, though. @resmo, you said you were not in a hurry, 
right?
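
The staleness logic under discussion, where a power report is expected every interval and an overdue report yields PowerReportMissing, can be sketched roughly as follows (a simplified Python illustration with invented names, not CloudStack's actual implementation):

```python
# Sketch of a power-report staleness check: if no report has arrived for a
# VM within the graceful period, the sync marks it PowerReportMissing -
# which, as discussed above, should not automatically mean power-off.
GRACEFUL_PERIOD_MS = 120_000  # illustrative: 2 minutes between expected reports

def classify_power_state(last_update_ms, now_ms, reported=None):
    """Return a VM power state given the last report timestamp."""
    if reported is not None:
        return reported  # a fresh report always wins
    if now_ms - last_update_ms > GRACEFUL_PERIOD_MS:
        return "PowerReportMissing"  # stale: report overdue
    return "Unknown"  # within the grace period: wait for the next report

# Matches the log above: last update 1442302506000, checked at 1442302626508,
# i.e. 120508 ms elapsed - just past the graceful period.
state = classify_power_state(1442302506000, 1442302626508)
```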


> Unexpected VR reboot after out-of-band migration
> 
>
> Key: CLOUDSTACK-8848
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8848
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.5.2, 4.6.0
>Reporter: René Moser
>Assignee: René Moser
>Priority: Blocker
> Fix For: 4.5.3, 4.6.0
>
>
> In some conditions (a race condition), the VR gets rebooted after an 
> out-of-band migration was done on vCenter. 
> {panel:bgColor=#CE}
> Note: the new global setting in 4.5.2, "VR reboot after out of band migration", 
> is set to *false*, so this looks more like a bug.
> {panel}
> After a VR migration to a host, _and_ while the VM power state report gathering 
> is running, the VR (and any user VM as well) will get into the 
> "PowerReportMissing" state.
> If the VM is a VR, it will be powered off and started again on vCenter. That 
> is what we see. It cannot be reproduced on every migration, but 
> the problem seems related to "PowerReportMissing".
> I grepped the source and found this related line:
> https://github.com/apache/cloudstack/blob/4.5.2/engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java#L3616
> and it seems the graceful period might also be involved:
> https://github.com/apache/cloudstack/blob/4.5.2/engine/orchestration/src/com/cloud/vm/VirtualMachinePowerStateSyncImpl.java#L110
> If it is a user VM, we see in the logs that its state is set to 
> powered-off, but the VM keeps running. After a while a new VM power state 
> report runs and the state for the user VM is updated to Running 
> again. Below are the logs.
> h2. VR  r-342-VM reboot log
> {code:none}
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) Run missing VM report. current time: 
> 1442302626508
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) Detected missing VM. host: 19, vm id: 
> 342, power state: PowerReportMissing, last state update: 1442302506000
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) vm id: 342 - time since last state 
> update(120508ms) has passed graceful period
> 2015-09-15 09:37:06,517 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) VM state report is updated. host: 19, 
> vm id: 342, power state: PowerReportMissing 
> 2015-09-15 09:37:06,525 INFO  [c.c.v.VirtualMachineManagerImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) VM r-342-VM is at Running and we 
> received a power-off report while there is no pending jobs on it
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.t.Request] 
> (DirectAgentCronJob-253:ctx-c4f59216) Seq 19-4511199451741686482: Sending  { 
> Cmd , MgmtId: 345051122106, via: 19(cu01-testpod01-esx03.stxt.media.int), 
> Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}}]
>  }
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.t.Request] 
> (DirectAgentCronJob-253:ctx-c4f59216) Seq 19-4511199451741686482: Executing:  
> { Cmd , MgmtId: 345051122106, via: 19(cu01-testpod01-esx03.stxt.media.int), 
> Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}}]
>  }
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-136:ctx-9bc0a401) Seq 19-4511199451741686482: Executing request
> 2015-09-15 09:37:06,532 INFO  [c.c.h.v.r.VmwareResource] 
> (DirectAgent-136:ctx-9bc0a401 cu01-testpod01-esx03.stxt.media.int, cmd: 
> StopCommand) Executing resource StopCommand: 
> {"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}
> 2015-09-15 09:37:06,551 DEBUG [c.c.h.v.m.HostMO] 
> (DirectAgent-136:ctx-9bc0a401 

[jira] [Commented] (CLOUDSTACK-8879) Depend on rados-java 0.2.0

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936670#comment-14936670
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8879:


Github user borisroman commented on the pull request:

https://github.com/apache/cloudstack/pull/889#issuecomment-144346818
  
Will do.


Best regards,

Boris Schrijver

TEL: +31633784542
MAIL: bo...@pcextreme.nl


> On September 30, 2015 at 12:11 PM Remi Bergsma wrote:
> 
> @wido maybe @borisroman can execute smoke/test_vm_life_cycle.py to do
> a quick check?





> Depend on rados-java 0.2.0
> --
>
> Key: CLOUDSTACK-8879
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8879
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.2
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
>Priority: Critical
> Fix For: 4.5.3, 4.6.0
>
>
> Need to depend on rados-java 0.2.0 due to a couple of crashes which have 
> occurred.
> Will need some new imports in LibvirtComputingResource, but no major code 
> changes.





[jira] [Commented] (CLOUDSTACK-8888) Xenserver 6.0.2 host stuck in disconnected state after upgrade to master

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936678#comment-14936678
 ] 

ASF GitHub Bot commented on CLOUDSTACK-:


Github user koushik-das commented on the pull request:

https://github.com/apache/cloudstack/pull/861#issuecomment-144347634
  
I have updated #883 with my comments; looking at the changes, it shouldn't 
be required. Let's wait for @SudharmaJain's response.


> Xenserver 6.0.2 host stuck in disconnected state after upgrade to master
> 
>
> Key: CLOUDSTACK-
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, XenServer
>Reporter: Harikrishna Patnala
>Assignee: Harikrishna Patnala
> Fix For: 4.6.0
>
>
> Hosts running XenServer 6.0.2 are stuck in the Disconnected state after 
> CloudStack was upgraded to master. I upgraded the XenServer host to v6.2, but 
> it still shows as disconnected.
> It seems the XenServer602Resource class was removed without handling the 
> existing XenServer 6.0.2 hosts.
> Found the exception below while reloading the resource.
> 2015-09-21 15:29:19,423 WARN  [c.c.r.DiscovererBase] (ClusteredAgentManager 
> Timer:ctx-d6747f5a) Unable to find class 
> com.cloud.hypervisor.xenserver.resource.XenServer602Resource
> java.lang.ClassNotFoundException: 
> com.cloud.hypervisor.xenserver.resource.XenServer602Resource
>   at 
> org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:50)
>   at 
> org.codehaus.plexus.classworlds.realm.ClassRealm.unsynchronizedLoadClass(ClassRealm.java:259)
>   at 
> org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:235)
>   at 
> org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:227)
>   at 
> org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:401)
>   at 
> org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:363)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:190)
>   at com.cloud.resource.DiscovererBase.getResource(DiscovererBase.java:89)
>   at 
> com.cloud.resource.DiscovererBase.reloadResource(DiscovererBase.java:150)
>   at 
> com.cloud.agent.manager.AgentManagerImpl.loadDirectlyConnectedHost(AgentManagerImpl.java:697)
>   at 
> com.cloud.agent.manager.ClusteredAgentManagerImpl.scanDirectAgentToLoad(ClusteredAgentManagerImpl.java:220)
>   at 
> com.cloud.agent.manager.ClusteredAgentManagerImpl.runDirectAgentScanTimerTask(ClusteredAgentManagerImpl.java:185)
>   at 
> com.cloud.agent.manager.ClusteredAgentManagerImpl.access$100(ClusteredAgentManagerImpl.java:99)
>   at 
> com.cloud.agent.manager.ClusteredAgentManagerImpl$DirectAgentScanTimerTask.runInContext(ClusteredAgentManagerImpl.java:236)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextTimerTask$1.runInContext(ManagedContextTimerTask.java:30)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextTimerTask.run(ManagedContextTimerTask.java:27)
>   at java.util.TimerThread.mainLoop(Timer.java:555)
>   at java.util.TimerThread.run(Timer.java:505)





[jira] [Commented] (CLOUDSTACK-8924) [Blocker] test duplicated in test_scale_vm.py

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936685#comment-14936685
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8924:


Github user koushik-das commented on the pull request:

https://github.com/apache/cloudstack/pull/900#issuecomment-144348817
  
LGTM. Since test_01_scale_vm can now be run on the simulator, there is no need 
for the other one.
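
The tag mechanism in question: Marvin tests are selected by nose attribute tags such as required_hardware (the `-a tags=...,required_hardware=false` filter in the test command above). A tiny self-contained sketch of how such selection works, using a hand-rolled stand-in for the decorator rather than the actual nose plugin:

```python
def attr(**kwargs):
    """Minimal stand-in for nose's @attr: stash attributes on the function."""
    def wrap(fn):
        for key, value in kwargs.items():
            setattr(fn, key, value)
        return fn
    return wrap

@attr(tags=["basic", "advanced"], required_hardware="false")
def test_01_scale_vm():
    pass

def select(tests, required_hardware):
    """Pick only the tests whose attribute matches the runner's -a filter."""
    return [t.__name__ for t in tests
            if getattr(t, "required_hardware", None) == required_hardware]

# With required_hardware="false", the single remaining test is selected by
# both the simulator BVT run and the hardware run.
print(select([test_01_scale_vm], "false"))  # ['test_01_scale_vm']
```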


> [Blocker] test duplicated in test_scale_vm.py
> -
>
> Key: CLOUDSTACK-8924
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8924
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Raja Pullela
>Priority: Blocker
> Fix For: 4.6.0
>
>
> This is a blocker because BVTs for XS and Simulator are failing.
> Simulator zone - it is failing because
> this is a genuine failure: the setup didn't have Dynamic Scaling 
> enabled in the global settings. Once it was enabled, the tests ran fine.
> XS basic/Adv zone - it is failing because the methods 
> test_01_scale_vm(self):
> test_02_scale_vm_without_hypervisor_specifics(self):
> are essentially the same except for their tags:
> the first one, test_01_scale_vm, had "required_hardware=true";
> the second, test_02_scale_vm_without_hypervisor_specifics, had 
> "required_hardware=false".
> Essentially we can get this test to run on both Simulator and XenServer by 
> changing the tag to "required_hardware=false", 
> and test_02_scale_vm_without_hypervisor_specifics can then be deleted.
> The reason for failure on the XS is due to the following - "Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering"
> Following are the logs:
> Test scale virtual machine ... === TestName: test_01_scale_vm | Status : 
> SUCCESS ===
> ok
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm) ... === TestName: 
> test_02_scale_vm_without_hypervisor_specifics | Status : EXCEPTION ===
> ERROR
> ==
> ERROR: test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm)
> --
> Traceback (most recent call last):
>   File "/root/cloudstack/test/integration/smoke/test_scale_vm.py", line 234, 
> in test_02_scale_vm_without_hypervisor_specifics
> self.apiclient.scaleVirtualMachine(cmd)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackAPI/cloudstackAPIClient.py",
>  line 797, in scaleVirtualMachine
> response = self.connection.marvinRequest(command, response_type=response, 
> method=method)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackConnection.py", line 
> 379, in marvinRequest
> raise e
> Exception: Job failed: {jobprocstatus : 0, created : 
> u'2015-09-30T01:16:45+', cmd : 
> u'org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin', userid : 
> u'd46c0476-670a-11e5-8245-96e5a2a4ae9a', jobstatus : 2, jobid : 
> u'ad32dee5-da3c-42c3-bdc3-35928b47697f', jobresultcode : 530, jobresulttype : 
> u'object', jobresult : {errorcode : 431, errortext : u'Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering 
> (BigInstance)'}, accountid : u'd46bf47c-670a-11e5-8245-96e5a2a4ae9a'}
>  >> begin captured stdout << -
> === TestName: test_02_scale_vm_without_hypervisor_specifics | Status : 
> EXCEPTION ===
> - >> end captured stdout << --
>  >> begin captured logging << 
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: STARTED : 
> TC: test_02_scale_vm_without_hypervisor_specifics :::
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Payload: 
> {'isdynamicallyscalable': 'true', 'apiKey': 
> u'FI3p7aHiRMfWK_oV_T9_i8uY-YegVuIR3mvV7pS3w7s_2-krRV-GMGXoBoVm0454fiZt6FgwOH86gEPenLox0w',
>  'response': 'json', 'command': 'updateVirtualMachine', 'signature': 
> '4dANF6uDGtaOk6jIDb901ES+Oq8=', 'id': u'38c1ced0-693f-4e31-b976-9f4161ac57bb'}
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: Sending GET Cmd 
> : updateVirtualMachine===
> urllib3.connectionpool: INFO: Starting new HTTP connection (1): 10.220.135.73
> urllib3.connectionpool: DEBUG: "GET 
> 

[jira] [Commented] (CLOUDSTACK-8808) Successfully registered VHD template is downloaded again due to missing virtualsize property in template.properties

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936684#comment-14936684
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8808:


GitHub user karuturi opened a pull request:

https://github.com/apache/cloudstack/pull/901

CLOUDSTACK-8808: Successfully registered VHD template is downloaded again 
due to missing virtualsize property in template.properties

We have multiple file processors to process different types of image
formats. The processor interface has two methods, getVirtualSize() and
process().

1.  getVirtualSize(), as the name says, returns the virtual size of
the file and is used to get the size while copying files from NFS to S3.
2.  process() returns a FormatInfo struct which has fileType, size,
virtualSize and filename. On successfully downloading a template, each
file is passed to every processor's process(), and whichever returns a
FormatInfo is used to create the template.properties file. If
process() throws an InternalErrorException, template installation fails.
But if process() returns null, template registration succeeds with
template.properties missing attributes such as virtualSize and file
format, which results in this bug on restart of the SSVM/cloud
service/management server.

This change fails the template download if virtualSize or other properties
cannot be determined.

The following changes are done:

getVirtualSize() always returns a size (the virtual size if it can be
calculated, else the file size). This means the following changes:

1. QCOW2Processor.getVirtualSize() returns the file size if virtual
size calculation fails
2. VHDProcessor.getVirtualSize() returns the file size if virtual size
calculation fails

process() throws an InternalErrorException if virtual size calculation
fails or any other exception occurs. This means the following changes:

1. OVAProcessor throws InternalErrorException if untar fails
2. QCOW2Processor throws InternalErrorException if virtual size
calculation fails
3. VHDProcessor throws InternalErrorException if virtual size
calculation fails
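The fallback-and-fail-loudly contract described above can be sketched in Python (an illustrative model, not CloudStack's actual Java processors; the helper names and dict shape are assumptions):

```python
import os

class InternalErrorException(Exception):
    """Stand-in for CloudStack's InternalErrorException."""

def get_virtual_size(path, read_virtual_size):
    # Fall back to the raw file size when the virtual size cannot be
    # calculated (the QCOW2Processor/VHDProcessor behavior described above).
    try:
        return read_virtual_size(path)
    except Exception:
        return os.path.getsize(path)

def process(path, read_virtual_size):
    # Fail loudly instead of returning None, so a template with unknown
    # properties is never recorded as successfully installed.
    try:
        virtual_size = read_virtual_size(path)
    except Exception as exc:
        raise InternalErrorException(
            "unable to determine virtual size of %s" % path) from exc
    return {"filename": path,
            "size": os.path.getsize(path),
            "virtualSize": virtual_size}
```

The key difference from the buggy behavior: a failure in the size probe now surfaces as an exception rather than a silently incomplete template.properties.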

Testing:
Added unit tests for the changes in the file processors.
Manual test:
Setup: XenServer 6.5 host, CentOS 6.7 management server.
Template: disk created using the process specified by Andy at 
https://issues.apache.org/jira/browse/CLOUDSTACK-8808?focusedCommentId=14933368
Tried to register the template and it failed with an error; the template never 
moved to the Ready state.
![screen shot 2015-09-30 at 3 53 34 
pm](https://cloud.githubusercontent.com/assets/186833/10190608/76bcce92-678b-11e5-8f52-b449d149300b.png)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/karuturi/cloudstack CLOUDSTACK-8808

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/901.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #901


commit 1056171aca8816492c16c5bdf8f963745f968b5d
Author: Rajani Karuturi 
Date:   2015-09-29T16:25:23Z

CLOUDSTACK-8808: Successfully registered VHD template is downloaded
again due to missing virtualsize property in template.properties

We have multiple file processors to process different types of image
formats. The processor interface has two methods, getVirtualSize() and
process().

1. getVirtualSize(), as the name says, returns the virtual size of
the file and is used to get the size while copying files from NFS to S3.
2. process() returns a FormatInfo struct which has fileType, size,
virtualSize and filename. On successfully downloading a template, each
file is passed to every processor's process(), and whichever returns a
FormatInfo is used to create the template.properties file. If
process() throws an InternalErrorException, template installation fails.
But if process() returns null, template registration succeeds with
template.properties missing attributes such as virtualSize and file
format, which results in this bug on restart of the SSVM/cloud
service/management server.

This change fails the template download if virtualSize or other properties
cannot be determined.

The following changes are done:

getVirtualSize() always returns a size (the virtual size if it can be
calculated, else the file size). This means the following changes:

1. QCOW2Processor.getVirtualSize() returns the file size if virtual
size calculation fails
2. VHDProcessor.getVirtualSize() returns the file size if virtual size

[jira] [Commented] (CLOUDSTACK-8924) [Blocker] test duplicated in test_scale_vm.py

2015-09-30 Thread Raja Pullela (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936587#comment-14936587
 ] 

Raja Pullela commented on CLOUDSTACK-8924:
--

[~miguel.cd.ferreira] yes, you are right.  

I feel that "test_02_scale_vm_without_hypervisor_specifics" looks like a duplicate of 
"test_01_scale_vm".  
Just change the "test_01_scale_vm" attributes/tags from "required_hardware=true" 
to "false" and delete "test_02_scale_vm_without_hypervisor_specifics".

> [Blocker] test duplicated in test_scale_vm.py
> -
>
> Key: CLOUDSTACK-8924
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8924
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Raja Pullela
>Priority: Blocker
> Fix For: 4.6.0
>
>
> This is a blocker because BVTs for XS and Simulator are failing.
> Simulator zone - it is failing because the setup didn't have Dynamic Scaling 
> enabled in the global settings; this is a genuine failure. Once it was 
> enabled, the tests ran fine.
> XS basic/adv zone - it is failing because the methods 
> test_01_scale_vm(self):
> test_02_scale_vm_without_hypervisor_specifics(self):
> are essentially the same except for their tags -
> the first, test_01_scale_vm, has "required_hardware=true";
> the second, test_02_scale_vm_without_hypervisor_specifics, has 
> "required_hardware=false".
> Essentially we can get this test to run on both Simulator and XenServer by 
> setting "required_hardware=false", 
> and test_02_scale_vm_without_hypervisor_specifics can be deleted.
> The reason for the failure on XS is the following: "Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering"

[jira] [Commented] (CLOUDSTACK-8848) Unexpected VR reboot after out-of-band migration

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936602#comment-14936602
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8848:


Github user DaanHoogland commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/885#discussion_r40772428
  
--- Diff: engine/schema/src/com/cloud/vm/dao/VMInstanceDaoImpl.java ---
@@ -805,6 +805,12 @@ public Boolean doInTransaction(TransactionStatus status) {
         }
 
     @Override
+    public boolean isPowerStateUpToDate(final long instanceId) {
+        VMInstanceVO instance = findById(instanceId);
+        return instance.getPowerStateUpdateCount() < MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT;
--- End diff --

Are we sure instance is not null at this point?
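Rendered as a hedged Python sketch (the real code is Java; the threshold value and the missing-row policy here are assumptions, not CloudStack's), the null-safe variant the review question is asking for could look like:

```python
# Hypothetical threshold; the real constant lives in VMInstanceDaoImpl.
MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT = 3

def is_power_state_up_to_date(find_by_id, instance_id):
    instance = find_by_id(instance_id)
    # Guard against findById() returning null for a removed instance,
    # which is the NPE risk raised in the review comment.
    if instance is None:
        return False  # assumed policy: treat a missing row as stale
    return (instance["power_state_update_count"]
            < MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT)
```

Without the None check, a concurrently expunged VM would crash the sync path instead of simply being reported as out of date.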


> Unexpected VR reboot after out-of-band migration
> 
>
> Key: CLOUDSTACK-8848
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8848
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.5.2, 4.6.0
>Reporter: René Moser
>Assignee: René Moser
>Priority: Blocker
> Fix For: 4.5.3, 4.6.0
>
>
> In some conditions (race condition), the VR gets rebooted after an out-of-band 
> migration was done on vCenter. 
> {panel:bgColor=#CE}
> Note, the new global setting in 4.5.2 "VR reboot after out of band migration" is 
> set to *false*, so this looks more like a bug.
> {panel}
> After a VR migration to a host _and_ when the VM power state report gathering 
> is running, the VR (and also any user VM) will get into the 
> "PowerReportMissing" state.
> If the VM is a VR, it will be powered off and started again on vCenter. That 
> is what we see. It cannot be reproduced every time a migration is done, but 
> it seems the problem is related to "PowerReportMissing".
> I grep-ed the source and found this related line:
> https://github.com/apache/cloudstack/blob/4.5.2/engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java#L3616
> and it also seems that the graceful period might be related:
> https://github.com/apache/cloudstack/blob/4.5.2/engine/orchestration/src/com/cloud/vm/VirtualMachinePowerStateSyncImpl.java#L110
> In case it is a user VM, we see in the logs that the state is set to 
> powered-off, but the VM keeps running. After a while a new VM power state 
> report runs and the state for the user VM is updated to Running 
> again. Below are the logs.
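The grace-period check referenced above can be modeled in a few lines of Python (a simplification for illustration; the 120000 ms grace period is an assumption consistent with the "120508ms has passed graceful period" log line, not a value from the source):

```python
def power_report_status(now_ms, last_update_ms, grace_period_ms):
    # Simplified model of the graceful-period check in
    # VirtualMachinePowerStateSyncImpl: a VM whose power report is
    # missing is only acted on once the time since its last state
    # update exceeds the grace period.
    elapsed = now_ms - last_update_ms
    if elapsed > grace_period_ms:
        return "PowerReportMissing"  # triggers the stop/start seen in the logs
    return "WithinGracePeriod"
```

Plugging in the timestamps from the log below (now 1442302626508, last update 1442302506000) gives an elapsed time of 120508 ms, just past a 120 s grace period, which matches the observed race: a migration that lands near the window boundary flips the VR to PowerReportMissing.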
> h2. VR  r-342-VM reboot log
> {code:none}
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) Run missing VM report. current time: 
> 1442302626508
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) Detected missing VM. host: 19, vm id: 
> 342, power state: PowerReportMissing, last state update: 1442302506000
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) vm id: 342 - time since last state 
> update(120508ms) has passed graceful period
> 2015-09-15 09:37:06,517 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) VM state report is updated. host: 19, 
> vm id: 342, power state: PowerReportMissing 
> 2015-09-15 09:37:06,525 INFO  [c.c.v.VirtualMachineManagerImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) VM r-342-VM is at Running and we 
> received a power-off report while there is no pending jobs on it
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.t.Request] 
> (DirectAgentCronJob-253:ctx-c4f59216) Seq 19-4511199451741686482: Sending  { 
> Cmd , MgmtId: 345051122106, via: 19(cu01-testpod01-esx03.stxt.media.int), 
> Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}}]
>  }
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.t.Request] 
> (DirectAgentCronJob-253:ctx-c4f59216) Seq 19-4511199451741686482: Executing:  
> { Cmd , MgmtId: 345051122106, via: 19(cu01-testpod01-esx03.stxt.media.int), 
> Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}}]
>  }
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-136:ctx-9bc0a401) Seq 19-4511199451741686482: Executing request
> 2015-09-15 09:37:06,532 INFO  [c.c.h.v.r.VmwareResource] 
> (DirectAgent-136:ctx-9bc0a401 cu01-testpod01-esx03.stxt.media.int, cmd: 
> StopCommand) Executing resource StopCommand: 
> 

[jira] [Created] (CLOUDSTACK-8925) Default allow for Egress rules is not being configured properly in VR iptables rules

2015-09-30 Thread Pavan Kumar Bandarupally (JIRA)
Pavan Kumar Bandarupally created CLOUDSTACK-8925:


 Summary: Default allow for Egress rules is not being configured 
properly in VR iptables rules
 Key: CLOUDSTACK-8925
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8925
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Virtual Router
Affects Versions: 4.6.0
Reporter: Pavan Kumar Bandarupally
Priority: Critical
 Fix For: 4.6.0



When we create a network with Egress rules set to default allow, the rules 
created in the FW_OUTBOUND chain should include a reference to the FW_EGRESS_RULES 
chain, which has a rule to accept NEW packets from the guest instances. Without that 
reference, only the RELATED,ESTABLISHED rule remains in the FW_OUTBOUND chain, 
which results in new packets being dropped.
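A minimal Python model of the wiring described above (a deliberate simplification of the iptables semantics, not CloudStack code): a NEW outbound guest packet is only accepted if FW_OUTBOUND actually jumps to the egress-rules chain; otherwise it falls through to the FORWARD chain's DROP policy.

```python
def fate_of_new_outbound_packet(fw_outbound_jumps_to_egress,
                                egress_accepts_new=True):
    # FW_OUTBOUND always accepts RELATED,ESTABLISHED traffic; a NEW
    # packet needs the jump into the egress-rules chain to be accepted.
    if fw_outbound_jumps_to_egress and egress_accepts_new:
        return "ACCEPT"
    return "DROP"  # falls through to the FORWARD policy (DROP)
```

In the iptables listing below, FIREWALL_EGRESS_RULES shows "0 references", i.e. the jump is missing, so this model predicts exactly the reported behavior: established flows survive, new outbound connections are dropped.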


Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target         prot opt in     out    source       destination
   44  2832 NETWORK_STATS  all  --  *      *      0.0.0.0/0    0.0.0.0/0
    0     0 ACCEPT         all  --  eth0   eth1   0.0.0.0/0    0.0.0.0/0    state RELATED,ESTABLISHED
    0     0 ACCEPT         all  --  eth0   eth0   0.0.0.0/0    0.0.0.0/0    state NEW
    4   336 ACCEPT         all  --  eth2   eth0   0.0.0.0/0    0.0.0.0/0    state RELATED,ESTABLISHED
    0     0 ACCEPT         all  --  eth0   eth0   0.0.0.0/0    0.0.0.0/0    state RELATED,ESTABLISHED
   40  2496 FW_OUTBOUND    all  --  eth0   eth2   0.0.0.0/0    0.0.0.0/0

Chain OUTPUT (policy ACCEPT 20 packets, 1888 bytes)
 pkts bytes target         prot opt in     out    source       destination
 2498  369K NETWORK_STATS  all  --  *      *      0.0.0.0/0    0.0.0.0/0

Chain FIREWALL_EGRESS_RULES (0 references)
 pkts bytes target         prot opt in     out    source       destination

Chain FW_OUTBOUND (1 references)
 pkts bytes target         prot opt in     out    source       destination
    3   252 ACCEPT         all  --  *      *      0.0.0.0/0    0.0.0.0/0    state RELATED,ESTABLISHED





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8894) Dynamic scaling is not restricted when destination offering has changes in the vGPU type

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936644#comment-14936644
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8894:


Github user anshul1886 commented on the pull request:

https://github.com/apache/cloudstack/pull/868#issuecomment-144341943
  
@runseb It seems like the simulator doesn't have support for vGPU. If I find 
some time, I will try to add support for it.


> Dynamic scaling is not restricted when destination offering has changes in 
> the vGPU type
> 
>
> Key: CLOUDSTACK-8894
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8894
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
>
> Steps:
> 1. Install and configure XenServer 6.5 with vGPU enabled. Enable dynamic 
> scaling.
> 2. Deploy a VM using a K160Q-type Windows 7 template with PV tools installed and 
> dynamic scaling enabled.
> 3. Try dynamic scaling with an offering which has K180Q defined.
> Observation: 
> 1. Currently vGPU resource dynamic scaling is not supported, but CloudStack 
> returns success and updates the VM details with the new offering details, 
> including the new vGPU type. 
> 2. But on XenServer there is no change to the vGPU type; it remains the 
> old vGPU type. This is not correct.
> Expected Result:
> Dynamic scaling should be restricted when the source/destination offering has a 
> vGPU type on a vGPU-enabled VM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8924) [Blocker] test duplicated in test_scale_vm.py

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936665#comment-14936665
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8924:


Github user pvr9711 commented on the pull request:

https://github.com/apache/cloudstack/pull/900#issuecomment-144346102
  
LGTM!  Thanks Sanjeev!


> [Blocker] test duplicated in test_scale_vm.py
> -
>
> Key: CLOUDSTACK-8924
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8924
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Raja Pullela
>Priority: Blocker
> Fix For: 4.6.0
>
>

[jira] [Commented] (CLOUDSTACK-8656) fill empty catch blocks with info messages

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936721#comment-14936721
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8656:


Github user DaanHoogland commented on the pull request:

https://github.com/apache/cloudstack/pull/850#issuecomment-144363576
  
@borisroman the expected attribute is not there, and as I understand it, with 
reason. The exceptions are part of the contract and need to be handled by the 
client. The test needs to verify the result regardless. @koushik-das did I 
formulate this correctly?


> fill empty catch blocks with info messages
> --
>
> Key: CLOUDSTACK-8656
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8656
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Daan Hoogland
>Assignee: Daan Hoogland
> Fix For: 4.6.0
>
>
> operators and other troubleshooters need to know when unhandled exceptions 
> happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8847) ListServiceOfferings is returning incompatible tagged offerings when called with VM id

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936597#comment-14936597
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8847:


Github user nitin-maharana commented on the pull request:

https://github.com/apache/cloudstack/pull/823#issuecomment-144333001
  
Hi @remibergsma,
I rebased my commit against the current master. 
I added unit test for the change.

Thanks,
Nitin


> ListServiceOfferings is returning incompatible tagged offerings when called 
> with VM id
> --
>
> Key: CLOUDSTACK-8847
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8847
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nitin Kumar Maharana
>
> When calling listServiceOfferings with a VM id as a parameter, it returns 
> incompatible tagged offerings. It should only list compatible tagged 
> offerings: the new service offering should contain all the tags of the 
> existing service offering, and only then should it appear in the result.
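The compatibility rule quoted above, that the new offering must carry every tag of the current one, reduces to a subset check. A hedged Python sketch (function and offering names are illustrative, not the CloudStack API):

```python
def is_compatible_offering(current_tags, candidate_tags):
    # A candidate service offering is compatible only when its tag set
    # is a superset of the current offering's tags (it may add more).
    return set(current_tags) <= set(candidate_tags)

def list_compatible_offerings(current_tags, offerings):
    # offerings: mapping of offering name -> list of storage tags
    return [name for name, tags in offerings.items()
            if is_compatible_offering(current_tags, tags)]
```

Under this rule, a VM on an offering tagged "ssd" could scale to an offering tagged "ssd,ha" but not to one tagged only "hdd", which is the filtering the bug report says listServiceOfferings currently fails to apply.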



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8923) Create storage network IP range failed, Unknown parameters : zoneid

2015-09-30 Thread Remi Bergsma (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936658#comment-14936658
 ] 

Remi Bergsma commented on CLOUDSTACK-8923:
--

Thanks [~nuxro], will look into it soon. Can you also post the CloudMonkey 
command please?

> Create storage network IP range failed, Unknown parameters : zoneid
> ---
>
> Key: CLOUDSTACK-8923
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8923
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage
>Affects Versions: 4.6.0
> Environment: CentOS 6 HVs and MGMT
>Reporter: Nux
>Priority: Blocker
>
> I am installing ACS from today's master (3ded3e9 
> http://tmp.nux.ro/acs460snap/ ). 
> Adding an initial zone via the web UI wizard fails at the secondary storage 
> setup stage:
> 2015-09-29 14:07:40,319 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27) Add job-27 into job monitoring
> 2015-09-29 14:07:40,322 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-5:ctx-314bbaae ctx-2db63923) ===END===  85.13.192.198 -- GET  
> command=createStorageNetworkIpRange&response=json&gateway=192.168.200.67&netmask=255.255.255.0&vlan=123&startip=192.168.200.200&endip=192.168.200.222&zoneid=2f0efdcf-adf6-4373-858e-87de6af4cc08&podid=eb7814d2-9a22-4ca4-93af-4a6b8abac67c&_=1443532060283
> 2015-09-29 14:07:40,327 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27) Executing AsyncJobVO {id:27, 
> userId: 2, accountId: 2, instanceType: None, instanceId: null, cmd: 
> org.apache.cloudstack.api.command.admin.network.CreateStorageNetworkIpRangeCmd,
>  cmdInfo: {"response":"json","ctxDetails":"{\"interface 
> com.cloud.dc.Pod\":\"eb7814d2-9a22-4ca4-93af-4a6b8abac67c\"}","cmdEventType":"STORAGE.IP.RANGE.CREATE","ctxUserId":"2","gateway":"192.168.200.67","podid":"eb7814d2-9a22-4ca4-93af-4a6b8abac67c","zoneid":"2f0efdcf-adf6-4373-858e-87de6af4cc08","startip":"192.168.200.200","vlan":"123","httpmethod":"GET","_":"1443532060283","ctxAccountId":"2","ctxStartEventId":"68","netmask":"255.255.255.0","endip":"192.168.200.222"},
>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
> null, initMsid: 266785867798693, completeMsid: null, lastUpdated: null, 
> lastPolled: null, created: null}
> 2015-09-29 14:07:40,330 WARN  [c.c.a.d.ParamGenericValidationWorker] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27 ctx-1fa03c4a) Received unknown 
> parameters for command createStorageNetworkIpRange. Unknown parameters : 
> zoneid
> 2015-09-29 14:07:40,391 WARN  [o.a.c.a.c.a.n.CreateStorageNetworkIpRangeCmd] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27 ctx-1fa03c4a) Create storage network 
> IP range failed
> com.cloud.utils.exception.CloudRuntimeException: Unable to commit or close 
> the connection. 
>   at 
> com.cloud.utils.db.TransactionLegacy.commit(TransactionLegacy.java:730)
>   at com.cloud.utils.db.Transaction.execute(Transaction.java:46)
>   at 
> com.cloud.network.StorageNetworkManagerImpl.createIpRange(StorageNetworkManagerImpl.java:229)
>   at 
> org.apache.cloudstack.api.command.admin.network.CreateStorageNetworkIpRangeCmd.execute(CreateStorageNetworkIpRangeCmd.java:118)
>   at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:150)
>   at 
> com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
>   at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>   at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: Connection is closed.
>   at 
> 

[jira] [Commented] (CLOUDSTACK-8879) Depend on rados-java 0.2.0

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936667#comment-14936667
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8879:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/889#issuecomment-144346492
  
@wido maybe @borisroman can execute smoke/test_vm_life_cycle.py to do a 
quick check? 


> Depend on rados-java 0.2.0
> --
>
> Key: CLOUDSTACK-8879
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8879
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.2
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
>Priority: Critical
> Fix For: 4.5.3, 4.6.0
>
>
> Need to depend on rados-java 0.2.0 due to a couple of crashes which have 
> occured.
> Will need some new imports in LibvirtComputingResource, but no major code 
> changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8923) Create storage network IP range failed, Unknown parameters : zoneid

2015-09-30 Thread Nux (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936681#comment-14936681
 ] 

Nux commented on CLOUDSTACK-8923:
-

"create storagenetworkiprange gateway=192.168.200.67 netmask=255.255.255.0 
vlan=123 startip=192.168.200.200 endip=192.168.200.222 
podid=6fecf11a-964d-4f3c-aebf-3789219851eb"

I'll erase the zone and try again today.

> Create storage network IP range failed, Unknown parameters : zoneid
> ---
>
> Key: CLOUDSTACK-8923
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8923
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage
>Affects Versions: 4.6.0
> Environment: CentOS 6 HVs and MGMT
>Reporter: Nux
>Priority: Blocker
>
> I am installing ACS from today's master (3ded3e9 
> http://tmp.nux.ro/acs460snap/ ). 
> Adding an initial zone via the web UI wizard fails at the secondary storage 
> setup stage:
> 2015-09-29 14:07:40,319 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27) Add job-27 into job monitoring
> 2015-09-29 14:07:40,322 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-5:ctx-314bbaae ctx-2db63923) ===END===  85.13.192.198 -- GET  
> command=createStorageNetworkIpRange=json=192.168.200.67=255.255.255.0=123=192.168.200.200=192.168.200.222=2f0efdcf-adf6-4373-858e-87de6af4cc08=eb7814d2-9a22-4ca4-93af-4a6b8abac67c&_=1443532060283
> 2015-09-29 14:07:40,327 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27) Executing AsyncJobVO {id:27, 
> userId: 2, accountId: 2, instanceType: None, instanceId: null, cmd: 
> org.apache.cloudstack.api.command.admin.network.CreateStorageNetworkIpRangeCmd,
>  cmdInfo: {"response":"json","ctxDetails":"{\"interface 
> com.cloud.dc.Pod\":\"eb7814d2-9a22-4ca4-93af-4a6b8abac67c\"}","cmdEventType":"STORAGE.IP.RANGE.CREATE","ctxUserId":"2","gateway":"192.168.200.67","podid":"eb7814d2-9a22-4ca4-93af-4a6b8abac67c","zoneid":"2f0efdcf-adf6-4373-858e-87de6af4cc08","startip":"192.168.200.200","vlan":"123","httpmethod":"GET","_":"1443532060283","ctxAccountId":"2","ctxStartEventId":"68","netmask":"255.255.255.0","endip":"192.168.200.222"},
>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
> null, initMsid: 266785867798693, completeMsid: null, lastUpdated: null, 
> lastPolled: null, created: null}
> 2015-09-29 14:07:40,330 WARN  [c.c.a.d.ParamGenericValidationWorker] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27 ctx-1fa03c4a) Received unknown 
> parameters for command createStorageNetworkIpRange. Unknown parameters : 
> zoneid
> 2015-09-29 14:07:40,391 WARN  [o.a.c.a.c.a.n.CreateStorageNetworkIpRangeCmd] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27 ctx-1fa03c4a) Create storage network 
> IP range failed
> com.cloud.utils.exception.CloudRuntimeException: Unable to commit or close 
> the connection. 
>   at 
> com.cloud.utils.db.TransactionLegacy.commit(TransactionLegacy.java:730)
>   at com.cloud.utils.db.Transaction.execute(Transaction.java:46)
>   at 
> com.cloud.network.StorageNetworkManagerImpl.createIpRange(StorageNetworkManagerImpl.java:229)
>   at 
> org.apache.cloudstack.api.command.admin.network.CreateStorageNetworkIpRangeCmd.execute(CreateStorageNetworkIpRangeCmd.java:118)
>   at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:150)
>   at 
> com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
>   at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>   at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: Connection is closed.
>
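
The "Unknown parameters : zoneid" warning in the log above comes from generic API parameter validation: the supplied request parameters are compared against the parameters the command class actually declares, and anything extra is reported. A minimal sketch of that check (function and variable names are illustrative, not CloudStack's; the real logic lives in ParamGenericValidationWorker):

```python
# Sketch of generic API parameter validation: warn about any supplied
# parameter that the command does not declare. Names are illustrative,
# not CloudStack's actual implementation.

def find_unknown_params(supplied, declared, internal=("response", "httpmethod", "_")):
    """Return supplied parameter names that are neither declared nor internal."""
    allowed = set(declared) | set(internal)
    return sorted(p for p in supplied if p not in allowed)

# createStorageNetworkIpRange declares podid but not zoneid, so a request
# carrying zoneid (as the UI wizard sent above) triggers the warning.
declared = ["podid", "gateway", "netmask", "startip", "endip", "vlan"]
supplied = ["podid", "gateway", "netmask", "startip", "endip", "vlan",
            "zoneid", "response", "_"]

unknown = find_unknown_params(supplied, declared)
for name in unknown:
    print("Received unknown parameters for command "
          "createStorageNetworkIpRange. Unknown parameters : " + name)
```

Note the warning itself is harmless noise; the actual failure in the report is the SQLException on commit that follows it.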

[jira] [Commented] (CLOUDSTACK-8848) Unexpected VR reboot after out-of-band migration

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936714#comment-14936714
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8848:


Github user DaanHoogland commented on the pull request:

https://github.com/apache/cloudstack/pull/885#issuecomment-144361517
  
ok, no further comment (please see the possible null pointer one)


> Unexpected VR reboot after out-of-band migration
> 
>
> Key: CLOUDSTACK-8848
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8848
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.5.2, 4.6.0
>Reporter: René Moser
>Assignee: René Moser
>Priority: Blocker
> Fix For: 4.5.3, 4.6.0
>
>
> In some conditions (a race condition), the VR gets rebooted after an 
> out-of-band migration was done on vCenter. 
> {panel:bgColor=#CE}
> Note, the new global setting in 4.5.2 "VR reboot after out of band migration" is 
> set to *false*, so this looks more like a bug.
> {panel}
> After a VR migration to a host _and_ when the VM power state report gathering 
> is running, the VR (and also any user VM) will get into the 
> "PowerReportMissing" state.
> If the VM is a VR, it will be powered off and started again on vCenter. That 
> is what we see. It cannot be reproduced every time a migration is done, but 
> it seems the problem is related to "PowerReportMissing".
> I grepped the source and found this related line:
> https://github.com/apache/cloudstack/blob/4.5.2/engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java#L3616
> and it seems the graceful period might also be related:
> https://github.com/apache/cloudstack/blob/4.5.2/engine/orchestration/src/com/cloud/vm/VirtualMachinePowerStateSyncImpl.java#L110
> In case it is a user VM, we see in the logs that the state will be set to 
> powered-off, but the VM keeps running. After a while a new VM power state 
> report runs and the state for the user VM gets updated to Running 
> again. Below are the logs.
> h2. VR  r-342-VM reboot log
> {code:none}
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) Run missing VM report. current time: 
> 1442302626508
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) Detected missing VM. host: 19, vm id: 
> 342, power state: PowerReportMissing, last state update: 1442302506000
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) vm id: 342 - time since last state 
> update(120508ms) has passed graceful period
> 2015-09-15 09:37:06,517 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) VM state report is updated. host: 19, 
> vm id: 342, power state: PowerReportMissing 
> 2015-09-15 09:37:06,525 INFO  [c.c.v.VirtualMachineManagerImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) VM r-342-VM is at Running and we 
> received a power-off report while there is no pending jobs on it
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.t.Request] 
> (DirectAgentCronJob-253:ctx-c4f59216) Seq 19-4511199451741686482: Sending  { 
> Cmd , MgmtId: 345051122106, via: 19(cu01-testpod01-esx03.stxt.media.int), 
> Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}}]
>  }
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.t.Request] 
> (DirectAgentCronJob-253:ctx-c4f59216) Seq 19-4511199451741686482: Executing:  
> { Cmd , MgmtId: 345051122106, via: 19(cu01-testpod01-esx03.stxt.media.int), 
> Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}}]
>  }
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-136:ctx-9bc0a401) Seq 19-4511199451741686482: Executing request
> 2015-09-15 09:37:06,532 INFO  [c.c.h.v.r.VmwareResource] 
> (DirectAgent-136:ctx-9bc0a401 cu01-testpod01-esx03.stxt.media.int, cmd: 
> StopCommand) Executing resource StopCommand: 
> {"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}
> 2015-09-15 09:37:06,551 DEBUG [c.c.h.v.m.HostMO] 
> (DirectAgent-136:ctx-9bc0a401 cu01-testpod01-esx03.stxt.media.int, cmd: 
> StopCommand) find VM r-342-VM on host
> 2015-09-15 09:37:06,551 INFO  [c.c.h.v.m.HostMO] 
> (DirectAgent-136:ctx-9bc0a401 cu01-testpod01-esx03.stxt.media.int, cmd: 
> StopCommand) VM r-342-VM not found in host cache
> 2015-09-15 09:37:06,551 DEBUG [c.c.h.v.m.HostMO] 
> (DirectAgent-136:ctx-9bc0a401 
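
The graceful-period decision the reporter points at can be sketched as follows. This is illustrative only (the actual logic is in VirtualMachinePowerStateSyncImpl and VirtualMachineManagerImpl, linked above); the 120-second period is an assumption that matches the "120508ms ... has passed graceful period" line in the log:

```python
# Illustrative sketch of the power-state sync decision: a VM missing
# from the host's power report is only acted upon once the time since
# its last state update exceeds a graceful period. Within the period,
# the sync waits; past it, the missing report is treated like a
# power-off report, which is what stops the VR.

GRACEFUL_PERIOD_MS = 120_000  # assumed value, consistent with the log above

def power_report_action(last_update_ms, now_ms, graceful_ms=GRACEFUL_PERIOD_MS):
    """Return the action for a VM missing from the host's power report."""
    elapsed = now_ms - last_update_ms
    if elapsed <= graceful_ms:
        return "wait"          # still within the graceful period
    return "mark-missing"      # treated as power-off -> StopCommand is sent

# Values from the log: last state update 1442302506000, report run at
# 1442302626508, i.e. 120508 ms later.
print(power_report_action(1442302506000, 1442302626508))  # mark-missing
```

This illustrates the race: an out-of-band migration can delay state updates just past the graceful period, so a healthy VR is judged missing and stopped.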

[jira] [Commented] (CLOUDSTACK-8913) Search box in Templates tab out of alignment

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936715#comment-14936715
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8913:


Github user runseb commented on the pull request:

https://github.com/apache/cloudstack/pull/891#issuecomment-144361701
  
@nitin-maharana thanks for this. 
LGTM +1 based on code review and look at snapshot.
Somehow might want to compile and test fresh.


> Search box in Templates tab out of alignment
> 
>
> Key: CLOUDSTACK-8913
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8913
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.5.1
>Reporter: Nitin Kumar Maharana
>
> CURRENT BEHAVIOUR
> 
> Search box in Templates tab is not aligned with other buttons in Firefox, 
> Chrome, and Safari.
> EXPECTED BEHAVIOUR
> 
> Search box in Templates tab should be aligned with other buttons in all 
> browsers.





[jira] [Commented] (CLOUDSTACK-8895) Verify if storage can be selected when attaching uploaded data volume to VM

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936583#comment-14936583
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8895:


Github user runseb commented on the pull request:

https://github.com/apache/cloudstack/pull/869#issuecomment-144331549
  
@pritisarap12 can this be run via simulator ?


> Verify if storage can be selected when attaching uploaded data volume to VM
> ---
>
> Key: CLOUDSTACK-8895
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8895
> Project: CloudStack
>  Issue Type: Test
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation
>Affects Versions: 4.2.1
>Reporter: Priti Sarap
> Fix For: 4.2.1
>
>
> Test case to verify that a data volume uploaded to a storage pool is available 
> for attachment to a virtual machine, and also to check that after attachment the 
> volume is in the correct storage pool.





[jira] [Commented] (CLOUDSTACK-8897) baremetal:addHost:make host tag info mandtory in baremetal addhost Api call

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936591#comment-14936591
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8897:


Github user runseb commented on the pull request:

https://github.com/apache/cloudstack/pull/874#issuecomment-144332355
  
We won't be able to test this on simulator.
@harikrishna-patnala can you answer @borisroman and then we can merge.
LGTM +1 on code review alone


> baremetal:addHost:make host tag info mandtory in baremetal addhost Api call
> ---
>
> Key: CLOUDSTACK-8897
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8897
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Baremetal, Management Server
>Reporter: Harikrishna Patnala
>Assignee: Harikrishna Patnala
> Fix For: 4.6.0
>
>
> Right now in baremetal, the addHost API succeeds without the host 
> tag info, although we recommend that the host tag be mandatory for bare metal.
> In the current implementation the host tag check happens at VM deployment 
> stage, but it would be good to make the host tag a mandatory field when 
> adding the host itself.





[jira] [Commented] (CLOUDSTACK-8913) Search box in Templates tab out of alignment

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936605#comment-14936605
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8913:


Github user runseb commented on the pull request:

https://github.com/apache/cloudstack/pull/891#issuecomment-144333775
  
@nitin-maharana LGTM +1 based on code review.
Maybe adding two snapshots to compare would help review (just a thought)


> Search box in Templates tab out of alignment
> 
>
> Key: CLOUDSTACK-8913
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8913
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.5.1
>Reporter: Nitin Kumar Maharana
>
> CURRENT BEHAVIOUR
> 
> Search box in Templates tab is not aligned with other buttons in Firefox, 
> Chrome, and Safari.
> EXPECTED BEHAVIOUR
> 
> Search box in Templates tab should be aligned with other buttons in all 
> browsers.





[jira] [Updated] (CLOUDSTACK-8808) Successfully registered VHD template is downloaded again due to missing virtualsize property in template.properties

2015-09-30 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8808:

Status: Reviewable  (was: In Progress)

> Successfully registered VHD template is downloaded again due to missing 
> virtualsize property in template.properties
> ---
>
> Key: CLOUDSTACK-8808
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8808
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage
>Affects Versions: 4.4.4, 4.6.0
> Environment: Seen on NFS as sec storage
>Reporter: Remi Bergsma
>Assignee: Rajani Karuturi
>Priority: Blocker
>
> We noticed all of our templates are downloaded again as soon as we restart 
> SSVM, its Cloud service or the management server it connects to.
> A scan done by the SSVM (listvmtmplt.sh) returns the template, but it is 
> rejected later (Post download installation was not completed) because (Format 
> is invalid) due to missing virtualSize property in template.properties.
> The initial registration did succeed, however. I'd want the registration 
> either to fail or to succeed, not first succeed (and spin up VMs 
> without a problem) and then fail unexpectedly later.
> This is the script processing the download:
> services/secondary-storage/server/src/org/apache/cloudstack/storage/template/DownloadManagerImpl.java
>  759 private List listTemplates(String rootdir) { 
> 
>  760 List result = new ArrayList();   
> 
>  761  
> 
>  762 Script script = new Script(listTmpltScr, s_logger);  
> 
>  763 script.add("-r", rootdir);   
> For example this becomes:
> ==> /usr/local/cloud/systemvm/scripts/storage/secondary/listvmtmplt.sh -r 
> /mnt/SecStorage/ee8633dd-5dbd-39a3-b3ea-801ca0a20da0
> In this log file, it processes the output:
> less /var/log/cloud/cloud.out
> 2015-09-04 08:39:54,622 WARN  [storage.template.DownloadManagerImpl] 
> (agentRequest-Handler-1:null) Post download installation was not completed 
> for /mnt/SecStorage/ee8633dd-5dbd-39a3-b3ea-801ca0a20da0/template/tmpl/2/1607
> This error message is generated here:
> services/secondary-storage/server/src/org/apache/cloudstack/storage/template/DownloadManagerImpl.java
>  
> 780 List publicTmplts = listTemplates(templateDir);   
>
>  781 for (String tmplt : publicTmplts) {  
> 
>  782 String path = tmplt.substring(0, 
> tmplt.lastIndexOf(File.separator)); 
>  783 TemplateLocation loc = new TemplateLocation(_storage, path); 
> 
>  784 try {
> 
>  785 if (!loc.load()) {   
> 
>  786 s_logger.warn("Post download installation was not 
> completed for " + path);
>  787 // loc.purge();  
> 
>  788 _storage.cleanup(path, templateDir); 
> 
>  789 continue;
> 
>  790 }
> 
>  791 } catch (IOException e) {
> 
>  792 s_logger.warn("Unable to load template location " + 
> path, e);
>  793 continue;
> 
>  794 } 
> In the logs this message is also seen:
> MCCP-ADMIN-1|s-32436-VM CLOUDSTACK: 10:09:17,333  WARN TemplateLocation:196 - 
> Format is invalid 
> It is generated here:
> .//core/src/com/cloud/storage/template/TemplateLocation.java
> 192public boolean addFormat(FormatInfo newInfo) { 
>   
> 193 deleteFormat(newInfo.format); 
>
> 194   
>
> 195 if (!checkFormatValidity(newInfo)) {  
>
> 196 s_logger.warn("Format is invalid ");  
>
> 197 return false; 
>
> 198 } 
>
> 199   
>
> 200 
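
The failure above boils down to a template.properties file that lacks the virtualsize key, so the validity check rejects the template on the next secondary-storage scan even though registration succeeded. A hedged sketch of that check (the key names match the report; the parsing and the sample values are illustrative, not CloudStack's TemplateLocation code):

```python
# Illustrative check mirroring the bug: a template whose
# template.properties lacks virtualsize is rejected as "Format is
# invalid" on the next scan, causing a needless re-download.

def parse_properties(text):
    """Parse simple key=value lines into a dict."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def format_is_valid(props):
    """A format entry needs both a physical and a virtual size to be valid."""
    return "size" in props and "virtualsize" in props

broken = parse_properties("vhd=true\nsize=2101252608\n")  # virtualsize missing
fixed = parse_properties("vhd=true\nsize=2101252608\nvirtualsize=21474836480\n")

print(format_is_valid(broken))  # False -> "Post download installation was not completed"
print(format_is_valid(fixed))   # True  -> template survives the scan
```

As the reporter argues, the fix is to make registration and the later scan agree: either both accept the template or registration fails up front.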

[jira] [Created] (CLOUDSTACK-8926) blacklisting particular IPv6 addresses is not possible

2015-09-30 Thread Stephan Seitz (JIRA)
Stephan Seitz created CLOUDSTACK-8926:
-

 Summary: blacklisting particular IPv6 addresses is not possible
 Key: CLOUDSTACK-8926
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8926
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: API, Network Controller
Affects Versions: 4.5.1
Reporter: Stephan Seitz
Priority: Minor
 Fix For: Future


When offering IPv6 in shared networks, there are use-cases to "blacklist" 
particular IPv6 addresses from being offered by the VirtualRouter's dhcp6.
It could be mitigated by narrowing the start/end addresses, IF the particular 
addresses are relatively close.
Anyway, a table or structure for "externally used" addresses not to be provided 
by the VR could really be handy.
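
The requested behaviour can be sketched with Python's ipaddress module: iterate the configured start/end range but skip any address found in an "externally used" blacklist. The blacklist structure is hypothetical, which is exactly what this issue asks CloudStack to add:

```python
import ipaddress

# Sketch of blacklist-aware IPv6 address offering: hand out addresses
# from the configured start/end range while skipping "externally used"
# ones. The blacklist table is hypothetical; today the only workaround
# is narrowing the start/end range itself.

def offer_addresses(start, end, blacklist):
    """Yield addresses in [start, end], skipping blacklisted ones."""
    lo = ipaddress.IPv6Address(start)
    hi = ipaddress.IPv6Address(end)
    banned = {ipaddress.IPv6Address(a) for a in blacklist}
    cur = lo
    while cur <= hi:
        if cur not in banned:
            yield str(cur)
        cur += 1

offered = list(offer_addresses("2001:db8::10", "2001:db8::14", ["2001:db8::12"]))
print(offered)  # the range minus the blacklisted address
```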





[jira] [Commented] (CLOUDSTACK-8848) Unexpected VR reboot after out-of-band migration

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936766#comment-14936766
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8848:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/885#issuecomment-144377723
  
@resmo Just a heads-up that I am testing this as we speak. Running the BVT 
tests against this branch to verify it all works. Once everything is done I'll 
post results. It takes a while for it all to run.


> Unexpected VR reboot after out-of-band migration
> 
>
> Key: CLOUDSTACK-8848
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8848
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.5.2, 4.6.0
>Reporter: René Moser
>Assignee: René Moser
>Priority: Blocker
> Fix For: 4.5.3, 4.6.0
>
>
> In some conditions (a race condition), the VR gets rebooted after an 
> out-of-band migration was done on vCenter. 
> {panel:bgColor=#CE}
> Note, the new global setting in 4.5.2 "VR reboot after out of band migration" is 
> set to *false*, so this looks more like a bug.
> {panel}
> After a VR migration to a host _and_ when the VM power state report gathering 
> is running, the VR (and also any user VM) will get into the 
> "PowerReportMissing" state.
> If the VM is a VR, it will be powered off and started again on vCenter. That 
> is what we see. It cannot be reproduced every time a migration is done, but 
> it seems the problem is related to "PowerReportMissing".
> I grepped the source and found this related line:
> https://github.com/apache/cloudstack/blob/4.5.2/engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java#L3616
> and it seems the graceful period might also be related:
> https://github.com/apache/cloudstack/blob/4.5.2/engine/orchestration/src/com/cloud/vm/VirtualMachinePowerStateSyncImpl.java#L110
> In case it is a user VM, we see in the logs that the state will be set to 
> powered-off, but the VM keeps running. After a while a new VM power state 
> report runs and the state for the user VM gets updated to Running 
> again. Below are the logs.
> h2. VR  r-342-VM reboot log
> {code:none}
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) Run missing VM report. current time: 
> 1442302626508
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) Detected missing VM. host: 19, vm id: 
> 342, power state: PowerReportMissing, last state update: 1442302506000
> 2015-09-15 09:37:06,508 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) vm id: 342 - time since last state 
> update(120508ms) has passed graceful period
> 2015-09-15 09:37:06,517 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) VM state report is updated. host: 19, 
> vm id: 342, power state: PowerReportMissing 
> 2015-09-15 09:37:06,525 INFO  [c.c.v.VirtualMachineManagerImpl] 
> (DirectAgentCronJob-253:ctx-c4f59216) VM r-342-VM is at Running and we 
> received a power-off report while there is no pending jobs on it
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.t.Request] 
> (DirectAgentCronJob-253:ctx-c4f59216) Seq 19-4511199451741686482: Sending  { 
> Cmd , MgmtId: 345051122106, via: 19(cu01-testpod01-esx03.stxt.media.int), 
> Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}}]
>  }
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.t.Request] 
> (DirectAgentCronJob-253:ctx-c4f59216) Seq 19-4511199451741686482: Executing:  
> { Cmd , MgmtId: 345051122106, via: 19(cu01-testpod01-esx03.stxt.media.int), 
> Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}}]
>  }
> 2015-09-15 09:37:06,532 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-136:ctx-9bc0a401) Seq 19-4511199451741686482: Executing request
> 2015-09-15 09:37:06,532 INFO  [c.c.h.v.r.VmwareResource] 
> (DirectAgent-136:ctx-9bc0a401 cu01-testpod01-esx03.stxt.media.int, cmd: 
> StopCommand) Executing resource StopCommand: 
> {"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":true,"vmName":"r-342-VM","wait":0}
> 2015-09-15 09:37:06,551 DEBUG [c.c.h.v.m.HostMO] 
> (DirectAgent-136:ctx-9bc0a401 cu01-testpod01-esx03.stxt.media.int, cmd: 
> StopCommand) find VM r-342-VM on host
> 2015-09-15 09:37:06,551 INFO  [c.c.h.v.m.HostMO] 
> (DirectAgent-136:ctx-9bc0a401 cu01-testpod01-esx03.stxt.media.int, cmd: 
> 

[jira] [Commented] (CLOUDSTACK-8656) fill empty catch blocks with info messages

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936787#comment-14936787
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8656:


Github user koushik-das commented on the pull request:

https://github.com/apache/cloudstack/pull/850#issuecomment-144384074
  
@DaanHoogland @borisroman The test needs to invoke a protected method from 
a class and so does it using reflection. The test case already asserts that the 
method is not null. Reflection methods throw some standard exceptions 
which either need to be handled or declared as thrown, per the Java method 
contract. Since the null check is already there, the exceptions can be safely 
ignored.


> fill empty catch blocks with info messages
> --
>
> Key: CLOUDSTACK-8656
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8656
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Daan Hoogland
>Assignee: Daan Hoogland
> Fix For: 4.6.0
>
>
> operators and other trouble shooters need to know if unhandled exceptions 
> happen.





[jira] [Updated] (CLOUDSTACK-8926) blacklisting particular IPv6 addresses is not possible

2015-09-30 Thread Stephan Seitz (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Seitz updated CLOUDSTACK-8926:
--
Affects Version/s: 4.5.2

> blacklisting particular IPv6 addresses is not possible
> --
>
> Key: CLOUDSTACK-8926
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8926
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API, Network Controller
>Affects Versions: 4.5.1, 4.5.2
>Reporter: Stephan Seitz
>Priority: Minor
> Fix For: Future
>
>
> When offering IPv6 in shared networks, there are use-cases to "blacklist" 
> particular IPv6 addresses from being offered by the VirtualRouter's dhcp6.
> It could be mitigated by narrowing the start/end addresses, IF the 
> particular addresses are relatively close.
> Anyway, a table or structure for "externally used" addresses not to be 
> provided by the VR could really be handy.





[jira] [Commented] (CLOUDSTACK-8924) [Blocker] test duplicated in test_scale_vm.py

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14936824#comment-14936824
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8924:


GitHub user sanju1010 opened a pull request:

https://github.com/apache/cloudstack/pull/902

CLOUDSTACK-8924: Enable dynamic scaling to run test_scale_vm.py test on 
simulator

Simulator setup uses the config file from following location:
tools/marvin/marvin/config/setup.cfg
Added the global setting parameter "enable.dynamic.scale.vm" to the above config 
file, so that dynamic scale VM tests can be run on the simulator.
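
Per the comment above, the addition to tools/marvin/marvin/config/setup.cfg is a global-setting entry. A sketch of what that fragment looks like (the surrounding structure of the config file is assumed; only the key name and file path come from the comment):

```json
{
    "globalConfig": [
        {
            "name": "enable.dynamic.scale.vm",
            "value": "true"
        }
    ]
}
```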

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sanju1010/cloudstack simulator

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/902.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #902


commit e81bf937da2f21d0b29b7fc16c126f76a0c85aff
Author: sanjeev 
Date:   2015-09-30T13:18:27Z

CLOUDSTACK-8924: Enable dynamic scaling to run test_scale_vm.py test on 
Simulator




> [Blocker] test duplicated in test_scale_vm.py
> -
>
> Key: CLOUDSTACK-8924
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8924
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Raja Pullela
>Priority: Blocker
> Fix For: 4.6.0
>
>
> This is a blocker because BVTs for XS and Simulator are failing.
> Simulator zone - it is failing because this is a genuine failure: the setup 
> didn't have Dynamic Scaling enabled as part of global settings. Once it is 
> enabled, the tests ran fine.
> XS basic/Adv zone - it is failing because the methods 
> test_01_scale_vm(self):
> test_02_scale_vm_without_hypervisor_specifics(self):
> are essentially the same, with the exception of tags -
> the first one - test_01_scale_vm - has "required_hardware=true";
> the second - test_02_scale_vm_without_hypervisor_specifics - has 
> "required_hardware=false".
> Essentially, we can get this test to run on both Simulator and XenServer by 
> changing it to "required_hardware=false", 
> and test_02_scale_vm_without_hypervisor_specifics can then be deleted.
> The reason for the failure on XS is the following - "Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering"
> Following are the logs:
> Test scale virtual machine ... === TestName: test_01_scale_vm | Status : 
> SUCCESS ===
> ok
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm) ... === TestName: 
> test_02_scale_vm_without_hypervisor_specifics | Status : EXCEPTION ===
> ERROR
> ==
> ERROR: test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm)
> --
> Traceback (most recent call last):
>   File "/root/cloudstack/test/integration/smoke/test_scale_vm.py", line 234, 
> in test_02_scale_vm_without_hypervisor_specifics
> self.apiclient.scaleVirtualMachine(cmd)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackAPI/cloudstackAPIClient.py",
>  line 797, in scaleVirtualMachine
> response = self.connection.marvinRequest(command, response_type=response, 
> method=method)
>   File 
> "/usr/local/lib/python2.7/dist-packages/marvin/cloudstackConnection.py", line 
> 379, in marvinRequest
> raise e
> Exception: Job failed: {jobprocstatus : 0, created : 
> u'2015-09-30T01:16:45+', cmd : 
> u'org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin', userid : 
> u'd46c0476-670a-11e5-8245-96e5a2a4ae9a', jobstatus : 2, jobid : 
> u'ad32dee5-da3c-42c3-bdc3-35928b47697f', jobresultcode : 530, jobresulttype : 
> u'object', jobresult : {errorcode : 431, errortext : u'Not upgrading vm 
> VM[User|i-23-28-VM] since it already has the requested service offering 
> (BigInstance)'}, accountid : u'd46bf47c-670a-11e5-8245-96e5a2a4ae9a'}
>  >> begin captured stdout << -
> === TestName: test_02_scale_vm_without_hypervisor_specifics | Status : 
> EXCEPTION ===
> - >> end captured stdout << --
>  >> begin captured logging << 
> test_02_scale_vm_without_hypervisor_specifics 
> (integration.smoke.test_scale_vm.TestScaleVm): DEBUG: STARTED : 
> TC: test_02_scale_vm_without_hypervisor_specifics :::
> test_02_scale_vm_without_hypervisor_specifics 
> 

[jira] [Updated] (CLOUDSTACK-8927) [VPC]Executing command in VR: /opt/cloud/bin/router_proxy.sh is failing whenever there is a configuration change in VR

2015-09-30 Thread manasaveloori (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

manasaveloori updated CLOUDSTACK-8927:
--
Attachment: management-server.site-site.gz
management-server.rar

> [VPC]Executing command in VR: /opt/cloud/bin/router_proxy.sh is failing 
> whenever there is a configuration change in VR
> --
>
> Key: CLOUDSTACK-8927
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8927
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Affects Versions: 4.6.0
>Reporter: manasaveloori
>Priority: Blocker
> Fix For: 4.6.0
>
> Attachments: management-server.rar, management-server.site-site.gz
>
>
> Whenever there is a configuration change in the VPC VR, we observe 
> connectivity issues with the VR.
> Case1:
> Created VPC and tier network with default allow.
> Now created a new ACL list and rules. Changed the ACL list for the tier 
> network.
> 2015-09-30 04:35:39,553 ERROR [c.c.u.s.SshHelper] 
> (DirectAgent-336:ctx-b9e5cdf1) SSH execution of command 
> /opt/cloud/bin/router_proxy.sh update_config.py 169.254.3.89 
> guest_network.json has an error status code in return. result output:
> 2015-09-30 04:35:39,554 DEBUG [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-336:ctx-b9e5cdf1) Processing ScriptConfigItem, executing 
> update_config.py guest_network.json took 21165ms
> 2015-09-30 04:35:39,554 WARN  [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-336:ctx-b9e5cdf1) Expected 1 answers while executing 
> SetupGuestNetworkCommand but received 2
> 2015-09-30 04:35:45,769 ERROR [c.c.v.VirtualMachineManagerImpl] 
> (Work-Job-Executor-94:ctx-56b18174 job-227/job-228 ctx-f92247d7) Failed to 
> start instance VM[DomainRouter|r-22-VM]
> com.cloud.utils.exception.ExecutionException: Unable to start 
> VM[DomainRouter|r-22-VM] due to error in finalizeStart, not retrying
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1083)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4576)
> at sun.reflect.GeneratedMethodAccessor382.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4732)
> at 
> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
> at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at org.apache.cloudstack.managed.context.impl.Def
> Case2:
> Reboot VR with remote access VPN enabled on VPC VR:
> Created a VPC, enabled VPN and rebooted the VR.
> ERROR in logs:
> 2015-09-30 04:46:18,663 ERROR [c.c.u.s.SshHelper] 
> (DirectAgent-46:ctx-3c355a22) SSH execution of command 
> /opt/cloud/bin/router_proxy.sh update_config.py 169.254.0.95 
> vpn_user_list.json has an error status code in return. result output:
> 2015-09-30 04:46:18,664 DEBUG [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-46:ctx-3c355a22) Processing ScriptConfigItem, executing 
> update_config.py vpn_user_list.json took 21168ms
> 2015-09-30 04:46:18,664 WARN  [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-46:ctx-3c355a22) Expected 1 answers while executing 
> VpnUsersCfgCommand but received 2
> 2015-09-30 04:46:24,821 ERROR [c.c.v.VirtualMachineManagerImpl] 
> (Work-Job-Executor-101:ctx-fecf4919 job-240/job-242 ctx-44fde71b) Failed to 
> start instance VM[DomainRouter|r-23-VM]
> com.cloud.utils.exception.ExecutionException: Unable to start 
> VM[DomainRouter|r-23-VM] due to error in finalizeStart, not retrying
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1083)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4576)
> at sun.reflect.GeneratedMethodAccessor382.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> 

[jira] [Created] (CLOUDSTACK-8927) [VPC]Executing command in VR: /opt/cloud/bin/router_proxy.sh is failing whenever there is a configuration change in VR

2015-09-30 Thread manasaveloori (JIRA)
manasaveloori created CLOUDSTACK-8927:
-

 Summary: [VPC]Executing command in VR: 
/opt/cloud/bin/router_proxy.sh is failing whenever there is a configuration 
change in VR
 Key: CLOUDSTACK-8927
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8927
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Network Controller
Affects Versions: 4.6.0
Reporter: manasaveloori
Priority: Blocker
 Fix For: 4.6.0
 Attachments: management-server.rar, management-server.site-site.gz

Whenever there is a configuration change in the VPC VR, we observe connectivity 
issues with the VR.

Case1:
Created VPC and tier network with default allow.
Now created a new ACL list and rules. Changed the ACL list for the tier network.


2015-09-30 04:35:39,553 ERROR [c.c.u.s.SshHelper] 
(DirectAgent-336:ctx-b9e5cdf1) SSH execution of command 
/opt/cloud/bin/router_proxy.sh update_config.py 169.254.3.89 guest_network.json 
has an error status code in return. result output:
2015-09-30 04:35:39,554 DEBUG [c.c.a.r.v.VirtualRoutingResource] 
(DirectAgent-336:ctx-b9e5cdf1) Processing ScriptConfigItem, executing 
update_config.py guest_network.json took 21165ms
2015-09-30 04:35:39,554 WARN  [c.c.a.r.v.VirtualRoutingResource] 
(DirectAgent-336:ctx-b9e5cdf1) Expected 1 answers while executing 
SetupGuestNetworkCommand but received 2


2015-09-30 04:35:45,769 ERROR [c.c.v.VirtualMachineManagerImpl] 
(Work-Job-Executor-94:ctx-56b18174 job-227/job-228 ctx-f92247d7) Failed to 
start instance VM[DomainRouter|r-22-VM]
com.cloud.utils.exception.ExecutionException: Unable to start 
VM[DomainRouter|r-22-VM] due to error in finalizeStart, not retrying
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1083)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4576)
at sun.reflect.GeneratedMethodAccessor382.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at 
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4732)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at org.apache.cloudstack.managed.context.impl.Def

Case2:

Reboot VR with remote access VPN enabled on VPC VR:

Created a VPC, enabled VPN and rebooted the VR.
ERROR in logs:

2015-09-30 04:46:18,663 ERROR [c.c.u.s.SshHelper] (DirectAgent-46:ctx-3c355a22) 
SSH execution of command /opt/cloud/bin/router_proxy.sh update_config.py 
169.254.0.95 vpn_user_list.json has an error status code in return. result 
output:
2015-09-30 04:46:18,664 DEBUG [c.c.a.r.v.VirtualRoutingResource] 
(DirectAgent-46:ctx-3c355a22) Processing ScriptConfigItem, executing 
update_config.py vpn_user_list.json took 21168ms
2015-09-30 04:46:18,664 WARN  [c.c.a.r.v.VirtualRoutingResource] 
(DirectAgent-46:ctx-3c355a22) Expected 1 answers while executing 
VpnUsersCfgCommand but received 2




2015-09-30 04:46:24,821 ERROR [c.c.v.VirtualMachineManagerImpl] 
(Work-Job-Executor-101:ctx-fecf4919 job-240/job-242 ctx-44fde71b) Failed to 
start instance VM[DomainRouter|r-23-VM]
com.cloud.utils.exception.ExecutionException: Unable to start 
VM[DomainRouter|r-23-VM] due to error in finalizeStart, not retrying
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1083)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4576)
at sun.reflect.GeneratedMethodAccessor382.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at com.cloud.vm.VirtualMach


Case3:
Created a site-to-site VPN.
When trying to enable VPN on VPC A, observed the following error:

2015-09-29 12:17:40,600 ERROR [c.c.u.s.SshHelper] 
(DirectAgent-392:ctx-d373204a) Timed out in waiting SSH execution result
2015-09-29 12:17:40,607 DEBUG [c.c.a.r.v.VirtualRoutingResource] 
(DirectAgent-392:ctx-d373204a) Processing ScriptConfigItem, executing 
update_config.py site_2_site_vpn.json took 

[jira] [Updated] (CLOUDSTACK-8927) [VPC]Executing command in VR: /opt/cloud/bin/router_proxy.sh is failing whenever there is a configuration change in VR

2015-09-30 Thread manasaveloori (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

manasaveloori updated CLOUDSTACK-8927:
--
Description: 
Whenever there is a configuration change in the VPC VR, we observe connectivity 
issues with the VR.

Case1:
Created VPC and tier network with default allow.
Now created a new ACL list and rules. Changed the ACL list for the tier network.


2015-09-30 04:35:39,553 ERROR [c.c.u.s.SshHelper] 
(DirectAgent-336:ctx-b9e5cdf1) SSH execution of command 
/opt/cloud/bin/router_proxy.sh update_config.py 169.254.3.89 guest_network.json 
has an error status code in return. result output:
2015-09-30 04:35:39,554 DEBUG [c.c.a.r.v.VirtualRoutingResource] 
(DirectAgent-336:ctx-b9e5cdf1) Processing ScriptConfigItem, executing 
update_config.py guest_network.json took 21165ms
2015-09-30 04:35:39,554 WARN  [c.c.a.r.v.VirtualRoutingResource] 
(DirectAgent-336:ctx-b9e5cdf1) Expected 1 answers while executing 
SetupGuestNetworkCommand but received 2


2015-09-30 04:35:45,769 ERROR [c.c.v.VirtualMachineManagerImpl] 
(Work-Job-Executor-94:ctx-56b18174 job-227/job-228 ctx-f92247d7) Failed to 
start instance VM[DomainRouter|r-22-VM]
com.cloud.utils.exception.ExecutionException: Unable to start 
VM[DomainRouter|r-22-VM] due to error in finalizeStart, not retrying
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1083)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4576)
at sun.reflect.GeneratedMethodAccessor382.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at 
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4732)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at org.apache.cloudstack.managed.context.impl.Def

Case2:

Reboot VR with remote access VPN enabled on VPC VR:

Created a VPC, enabled VPN and rebooted the VR.
ERROR in logs:

2015-09-30 04:46:18,663 ERROR [c.c.u.s.SshHelper] (DirectAgent-46:ctx-3c355a22) 
SSH execution of command /opt/cloud/bin/router_proxy.sh update_config.py 
169.254.0.95 vpn_user_list.json has an error status code in return. result 
output:
2015-09-30 04:46:18,664 DEBUG [c.c.a.r.v.VirtualRoutingResource] 
(DirectAgent-46:ctx-3c355a22) Processing ScriptConfigItem, executing 
update_config.py vpn_user_list.json took 21168ms
2015-09-30 04:46:18,664 WARN  [c.c.a.r.v.VirtualRoutingResource] 
(DirectAgent-46:ctx-3c355a22) Expected 1 answers while executing 
VpnUsersCfgCommand but received 2




2015-09-30 04:46:24,821 ERROR [c.c.v.VirtualMachineManagerImpl] 
(Work-Job-Executor-101:ctx-fecf4919 job-240/job-242 ctx-44fde71b) Failed to 
start instance VM[DomainRouter|r-23-VM]
com.cloud.utils.exception.ExecutionException: Unable to start 
VM[DomainRouter|r-23-VM] due to error in finalizeStart, not retrying
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1083)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4576)
at sun.reflect.GeneratedMethodAccessor382.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at com.cloud.vm.VirtualMach


Case3:
Created a site-to-site VPN.
Enabled VPN on VPC A; then enabling VPN on VPC B throws the following error:

2015-09-29 12:17:40,600 ERROR [c.c.u.s.SshHelper] 
(DirectAgent-392:ctx-d373204a) Timed out in waiting SSH execution result
2015-09-29 12:17:40,607 DEBUG [c.c.a.r.v.VirtualRoutingResource] 
(DirectAgent-392:ctx-d373204a) Processing ScriptConfigItem, executing 
update_config.py site_2_site_vpn.json took 120148ms
2015-09-29 12:17:40,607 WARN  [c.c.a.r.v.VirtualRoutingResource] 
(DirectAgent-392:ctx-d373204a) Expected 1 answers while executing 
Site2SiteVpnCfgCommand but received 2
2015-09-29 12:17:40,607 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgent-392:ctx-d373204a) Seq 4-50384020831211212: Response Received:
2015-09-29 12:17:40,608 DEBUG [c.c.a.t.Request] (DirectAgent-392:ctx-d373204a) 
Seq 4-50384020831211212: Processing:  { Ans: , 

[jira] [Commented] (CLOUDSTACK-8923) Create storage network IP range failed, Unknown parameters : zoneid

2015-09-30 Thread Nux (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14936931#comment-14936931
 ] 

Nux commented on CLOUDSTACK-8923:
-

Right, tried a couple more times today, both from the UI and cloudmonkey, and 
got the same thing. Everything can be added except this bloody IP range for 
secondary storage.
Same errors as above.

> Create storage network IP range failed, Unknown parameters : zoneid
> ---
>
> Key: CLOUDSTACK-8923
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8923
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage
>Affects Versions: 4.6.0
> Environment: CentOS 6 HVs and MGMT
>Reporter: Nux
>Priority: Blocker
>
> I am installing ACS from today's master (3ded3e9 
> http://tmp.nux.ro/acs460snap/ ). 
> Adding an initial zone via the web UI wizard fails at the secondary storage 
> setup stage:
> 2015-09-29 14:07:40,319 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27) Add job-27 into job monitoring
> 2015-09-29 14:07:40,322 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-5:ctx-314bbaae ctx-2db63923) ===END===  85.13.192.198 -- GET  
> command=createStorageNetworkIpRange&response=json&gateway=192.168.200.67&netmask=255.255.255.0&vlan=123&startip=192.168.200.200&endip=192.168.200.222&zoneid=2f0efdcf-adf6-4373-858e-87de6af4cc08&podid=eb7814d2-9a22-4ca4-93af-4a6b8abac67c&_=1443532060283
> 2015-09-29 14:07:40,327 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27) Executing AsyncJobVO {id:27, 
> userId: 2, accountId: 2, instanceType: None, instanceId: null, cmd: 
> org.apache.cloudstack.api.command.admin.network.CreateStorageNetworkIpRangeCmd,
>  cmdInfo: {"response":"json","ctxDetails":"{\"interface 
> com.cloud.dc.Pod\":\"eb7814d2-9a22-4ca4-93af-4a6b8abac67c\"}","cmdEventType":"STORAGE.IP.RANGE.CREATE","ctxUserId":"2","gateway":"192.168.200.67","podid":"eb7814d2-9a22-4ca4-93af-4a6b8abac67c","zoneid":"2f0efdcf-adf6-4373-858e-87de6af4cc08","startip":"192.168.200.200","vlan":"123","httpmethod":"GET","_":"1443532060283","ctxAccountId":"2","ctxStartEventId":"68","netmask":"255.255.255.0","endip":"192.168.200.222"},
>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
> null, initMsid: 266785867798693, completeMsid: null, lastUpdated: null, 
> lastPolled: null, created: null}
> 2015-09-29 14:07:40,330 WARN  [c.c.a.d.ParamGenericValidationWorker] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27 ctx-1fa03c4a) Received unknown 
> parameters for command createStorageNetworkIpRange. Unknown parameters : 
> zoneid
> 2015-09-29 14:07:40,391 WARN  [o.a.c.a.c.a.n.CreateStorageNetworkIpRangeCmd] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27 ctx-1fa03c4a) Create storage network 
> IP range failed
> com.cloud.utils.exception.CloudRuntimeException: Unable to commit or close 
> the connection. 
>   at 
> com.cloud.utils.db.TransactionLegacy.commit(TransactionLegacy.java:730)
>   at com.cloud.utils.db.Transaction.execute(Transaction.java:46)
>   at 
> com.cloud.network.StorageNetworkManagerImpl.createIpRange(StorageNetworkManagerImpl.java:229)
>   at 
> org.apache.cloudstack.api.command.admin.network.CreateStorageNetworkIpRangeCmd.execute(CreateStorageNetworkIpRangeCmd.java:118)
>   at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:150)
>   at 
> com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
>   at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>   at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: Connection is closed.
>   at 
> 

[jira] [Commented] (CLOUDSTACK-8923) Create storage network IP range failed, Unknown parameters : zoneid

2015-09-30 Thread Remi Bergsma (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14936942#comment-14936942
 ] 

Remi Bergsma commented on CLOUDSTACK-8923:
--

[~nuxro] Thanks for the feedback; I was now able to reproduce the issue, with 
or without the vlan specified.

WARN  [o.a.c.a.c.a.n.CreateStorageNetworkIpRangeCmd] 
(API-Job-Executor-41:ctx-6548cd93 job-719 ctx-8edc48f6) Create storage network 
IP range failed
com.cloud.utils.exception.CloudRuntimeException: Unable to commit or close the 
connection.
at 
com.cloud.utils.db.TransactionLegacy.commit(TransactionLegacy.java:730)
at com.cloud.utils.db.Transaction.execute(Transaction.java:46)
at 
com.cloud.network.StorageNetworkManagerImpl.createIpRange(StorageNetworkManagerImpl.java:229)
at 
org.apache.cloudstack.api.command.admin.network.CreateStorageNetworkIpRangeCmd.execute(CreateStorageNetworkIpRangeCmd.java:118)
at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:150)
at 
com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: Connection is closed.
at 
org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.checkOpen(PoolingDataSource.java:185)
at 
org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.commit(PoolingDataSource.java:210)
at 
com.cloud.utils.db.TransactionLegacy.commit(TransactionLegacy.java:722)
... 17 more
ERROR [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-41:ctx-6548cd93 
job-719) Unexpected exception
com.cloud.utils.exception.CloudRuntimeException: DB Exception on: null

It looks like an SQL problem:
...
Caused by: java.sql.SQLException: Connection is closed.
...

[~rajanik] Can you have a look please?
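For reference, the failing request can be rebuilt from the cmdInfo JSON in the log. Below is a minimal Python sketch of the (unsigned) query string; the parameter names are taken from the log, and a real CloudStack API call would also need apiKey and a signature. Note that, per the warning above, zoneid is not an accepted parameter of createStorageNetworkIpRange (the UI sends it anyway), so it is left out here:

```python
from urllib.parse import urlencode

def build_create_storage_ip_range_query(params):
    """Build the (unsigned) query string for createStorageNetworkIpRange.

    Parameter names come from the cmdInfo JSON in the log above; a real
    CloudStack API call additionally requires apiKey and signature.
    """
    query = {"command": "createStorageNetworkIpRange", "response": "json"}
    query.update(params)
    return urlencode(query)

# Values taken from the failing request in the log; zoneid deliberately
# omitted since the command rejects it as an unknown parameter.
qs = build_create_storage_ip_range_query({
    "gateway": "192.168.200.67",
    "netmask": "255.255.255.0",
    "vlan": "123",
    "startip": "192.168.200.200",
    "endip": "192.168.200.222",
    "podid": "eb7814d2-9a22-4ca4-93af-4a6b8abac67c",
})
print(qs)
```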

> Create storage network IP range failed, Unknown parameters : zoneid
> ---
>
> Key: CLOUDSTACK-8923
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8923
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage
>Affects Versions: 4.6.0
> Environment: CentOS 6 HVs and MGMT
>Reporter: Nux
>Priority: Blocker
>
> I am installing ACS from today's master (3ded3e9 
> http://tmp.nux.ro/acs460snap/ ). 
> Adding an initial zone via the web UI wizard fails at the secondary storage 
> setup stage:
> 2015-09-29 14:07:40,319 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27) Add job-27 into job monitoring
> 2015-09-29 14:07:40,322 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-5:ctx-314bbaae ctx-2db63923) ===END===  85.13.192.198 -- GET  
> command=createStorageNetworkIpRange&response=json&gateway=192.168.200.67&netmask=255.255.255.0&vlan=123&startip=192.168.200.200&endip=192.168.200.222&zoneid=2f0efdcf-adf6-4373-858e-87de6af4cc08&podid=eb7814d2-9a22-4ca4-93af-4a6b8abac67c&_=1443532060283
> 2015-09-29 14:07:40,327 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (API-Job-Executor-25:ctx-73c1ad88 job-27) Executing AsyncJobVO {id:27, 
> userId: 2, accountId: 2, instanceType: None, instanceId: null, cmd: 
> org.apache.cloudstack.api.command.admin.network.CreateStorageNetworkIpRangeCmd,
>  cmdInfo: {"response":"json","ctxDetails":"{\"interface 
> 

[jira] [Commented] (CLOUDSTACK-8848) Unexpected VR reboot after out-of-band migration

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14936895#comment-14936895
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8848:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/885#issuecomment-144430354
  
@resmo I tested a series of BVT tests (not all of them) and the result is 
fine.

First test run:
```
marvinCfg=/data/shared/marvin/mct-zone1-kvm1.cfg
nosetests --with-marvin --marvin-config=${marvinCfg} -s -a 
tags=advanced,required_hardware=true \
component/test_vpc_redundant.py \
component/test_routers_iptables_default_policy.py \
component/test_vpc_router_nics.py
```

Results:
```
Create a redundant vpc with two networks with two vms in each network ... 
=== TestName: test_01a_create_redundant_VPC | Status : FAILED ===
FAIL
```
This failure is expected: it is due to CLOUDSTACK-8915 and not an issue in 
this PR.

```
Test iptables default INPUT/FORWARD policy on RouterVM ... === TestName: 
test_02_routervm_iptables_policies | Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policies on VPC router ... === 
TestName: test_01_single_VPC_iptables_policies | Status : SUCCESS ===
ok
```

The second run:
```
nosetests --with-marvin --marvin-config=${marvinCfg} -s -a 
tags=advanced,required_hardware=false \
smoke/test_routers.py \
smoke/test_network_acl.py \
smoke/test_privategw_acl.py \
smoke/test_reset_vm_on_reboot.py \
smoke/test_vm_life_cycle.py \
smoke/test_vpc_vpn.py \
smoke/test_service_offerings.py \
component/test_vpc_offerings.py \
component/test_vpc_routers.py
```

Results:
```
Test router internal advanced zone ... === TestName: 
test_02_router_internal_adv | Status : SUCCESS ===
ok
Test restart network ... === TestName: test_03_restart_network_cleanup | 
Status : SUCCESS ===
ok
Test router basic setup ... === TestName: test_05_router_basic | Status : 
SUCCESS ===
ok
Test router advanced setup ... === TestName: test_06_router_advanced | 
Status : SUCCESS ===
ok
Test stop router ... === TestName: test_07_stop_router | Status : SUCCESS 
===
ok
Test start router ... === TestName: test_08_start_router | Status : SUCCESS 
===
ok
Test reboot router ... === TestName: test_09_reboot_router | Status : 
SUCCESS ===
ok
test_privategw_acl (integration.smoke.test_privategw_acl.TestPrivateGwACL) 
... === TestName: test_privategw_acl | Status : SUCCESS ===
ok
Test reset virtual machine on reboot ... === TestName: 
test_01_reset_vm_on_reboot | Status : SUCCESS ===
ok
Test advanced zone virtual router ... === TestName: 
test_advZoneVirtualRouter | Status : SUCCESS ===
ok
Test Deploy Virtual Machine ... === TestName: test_deploy_vm | Status : 
SUCCESS ===
ok
Test Multiple Deploy Virtual Machine ... === TestName: 
test_deploy_vm_multiple | Status : SUCCESS ===
ok
Test Stop Virtual Machine ... === TestName: test_01_stop_vm | Status : 
SUCCESS ===
ok
Test Start Virtual Machine ... === TestName: test_02_start_vm | Status : 
SUCCESS ===
ok
Test Reboot Virtual Machine ... === TestName: test_03_reboot_vm | Status : 
SUCCESS ===
ok
Test destroy Virtual Machine ... === TestName: test_06_destroy_vm | Status 
: SUCCESS ===
ok
Test recover Virtual Machine ... === TestName: test_07_restore_vm | Status 
: SUCCESS ===
ok
Test migrate VM ... === TestName: test_08_migrate_vm | Status : SUCCESS ===
ok
Test destroy(expunge) Virtual Machine ... === TestName: test_09_expunge_vm 
| Status : SUCCESS ===
ok
Test VPN in VPC ... === TestName: test_vpc_remote_access_vpn | Status : 
SUCCESS ===
ok
Test VPN in VPC ... === TestName: test_vpc_site2site_vpn | Status : SUCCESS 
===
ok
Test to create service offering ... === TestName: 
test_01_create_service_offering | Status : SUCCESS ===
ok
Test to update existing service offering ... === TestName: 
test_02_edit_service_offering | Status : SUCCESS ===
ok
Test to delete service offering ... === TestName: 
test_03_delete_service_offering | Status : SUCCESS ===
ok
Test create VPC offering ... === TestName: test_01_create_vpc_offering | 
Status : SUCCESS ===
ok
Test VPC offering without load balancing service ... === TestName: 
test_03_vpc_off_without_lb | Status : SUCCESS ===
ok
Test VPC offering without static NAT service ... === TestName: 
test_04_vpc_off_without_static_nat | Status : SUCCESS ===
ok
Test VPC offering without port forwarding service ... === TestName: 
test_05_vpc_off_without_pf | Status : SUCCESS ===
ok
Test VPC offering with invalid services ... === TestName: 
test_06_vpc_off_invalid_services | Status : 

[jira] [Commented] (CLOUDSTACK-8808) Successfully registered VHD template is downloaded again due to missing virtualsize property in template.properties

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14936957#comment-14936957
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8808:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/901#issuecomment-144439566
  
Thanks @karuturi for picking this up. Will have a look soon!


> Successfully registered VHD template is downloaded again due to missing 
> virtualsize property in template.properties
> ---
>
> Key: CLOUDSTACK-8808
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8808
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage
>Affects Versions: 4.4.4, 4.6.0
> Environment: Seen on NFS as sec storage
>Reporter: Remi Bergsma
>Assignee: Rajani Karuturi
>Priority: Blocker
>
> We noticed all of our templates are downloaded again as soon as we restart 
> SSVM, its Cloud service or the management server it connects to.
> A scan done by the SSVM (listvmtmplt.sh) returns the template, but it is 
> rejected later (Post download installation was not completed) because (Format 
> is invalid) due to missing virtualSize property in template.properties.
> The initial registration did succeed, however. I'd want the registration 
> either to fail or to succeed consistently: not succeed at first (and spin VMs 
> without a problem), then fail unexpectedly later.
> This is the script processing the download:
> services/secondary-storage/server/src/org/apache/cloudstack/storage/template/DownloadManagerImpl.java
>  759     private List<String> listTemplates(String rootdir) {
>  760         List<String> result = new ArrayList<String>();
>  761 
>  762         Script script = new Script(listTmpltScr, s_logger);
>  763         script.add("-r", rootdir);
> For example this becomes:
> ==> /usr/local/cloud/systemvm/scripts/storage/secondary/listvmtmplt.sh -r 
> /mnt/SecStorage/ee8633dd-5dbd-39a3-b3ea-801ca0a20da0
> In this log file, it processes the output:
> less /var/log/cloud/cloud.out
> 2015-09-04 08:39:54,622 WARN  [storage.template.DownloadManagerImpl] 
> (agentRequest-Handler-1:null) Post download installation was not completed 
> for /mnt/SecStorage/ee8633dd-5dbd-39a3-b3ea-801ca0a20da0/template/tmpl/2/1607
> This error message is generated here:
> services/secondary-storage/server/src/org/apache/cloudstack/storage/template/DownloadManagerImpl.java
>  
>  780         List<String> publicTmplts = listTemplates(templateDir);
>  781         for (String tmplt : publicTmplts) {
>  782             String path = tmplt.substring(0, tmplt.lastIndexOf(File.separator));
>  783             TemplateLocation loc = new TemplateLocation(_storage, path);
>  784             try {
>  785                 if (!loc.load()) {
>  786                     s_logger.warn("Post download installation was not completed for " + path);
>  787                     // loc.purge();
>  788                     _storage.cleanup(path, templateDir);
>  789                     continue;
>  790                 }
>  791             } catch (IOException e) {
>  792                 s_logger.warn("Unable to load template location " + path, e);
>  793                 continue;
>  794             }
> In the logs this message is also seen:
> MCCP-ADMIN-1|s-32436-VM CLOUDSTACK: 10:09:17,333  WARN TemplateLocation:196 - 
> Format is invalid 
> It is generated here:
> .//core/src/com/cloud/storage/template/TemplateLocation.java
> 192    public boolean addFormat(FormatInfo newInfo) {
> 193        deleteFormat(newInfo.format);
> 194
> 195        if (!checkFormatValidity(newInfo)) {
> 196            s_logger.warn("Format is invalid ");
> 197            return false;
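The failure described above boils down to template.properties lacking the virtualsize key: registration writes an incomplete file, and the later checkFormatValidity() pass rejects it with "Format is invalid". A hypothetical sketch of that kind of required-key check in Python; the key names and helper are illustrative only, not the actual CloudStack code:

```python
# Illustrative required keys; per the report, virtualsize is the one that
# goes missing in the broken template.properties files.
REQUIRED_KEYS = {"filename", "size", "virtualsize"}

def is_format_valid(props: dict) -> bool:
    """Reject a template.properties-style mapping that lacks any required
    key, mirroring the 'Format is invalid' warning in addFormat()."""
    return REQUIRED_KEYS.issubset(props)

# A properties file written without virtualsize passes registration but
# fails this check on the next SSVM scan:
props = {"filename": "1607.vhd", "size": "2101252608"}
print(is_format_valid(props))  # -> False
```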

[jira] [Commented] (CLOUDSTACK-8848) Unexpected VR reboot after out-of-band migration

2015-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14937002#comment-14937002
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8848:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/885#issuecomment-16693
  
The functionality of this PR also seems to work:

```
WARN  [c.c.v.VirtualMachinePowerStateSyncImpl] 
(AgentManager-Handler-5:null) Detected missing VM but power state is outdated, 
wait for another process report run for VM id: 93
WARN  [c.c.v.VirtualMachinePowerStateSyncImpl] 
(AgentManager-Handler-5:null) Detected missing VM but power state is outdated, 
wait for another process report run for VM id: 94

INFO  [c.c.v.VirtualMachineManagerImpl] (AgentManager-Handler-2:null) VM 
r-93-VM is at Running and we received a power-off report while there is no 
pending jobs on it
INFO  [c.c.v.VirtualMachineManagerImpl] (AgentManager-Handler-2:null) 
Detected out-of-band stop of a HA enabled VM r-93-VM, will schedule restart

INFO  [c.c.v.VirtualMachineManagerImpl] (AgentManager-Handler-2:null) VM 
i-2-94-VM is at Running and we received a power-off report while there is no 
pending jobs on it
INFO  [c.c.v.VirtualMachineManagerImpl] (AgentManager-Handler-2:null) VM 
i-2-94-VM is sync-ed to at Stopped state according to power-off report from 
hypervisor

```

I tested it with a router and an instance. The router was restarted and the 
instance was synced to Stopped (it was non-HA).

This is a HA-enabled instance:

```
INFO  [c.c.v.VirtualMachineManagerImpl] (AgentManager-Handler-6:null) There 
is pending job or HA tasks working on the VM. vm id: 93, postpone power-change 
report by resetting power-change counters

WARN  [c.c.v.VirtualMachinePowerStateSyncImpl] 
(AgentManager-Handler-1:null) Detected missing VM but power state is outdated, 
wait for another process report run for VM id: 96
INFO  [c.c.v.VirtualMachineManagerImpl] (AgentManager-Handler-13:null) VM 
i-2-96-VM is at Running and we received a power-off report while there is no 
pending jobs on it
INFO  [c.c.v.VirtualMachineManagerImpl] (AgentManager-Handler-13:null) 
Detected out-of-band stop of a HA enabled VM i-2-96-VM, will schedule restart
INFO  [c.c.h.HighAvailabilityManagerImpl] (AgentManager-Handler-13:null) 
Schedule vm for HA:  VM[User|i-2-96-VM]
INFO  [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-2:ctx-3c02c86b work-4) 
Processing work HAWork[4-HA-96-Running-Investigating]
INFO  [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-2:ctx-3c02c86b work-4) 
HA on VM[User|i-2-96-VM]
INFO  [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-2:ctx-3c02c86b work-4) 
SimpleInvestigator found VM[User|i-2-96-VM] to be alive? false

INFO  [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-2:ctx-3c02c86b work-4) 
VM is now restarted: 96 on 4
INFO  [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-2:ctx-3c02c86b work-4) 
Completed work HAWork[4-HA-96-Running-Scheduled]
```

This gets started again as expected, so that also works fine. The only 
thing I notice, which is to be expected, is that it now takes longer before HA 
kicks in and recovery starts.
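The three outcomes in the logs above (wait for another report run, schedule an HA restart, sync to Stopped) can be sketched as a single decision function. This is a simplified illustration of the behaviour implied by the log lines, not CloudStack's actual code; the class, method and parameter names are assumptions.

```java
public class PowerStateSyncSketch {

    enum Decision { WAIT_FOR_NEXT_REPORT, SCHEDULE_HA_RESTART, SYNC_TO_STOPPED }

    /**
     * Decide what to do when a running VM shows up as powered off (or missing)
     * in a host's power report. Mirrors the three outcomes quoted in the logs.
     */
    Decision onPowerOffReport(boolean reportOutdated, boolean hasPendingJobs,
                              boolean haEnabled) {
        if (reportOutdated || hasPendingJobs) {
            // "power state is outdated, wait for another process report run" /
            // "There is pending job or HA tasks working on the VM ... postpone"
            return Decision.WAIT_FOR_NEXT_REPORT;
        }
        if (haEnabled) {
            // "Detected out-of-band stop of a HA enabled VM ... will schedule restart"
            return Decision.SCHEDULE_HA_RESTART;
        }
        // "is sync-ed to at Stopped state according to power-off report"
        return Decision.SYNC_TO_STOPPED;
    }
}
```

The extra delay before HA kicks in follows from the first branch: an outdated report only postpones the decision, so at least one more report cycle must pass before the restart is scheduled.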



> Unexpected VR reboot after out-of-band migration
> 
>
> Key: CLOUDSTACK-8848
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8848
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.5.2, 4.6.0
>Reporter: René Moser
>Assignee: René Moser
>Priority: Blocker
> Fix For: 4.5.3, 4.6.0
>
>
> In some conditions (a race condition), the VR gets rebooted after an 
> out-of-band migration was done on vCenter. 
> {panel:bgColor=#CE}
> Note: the new global setting in 4.5.2, "VR reboot after out of band 
> migration", is set to *false*, so this looks more like a bug.
> {panel}
> After a VR migration to a host, _and_ while the VM power state report 
> gathering is running, the VR (and any user VM as well) will get into the 
> "PowerReportMissing" state.
> If the VM is a VR, it will be powered off and started again on vCenter. That 
> is what we see. It cannot be reproduced every time a migration is done, but 
> it seems the problem is related to "powerReportMissing".
> I grepped the source and found this related line:
> https://github.com/apache/cloudstack/blob/4.5.2/engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java#L3616
> and it also seems that the graceful period might be related:
> https://github.com/apache/cloudstack/blob/4.5.2/engine/orchestration/src/com/cloud/vm/VirtualMachinePowerStateSyncImpl.java#L110
> In case it is a