[jira] [Commented] (CLOUDSTACK-9438) Fix for CLOUDSTACK-9252 - Make NFS version changeable in UI

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607528#comment-15607528
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9438:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1615
  
ok. Thank you. 


> Fix for CLOUDSTACK-9252 - Make NFS version changeable in UI
> ---
>
> Key: CLOUDSTACK-9438
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9438
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Introduction
> Since [9252|https://issues.apache.org/jira/browse/CLOUDSTACK-9252] it has been 
> possible to configure the NFS version for secondary storage mounts. 
> However, changing the NFS version requires inserting a new detail row into the 
> {{image_store_details}} table, with {{name = 'nfs.version'}} and {{value = 
> X}}, where X is the desired NFS version, and then restarting the management 
> server for the change to take effect.
> Our improvement aims to make the NFS version changeable from the UI, instead 
> of through the previously described workflow.
> h3. Proposed solution
> Basically, the NFS version is defined as an image store ConfigKey, which implied:
> * Adding a new Config scope: *ImageStore*
> * Making the {{ImageStoreDetailsDao}} class extend {{ResourceDetailsDaoBase}} 
> and {{ImageStoreDetailVO}} implement {{ResourceDetail}}
> * Inserting a {{'display'}} column into the {{image_store_details}} table
> * Extending {{ListCfgsCmd}} and {{UpdateCfgCmd}} to support the *ImageStore* 
> scope, which implied:
> ** Injecting {{ImageStoreDetailsDao}} and {{ImageStoreDao}} into the 
> {{ConfigurationManagerImpl}} class, in the {{cloud-server}} module.
> h4. Important
> It is important to mention that the {{ImageStoreDaoImpl}} and 
> {{ImageStoreDetailsDaoImpl}} classes were moved from the {{cloud-engine-storage}} 
> module to the {{cloud-engine-schema}} module so that Spring can find those 
> beans and inject them into {{ConfigurationManagerImpl}} in the {{cloud-server}} 
> module.
> We had these maven dependencies between modules:
> * {{cloud-server --> cloud-engine-schema}}
> * {{cloud-engine-storage --> cloud-secondary-storage --> cloud-server}}
> As {{ImageStoreDaoImpl}} and {{ImageStoreDetailsDaoImpl}} were defined in 
> {{cloud-engine-storage}} but needed in the {{cloud-server}} module, to be 
> injected into {{ConfigurationManagerImpl}}, adding a dependency from 
> {{cloud-server}} to {{cloud-engine-storage}} would have introduced a dependency 
> cycle. To avoid this cycle, we moved those classes to the {{cloud-engine-schema}} 
> module.
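For illustration, a minimal, generic sketch of the scoped-key idea described above: a global default that individual image stores can override, as the {{image_store_details}} rows do. This is not CloudStack's actual {{ConfigKey}} API; all names here are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Generic stand-in for an image-store-scoped configuration key: a global
// default ('nfs.version' here) that individual image stores can override,
// mirroring the override rows kept in image_store_details.
public class ScopedConfigKey {
    private final String name;
    private final String defaultValue;
    // per-image-store overrides, keyed by image store id
    private final Map<Long, String> imageStoreOverrides = new HashMap<>();

    public ScopedConfigKey(String name, String defaultValue) {
        this.name = name;
        this.defaultValue = defaultValue;
    }

    // roughly what an update scoped to an image store would persist
    public void setForImageStore(long imageStoreId, String value) {
        imageStoreOverrides.put(imageStoreId, value);
    }

    // resolve through the ImageStore scope, then fall back to the global value
    public String valueIn(long imageStoreId) {
        return imageStoreOverrides.getOrDefault(imageStoreId, defaultValue);
    }

    public static void main(String[] args) {
        ScopedConfigKey nfsVersion = new ScopedConfigKey("nfs.version", "3");
        nfsVersion.setForImageStore(1L, "4.1"); // override for one image store
        System.out.println(nfsVersion.name + " for store 1: " + nfsVersion.valueIn(1L)); // 4.1
        System.out.println(nfsVersion.name + " for store 2: " + nfsVersion.valueIn(2L)); // 3
    }
}
```

An {{UpdateCfgCmd}} scoped to an image store would ultimately perform the equivalent of {{setForImageStore}} here, while reads resolve through the scope before falling back to the global default.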



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-9566) instance-id metadata for baremetal VM returns ID

2016-10-25 Thread sudharma jain (JIRA)
sudharma jain created CLOUDSTACK-9566:
-

 Summary: instance-id metadata for baremetal VM returns ID
 Key: CLOUDSTACK-9566
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9566
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: sudharma jain


On Baremetal:

[root@ip-172-17-0-144 ~]# curl http://8.37.203.221/latest/meta-data/instance-id
6021

On Xen:

[root@ip-172-17-2-103 ~]# curl http://172.17.0.252/latest/meta-data/instance-id
cbeb517a-e833-4a0c-b1e8-9ed70200fbbf

The baremetal VM returns the internal database ID (6021), where other hypervisors, such as Xen, return the UUID.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-9561) Unable to delete domain/Account

2016-10-25 Thread sudharma jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sudharma jain updated CLOUDSTACK-9561:
--
Summary: Unable to delete domain/Account  (was: Unable to delete domain)

> Unable to delete domain/Account
> ---
>
> Key: CLOUDSTACK-9561
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9561
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: sudharma jain
>
> While deleting a UserAccount, cleanup for the removed VMs/volumes does not 
> happen. For the removed VMs, snapshots don't get cleaned up. Cleanup happens 
> only for volumes in the Ready state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9550) Metrics view does not filter items based on zone/cluster/host it is in

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607480#comment-15607480
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9550:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1712
  
This should be good to merge, purely UI change? /cc @murali-reddy 
@abhinandanprateek @karuturi 


> Metrics view does not filter items based on zone/cluster/host it is in
> --
>
> Key: CLOUDSTACK-9550
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9550
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.2.0, 4.8.2.0
>
>
> Go to a zone -> compute and storage tab -> browse any of the resources under 
> it which have a metrics view, such as clusters, hosts, storage pools, etc. The 
> metrics view on those resources doesn't filter items based on the 
> zone/cluster/host they are in, and ends up showing all the 
> cluster/host/storage-pool items.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9534) Allow users to destroy VR when in running state

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607478#comment-15607478
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9534:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1701
  
Is this good to merge, purely UI change? /cc @murali-reddy 
@abhinandanprateek @karuturi 


> Allow users to destroy VR when in running state
> ---
>
> Key: CLOUDSTACK-9534
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9534
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.2.0, 4.8.2.0
>
>
> When the VR is in the Running state, the destroyRouter API works via CLIs 
> such as cloudmonkey, but the UI does not show this option. It is useful to 
> quickly get rid of a VR without having to stop it (cleanly) first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9560) Root volume of deleted VM left unremoved

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607469#comment-15607469
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9560:


Github user koushik-das commented on the issue:

https://github.com/apache/cloudstack/pull/1726
  
Code LGTM


> Root volume of deleted VM left unremoved
> 
>
> Key: CLOUDSTACK-9560
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9560
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Affects Versions: 4.8.0
> Environment: XenServer
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> In the following scenario the root volume is left unremoved.
> Steps to reproduce the issue:
> 1. Create a VM.
> 2. Stop this VM.
> 3. On the page of the volume of the VM, click the 'Download Volume' icon.
> 4. Wait for the popup screen to display and cancel out, with or without 
> clicking the download link.
> 5. Destroy the VM.
> Even after the corresponding VM is deleted and expunged, the root volume is 
> left in the 'Expunging' state, unremoved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9565) Fix intermittent failure in oobm test test_oobm_zchange_password

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607445#comment-15607445
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9565:


GitHub user rhtyd opened a pull request:

https://github.com/apache/cloudstack/pull/1731

CLOUDSTACK-9565: Fix intermittent failure in test_oobm_zchange_password

Fixes intermittent integration smoke test failures caused in
test_oobm_zchange_password test.

The scope is limited to the integration test only, so running the full 
integration test suite is not necessary. We can consider code reviews and 
merge on the basis of Travis results.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack oobm-changepasswd-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1731.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1731


commit 29844a3ec94e1dc40da9c6ac90d5345b89e82360
Author: Rohit Yadav 
Date:   2016-10-26T05:01:35Z

CLOUDSTACK-9565: Fix intermittent failure in test_oobm_zchange_password

Fixes intermittent integration smoke test failures caused in
test_oobm_zchange_password test.

Signed-off-by: Rohit Yadav 




> Fix intermittent failure in oobm test test_oobm_zchange_password
> 
>
> Key: CLOUDSTACK-9565
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9565
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> The integration smoke test around the oobm feature fails intermittently for 
> the test_oobm_zchange_password test, due to time-dependent code that checks 
> the list of events/alerts. Fix the issue by getting the list early, before 
> making any password-related changes.
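As a rough, illustrative sketch of the fix pattern (the actual Marvin smoke tests are Python; this is a self-contained Java stand-in with invented names), the idea is to capture the baseline list before the password change, so the later check does not race whatever background events arrive in the meantime:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in (invented names): snapshot the event list *before*
// the password change, so the assertion compares against a fixed baseline.
public class BaselineFirstCheck {
    static final List<String> EVENTS = new ArrayList<>();

    static void changeOobmPassword() {
        EVENTS.add("OOBM.CHANGEPASSWORD"); // event emitted by the change
    }

    public static void main(String[] args) {
        int baseline = EVENTS.size(); // get the list early
        changeOobmPassword();         // then do the password-related change
        if (EVENTS.size() <= baseline) {
            throw new AssertionError("expected a new oobm event");
        }
        System.out.println("new events: " + (EVENTS.size() - baseline));
    }
}
```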



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-9565) Fix intermittent failure in oobm test test_oobm_zchange_password

2016-10-25 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-9565:
---

 Summary: Fix intermittent failure in oobm test 
test_oobm_zchange_password
 Key: CLOUDSTACK-9565
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9565
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Rohit Yadav
Assignee: Rohit Yadav


The integration smoke test around the oobm feature fails intermittently for 
the test_oobm_zchange_password test, due to time-dependent code that checks 
the list of events/alerts. Fix the issue by getting the list early, before 
making any password-related changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-9565) Fix intermittent failure in oobm test test_oobm_zchange_password

2016-10-25 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-9565:

Status: Reviewable  (was: In Progress)

> Fix intermittent failure in oobm test test_oobm_zchange_password
> 
>
> Key: CLOUDSTACK-9565
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9565
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> The integration smoke test around the oobm feature fails intermittently for 
> the test_oobm_zchange_password test, due to time-dependent code that checks 
> the list of events/alerts. Fix the issue by getting the list early, before 
> making any password-related changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9504) Fully support system VMs on managed storage

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607438#comment-15607438
 ] 

ASF subversion and git services commented on CLOUDSTACK-9504:
-

Commit 12a062585212b2bcc32afbb43e0b30f7cbba72a4 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=12a0625 ]

Merge pull request #1642 from mike-tutkowski/managed_system_vms

CLOUDSTACK-9504: System VMs on Managed Storage

This PR makes it easier to spin up system VMs on managed storage.

Managed storage is when you have a dedicated volume on a SAN for a particular 
virtual disk (making it easier to deliver QoS).

For example, with this PR, you'd likely have a single virtual disk for a system 
VM. On XenServer, that virtual disk resides by itself in a storage repository 
(no other virtual disks share this storage repository).

It was possible in the past to spin up system VMs that used managed storage, 
but this PR facilitates the use case by making changes to the System Service 
Offering dialog (and by putting in some parameter checks in the management 
server).

JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-9504

* pr/1642:
  Added support for system VMs to make use of managed storage

Signed-off-by: Rajani Karuturi 


> Fully support system VMs on managed storage
> ---
>
> Key: CLOUDSTACK-9504
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9504
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
> Environment: All
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.10.0.0
>
>
> There are three related items for this ticket:
> 1) Do not permit "custom IOPS" as a parameter when creating a system offering 
> (throw an exception if this parameter is passed in).
> 2) If you transition managed storage into maintenance mode and system VMs 
> were running on that managed storage, the host-side clustered file systems 
> (SRs on XenServer) are not removed. Remove them.
> 3) Add integration tests for system VMs with managed storage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9504) Fully support system VMs on managed storage

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607437#comment-15607437
 ] 

ASF subversion and git services commented on CLOUDSTACK-9504:
-

Commit 12a062585212b2bcc32afbb43e0b30f7cbba72a4 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=12a0625 ]

Merge pull request #1642 from mike-tutkowski/managed_system_vms

CLOUDSTACK-9504: System VMs on Managed Storage

This PR makes it easier to spin up system VMs on managed storage.

Managed storage is when you have a dedicated volume on a SAN for a particular 
virtual disk (making it easier to deliver QoS).

For example, with this PR, you'd likely have a single virtual disk for a system 
VM. On XenServer, that virtual disk resides by itself in a storage repository 
(no other virtual disks share this storage repository).

It was possible in the past to spin up system VMs that used managed storage, 
but this PR facilitates the use case by making changes to the System Service 
Offering dialog (and by putting in some parameter checks in the management 
server).

JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-9504

* pr/1642:
  Added support for system VMs to make use of managed storage

Signed-off-by: Rajani Karuturi 


> Fully support system VMs on managed storage
> ---
>
> Key: CLOUDSTACK-9504
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9504
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
> Environment: All
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.10.0.0
>
>
> There are three related items for this ticket:
> 1) Do not permit "custom IOPS" as a parameter when creating a system offering 
> (throw an exception if this parameter is passed in).
> 2) If you transition managed storage into maintenance mode and system VMs 
> were running on that managed storage, the host-side clustered file systems 
> (SRs on XenServer) are not removed. Remove them.
> 3) Add integration tests for system VMs with managed storage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9555) when a template is deleted and then copied over again, it is still marked as Removed in template_zone_ref table

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607450#comment-15607450
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9555:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1716
  
@yvsubhash Can you check why travis failed? (force push to re-run it if it's 
a false alarm)


> when a template is deleted and then copied over again, it is still marked as 
> Removed in template_zone_ref table
> 
>
> Key: CLOUDSTACK-9555
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9555
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Usage
>Affects Versions: 4.9.0
> Environment: All Hypervisors
>Reporter: subhash yedugundla
> Fix For: 4.10.0.0
>
>
> Charging of the template stops in the following use case:
> Step 1: Register a template (Name: A) to Zone 1
> Step 2: Copy the template (Name: A) to Zone 2
> Step 3: Delete the template (Name: A) from Zone 2 only
> Step 4: Copy the template (Name: A) to Zone 2
> Step 5: Check the charging of the template



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9504) Fully support system VMs on managed storage

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607440#comment-15607440
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9504:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1642


> Fully support system VMs on managed storage
> ---
>
> Key: CLOUDSTACK-9504
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9504
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
> Environment: All
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.10.0.0
>
>
> There are three related items for this ticket:
> 1) Do not permit "custom IOPS" as a parameter when creating a system offering 
> (throw an exception if this parameter is passed in).
> 2) If you transition managed storage into maintenance mode and system VMs 
> were running on that managed storage, the host-side clustered file systems 
> (SRs on XenServer) are not removed. Remove them.
> 3) Add integration tests for system VMs with managed storage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9438) Fix for CLOUDSTACK-9252 - Make NFS version changeable in UI

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607447#comment-15607447
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9438:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1615
  
@karuturi tests are about to finish


> Fix for CLOUDSTACK-9252 - Make NFS version changeable in UI
> ---
>
> Key: CLOUDSTACK-9438
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9438
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Introduction
> Since [9252|https://issues.apache.org/jira/browse/CLOUDSTACK-9252] it has been 
> possible to configure the NFS version for secondary storage mounts. 
> However, changing the NFS version requires inserting a new detail row into the 
> {{image_store_details}} table, with {{name = 'nfs.version'}} and {{value = 
> X}}, where X is the desired NFS version, and then restarting the management 
> server for the change to take effect.
> Our improvement aims to make the NFS version changeable from the UI, instead 
> of through the previously described workflow.
> h3. Proposed solution
> Basically, the NFS version is defined as an image store ConfigKey, which implied:
> * Adding a new Config scope: *ImageStore*
> * Making the {{ImageStoreDetailsDao}} class extend {{ResourceDetailsDaoBase}} 
> and {{ImageStoreDetailVO}} implement {{ResourceDetail}}
> * Inserting a {{'display'}} column into the {{image_store_details}} table
> * Extending {{ListCfgsCmd}} and {{UpdateCfgCmd}} to support the *ImageStore* 
> scope, which implied:
> ** Injecting {{ImageStoreDetailsDao}} and {{ImageStoreDao}} into the 
> {{ConfigurationManagerImpl}} class, in the {{cloud-server}} module.
> h4. Important
> It is important to mention that the {{ImageStoreDaoImpl}} and 
> {{ImageStoreDetailsDaoImpl}} classes were moved from the {{cloud-engine-storage}} 
> module to the {{cloud-engine-schema}} module so that Spring can find those 
> beans and inject them into {{ConfigurationManagerImpl}} in the {{cloud-server}} 
> module.
> We had these maven dependencies between modules:
> * {{cloud-server --> cloud-engine-schema}}
> * {{cloud-engine-storage --> cloud-secondary-storage --> cloud-server}}
> As {{ImageStoreDaoImpl}} and {{ImageStoreDetailsDaoImpl}} were defined in 
> {{cloud-engine-storage}} but needed in the {{cloud-server}} module, to be 
> injected into {{ConfigurationManagerImpl}}, adding a dependency from 
> {{cloud-server}} to {{cloud-engine-storage}} would have introduced a dependency 
> cycle. To avoid this cycle, we moved those classes to the {{cloud-engine-schema}} 
> module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607399#comment-15607399
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + vmware-55u3) has been 
kicked to run smoke tests


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was in VmwareContextPool, where a registry 
> (an ArrayList) is held that grows indefinitely. The list itself is never 
> used or consumed anywhere. There is a hashmap (pool) that returns a list of 
> contexts for an existing poolkey (address/username), and that is what is 
> used instead. The fix would be to get rid of the registry and limit the 
> length of the hashmap's context list for any poolkey.
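A minimal sketch of the described fix, assuming a pool keyed by address/username and a capped per-key list — illustrative only, not the actual VmwareContextPool code; class names and the cap value are assumptions:

```java
import java.util.ArrayDeque;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative bounded context pool: no separate registry, and the per-key
// list is capped so it cannot grow indefinitely.
public class BoundedContextPool<C> {
    private static final int MAX_PER_KEY = 10; // assumed limit
    private final Map<String, Queue<C>> pool = new ConcurrentHashMap<>();

    // poolKey is "address/username", as in the issue description
    public synchronized void returnContext(String poolKey, C context) {
        Queue<C> q = pool.computeIfAbsent(poolKey, k -> new ArrayDeque<>());
        if (q.size() >= MAX_PER_KEY) {
            q.poll(); // evict the oldest entry instead of growing forever
        }
        q.offer(context);
    }

    public synchronized C borrowContext(String poolKey) {
        Queue<C> q = pool.get(poolKey);
        return (q == null) ? null : q.poll();
    }

    public static void main(String[] args) {
        BoundedContextPool<String> p = new BoundedContextPool<>();
        for (int i = 0; i < 100; i++) {
            p.returnContext("10.0.0.1/admin", "ctx-" + i);
        }
        // only the last MAX_PER_KEY contexts are retained
        System.out.println(p.borrowContext("10.0.0.1/admin")); // ctx-90
    }
}
```

Capping at return time keeps memory bounded no matter how many contexts are churned through for a given poolkey.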



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607397#comment-15607397
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
@blueorangutan test centos7 vmware-55u3


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was in VmwareContextPool, where a registry 
> (an ArrayList) is held that grows indefinitely. The list itself is never 
> used or consumed anywhere. There is a hashmap (pool) that returns a list of 
> contexts for an existing poolkey (address/username), and that is what is 
> used instead. The fix would be to get rid of the registry and limit the 
> length of the hashmap's context list for any poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607378#comment-15607378
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-95


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was in VmwareContextPool, where a registry 
> (an ArrayList) is held that grows indefinitely. The list itself is never 
> used or consumed anywhere. There is a hashmap (pool) that returns a list of 
> contexts for an existing poolkey (address/username), and that is what is 
> used instead. The fix would be to get rid of the registry and limit the 
> length of the hashmap's context list for any poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607320#comment-15607320
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was in VmwareContextPool, where a registry 
> (an ArrayList) is held that grows indefinitely. The list itself is never 
> used or consumed anywhere. There is a hashmap (pool) that returns a list of 
> contexts for an existing poolkey (address/username), and that is what is 
> used instead. The fix would be to get rid of the registry and limit the 
> length of the hashmap's context list for any poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607317#comment-15607317
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
@blueorangutan package


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was in VmwareContextPool, where a registry 
> (an ArrayList) is held that grows indefinitely. The list itself is never 
> used or consumed anywhere. There is a hashmap (pool) that returns a list of 
> contexts for an existing poolkey (address/username), and that is what is 
> used instead. The fix would be to get rid of the registry and limit the 
> length of the hashmap's context list for any poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606912#comment-15606912
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
Trillian test result (tid-182)
Environment: vmware-60u2 (x2), Advanced Networking with Mgmt server 7
Total time taken: 26183 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1729-t182-vmware-60u2.zip
Test completed. 39 look ok, 9 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_vpc_site2site_vpn | `Failure` | 105.66 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | `Failure` | 110.63 | test_vpc_vpn.py
test_02_VPC_default_routes | `Failure` | 1136.76 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | `Failure` | 424.69 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 399.44 | test_vpc_redundant.py
test_02_revert_vm_snapshots | `Failure` | 90.24 | test_vm_snapshots.py
test_04_rvpc_privategw_static_routes | `Failure` | 665.48 | test_privategw_acl.py
test_03_vpc_privategw_restart_vpc_cleanup | `Failure` | 283.27 | test_privategw_acl.py
test_02_vpc_privategw_static_routes | `Failure` | 414.15 | test_privategw_acl.py
test_01_vpc_privategw_acl | `Failure` | 96.57 | test_privategw_acl.py
test_oobm_zchange_password | `Failure` | 20.41 | test_outofbandmanagement.py
test_04_rvpc_internallb_haproxy_stats_on_all_interfaces | `Failure` | 211.17 | test_internal_lb.py
test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 | `Failure` | 100.87 | test_internal_lb.py
test_01_redundant_vpc_site2site_vpn | `Error` | 631.61 | test_vpc_vpn.py
ContextSuite context=TestRVPCSite2SiteVpn>:teardown | `Error` | 692.04 | test_vpc_vpn.py
test_01_VPC_nics_after_destroy | `Error` | 106.64 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | `Error` | 510.49 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | `Error` | 262.33 | test_vpc_redundant.py
test_01_test_vm_volume_snapshot | `Error` | 110.69 | test_vm_snapshots.py
test_01_create_vm_snapshots | `Error` | 161.57 | test_vm_snapshots.py
test_04_rvpc_privategw_static_routes | `Error` | 700.79 | test_privategw_acl.py
test_03_vpc_privategw_restart_vpc_cleanup | `Error` | 318.60 | test_privategw_acl.py
test_01_nic | `Error` | 147.30 | test_nic.py
test_reboot_router | `Error` | 292.29 | test_network.py
test_network_rules_acquired_public_ip_3_Load_Balancer_Rule | `Error` | 36.03 | test_network.py
test_network_rules_acquired_public_ip_1_static_nat_rule | `Error` | 88.89 | test_network.py
test_03_vpc_internallb_haproxy_stats_on_all_interfaces | `Error` | 246.92 | test_internal_lb.py
test_02_internallb_roundrobin_1RVPC_3VM_HTTP_port80 | `Error` | 362.59 | test_internal_lb.py
test_02_internallb_roundrobin_1RVPC_3VM_HTTP_port80 | `Error` | 377.95 | test_internal_lb.py
test_deployvm_firstfit | `Error` | 35.46 | test_deploy_vms_with_varied_deploymentplanners.py
test_04_rvpc_network_garbage_collector_nics | Success | 1527.93 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 636.27 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 20.68 | test_volumes.py
test_06_download_detached_volume | Success | 45.42 | test_volumes.py
test_05_detach_volume | Success | 100.21 | test_volumes.py
test_04_delete_attached_volume | Success | 10.16 | test_volumes.py
test_03_download_attached_volume | Success | 15.19 | test_volumes.py
test_02_attach_volume | Success | 48.60 | test_volumes.py
test_01_create_volume | Success | 434.85 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.16 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 207.20 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.01 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.83 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.20 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 50.67 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.06 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.10 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.09 | test_vm_life_cycle.py
test_02_start_vm | Success | 15.13 | test_vm_life_cycle.py
test_01_stop_vm | Success | 5.08 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 175.92 | test_templates.py
test_08_list_system_templates | Success | 0.02 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py

[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606813#comment-15606813
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
Trillian test result (tid-181)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 6
Total time taken: 39964 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1729-t181-vmware-55u3.zip
Test completed. 32 look ok, 16 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_vpc_site2site_vpn | `Failure` | 293.36 | test_vpc_vpn.py
test_02_VPC_default_routes | `Failure` | 163.46 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | `Failure` | 1172.39 | test_vpc_router_nics.py
test_02_redundant_VPC_default_routes | `Failure` | 1385.81 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 218.35 | test_privategw_acl.py
test_03_vpc_privategw_restart_vpc_cleanup | `Failure` | 468.24 | test_privategw_acl.py
test_02_vpc_privategw_static_routes | `Failure` | 117.64 | test_privategw_acl.py
test_04_rvpc_internallb_haproxy_stats_on_all_interfaces | `Failure` | 25.74 | test_internal_lb.py
test_02_internallb_roundrobin_1RVPC_3VM_HTTP_port80 | `Failure` | 1204.62 | test_internal_lb.py
test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 | `Failure` | 515.79 | test_internal_lb.py
test_01_redundant_vpc_site2site_vpn | `Error` | 780.17 | test_vpc_vpn.py
ContextSuite context=TestRVPCSite2SiteVpn>:teardown | `Error` | 916.49 | test_vpc_vpn.py
test_05_rvpc_multi_tiers | `Error` | 1210.47 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Error` | 37.10 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | `Error` | 1457.18 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Error` | 208.62 | test_vpc_redundant.py
ContextSuite context=TestVmSnapshot>:setup | `Error` | 400.79 | test_vm_snapshots.py
test_deploy_vm_multiple | `Error` | 2545.00 | test_vm_life_cycle.py
ContextSuite context=TestVMLifeCycle>:setup | `Error` | 2911.25 | test_vm_life_cycle.py
ContextSuite context=TestSnapshotRootDisk>:teardown | `Error` | 82.19 | test_snapshots.py
test_09_reboot_router | `Error` | 5.17 | test_routers.py
test_08_start_router | `Error` | 2583.70 | test_routers.py
test_03_restart_network_cleanup | `Error` | 40.64 | test_routers.py
test_03_vpc_privategw_restart_vpc_cleanup | `Error` | 508.86 | test_privategw_acl.py
test_01_nic | `Error` | 224.38 | test_nic.py
test_network_rules_acquired_public_ip_3_Load_Balancer_Rule | `Error` | 36.37 | test_network.py
test_delete_account | `Error` | 203.59 | test_network.py
test_assign_and_removal_lb | `Error` | 0.13 | test_loadbalance.py
test_02_create_lb_rule_non_nat | `Error` | 92.09 | test_loadbalance.py
test_02_create_lb_rule_non_nat | `Error` | 97.20 | test_loadbalance.py
ContextSuite context=TestListIdsParams>:setup | `Error` | 0.00 | test_list_ids_parameter.py
test_03_vpc_internallb_haproxy_stats_on_all_interfaces | `Error` | 192.27 | test_internal_lb.py
test_03_vpc_internallb_haproxy_stats_on_all_interfaces | `Error` | 197.39 | test_internal_lb.py
test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 | `Error` | 586.78 | test_internal_lb.py
test_deployvm_userdata_post | `Error` | 60.87 | test_deploy_vm_with_userdata.py
test_deployvm_userdata | `Error` | 171.92 | test_deploy_vm_with_userdata.py
test_deploy_vm_from_iso | `Error` | 72.66 | test_deploy_vm_iso.py
test_DeployVmAntiAffinityGroup | `Error` | 257.79 | test_affinity_groups.py
test_01_vpc_remote_access_vpn | Success | 207.35 | test_vpc_vpn.py
test_09_delete_detached_volume | Success | 36.11 | test_volumes.py
test_06_download_detached_volume | Success | 50.61 | test_volumes.py
test_05_detach_volume | Success | 110.33 | test_volumes.py
test_04_delete_attached_volume | Success | 15.27 | test_volumes.py
test_03_download_attached_volume | Success | 20.37 | test_volumes.py
test_02_attach_volume | Success | 53.82 | test_volumes.py
test_01_create_volume | Success | 518.04 | test_volumes.py
test_01_test_vm_volume_snapshot | Success | 171.83 | test_vm_snapshots.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 247.66 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.07 | 

[jira] [Commented] (CLOUDSTACK-8830) [VMware] VM snapshot fails for 12 min after instance creation

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606648#comment-15606648
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8830:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1677
  
@jburwell a Trillian-Jenkins test job (centos7 mgmt + vmware55u3) has been 
kicked to run smoke tests


> [VMware] VM snapshot fails for 12 min after instance creation
> -
>
> Key: CLOUDSTACK-8830
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8830
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Maneesha
>Assignee: Maneesha
>
> ISSUE
> 
> [VMware] VM snapshot fails for 12 min after instance creation
> Environment
> ==
> Product Name: Cloudstack
> Hypervisor: VMWare VSphere 6
> VM DETAILS
> ==
> i-84987-16119-VM
> TROUBLESHOOTING
> ==
> I see the following failure, followed by an immediate success result, for 
> the CreateVMSnapshot call:
> {noformat}
> 2015-07-24 08:20:55,363 DEBUG [c.c.a.t.Request] 
> (Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
> (logid:8b87ab8a) Seq 80-6161487240196259878: Sending  { Cmd , MgmtId: 
> 345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
>  i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
> 49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
>  }
> 2015-07-24 08:20:55,373 DEBUG [c.c.a.t.Request] 
> (Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
> (logid:8b87ab8a) Seq 80-6161487240196259878: Executing:  { Cmd , MgmtId: 
> 345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
>  i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
> 49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
>  }
> 2015-07-24 08:20:55,374 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-66:ctx-5fbdccd8) (logid:710814a5) Seq 80-6161487240196259878: 
> Executing request
> 2015-07-24 08:20:55,523 ERROR [c.c.h.v.m.VmwareStorageManagerImpl] 
> (DirectAgent-66:ctx-5fbdccd8 ussfoldcsesx112.adslab.local, 
> job-64835/job-64836, cmd: CreateVMSnapshotCommand) (logid:8b87ab8a) failed to 
> create snapshot for vm:i-84987-16119-VM due to null
> 2015-07-24 08:20:55,524 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-66:ctx-5fbdccd8) (logid:8b87ab8a) Seq 80-6161487240196259878: 
> Response Received: 
> 2015-07-24 08:20:55,525 DEBUG [c.c.a.t.Request] (DirectAgent-66:ctx-5fbdccd8) 
> 

[jira] [Commented] (CLOUDSTACK-8830) [VMware] VM snapshot fails for 12 min after instance creation

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606647#comment-15606647
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8830:


Github user jburwell commented on the issue:

https://github.com/apache/cloudstack/pull/1677
  
@blueorangutan test centos7 vmware55u3


> [VMware] VM snapshot fails for 12 min after instance creation
> -
>
> Key: CLOUDSTACK-8830
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8830
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Maneesha
>Assignee: Maneesha
>
> ISSUE
> 
> [VMware] VM snapshot fails for 12 min after instance creation
> Environment
> ==
> Product Name: Cloudstack
> Hypervisor: VMWare VSphere 6
> VM DETAILS
> ==
> i-84987-16119-VM
> TROUBLESHOOTING
> ==
> I see the following failure, followed by an immediate success result, for 
> the CreateVMSnapshot call:
> {noformat}
> 2015-07-24 08:20:55,363 DEBUG [c.c.a.t.Request] 
> (Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
> (logid:8b87ab8a) Seq 80-6161487240196259878: Sending  { Cmd , MgmtId: 
> 345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
>  i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
> 49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
>  }
> 2015-07-24 08:20:55,373 DEBUG [c.c.a.t.Request] 
> (Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
> (logid:8b87ab8a) Seq 80-6161487240196259878: Executing:  { Cmd , MgmtId: 
> 345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
>  i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
> 49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
>  }
> 2015-07-24 08:20:55,374 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-66:ctx-5fbdccd8) (logid:710814a5) Seq 80-6161487240196259878: 
> Executing request
> 2015-07-24 08:20:55,523 ERROR [c.c.h.v.m.VmwareStorageManagerImpl] 
> (DirectAgent-66:ctx-5fbdccd8 ussfoldcsesx112.adslab.local, 
> job-64835/job-64836, cmd: CreateVMSnapshotCommand) (logid:8b87ab8a) failed to 
> create snapshot for vm:i-84987-16119-VM due to null
> 2015-07-24 08:20:55,524 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-66:ctx-5fbdccd8) (logid:8b87ab8a) Seq 80-6161487240196259878: 
> Response Received: 
> 2015-07-24 08:20:55,525 DEBUG [c.c.a.t.Request] (DirectAgent-66:ctx-5fbdccd8) 
> (logid:8b87ab8a) Seq 80-6161487240196259878: Processing:  { Ans: , MgmtId: 

[jira] [Commented] (CLOUDSTACK-9359) Return ip6address in Basic Networking

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606040#comment-15606040
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9359:


Github user NuxRo commented on the issue:

https://github.com/apache/cloudstack/pull/1700
  
Wido,

Will do, but need to wait until next week to get some help re getting IPv6 
on my test rig. As soon as I have feedback, I'll post here.

Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "Wido den Hollander" 
> To: "apache/cloudstack" 
> Cc: "NuxRo" , "Mention" 
> Sent: Friday, 21 October, 2016 16:23:30
> Subject: Re: [apache/cloudstack] CLOUDSTACK-9359: IPv6 for Basic 
Networking (#1700)

> @NuxRo The first test would be to run this code and actually see that the API
> returns an IPv6 address which you can then use to reach your Instance.
> 
> --
> You are receiving this because you were mentioned.
> Reply to this email directly or view it on GitHub:
> https://github.com/apache/cloudstack/pull/1700#issuecomment-255406953



> Return ip6address in Basic Networking
> -
>
> Key: CLOUDSTACK-9359
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9359
> Project: CloudStack
>  Issue Type: Sub-task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API, Management Server
> Environment: CloudStack Basic Networking
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
>  Labels: api, basic-networking, ipv6
> Fix For: Future
>
>
> In Basic Networking, Instances will obtain their IPv6 address using SLAAC 
> (Stateless Address Autoconfiguration) as described in the Wiki: 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+in+Basic+Networking
> When an ip6cidr is configured and is a /64, we can calculate the IPv6 address 
> an Instance will obtain.
> There is no need to store an IPv6 address in the database; with the /64 subnet 
> (ip6cidr) and the MAC address we can calculate the address using EUI-64:
> "A 64-bit interface identifier is most commonly derived from its 48-bit MAC 
> address. A MAC address 00:0C:29:0C:47:D5 is turned into a 64-bit EUI-64 by 
> inserting FF:FE in the middle: 00:0C:29:FF:FE:0C:47:D5. When this EUI-64 is 
> used to form an IPv6 address it is modified:[1] the meaning of the 
> Universal/Local bit (the 7th most significant bit of the EUI-64, starting 
> from 1) is inverted, so that a 1 now means Universal. To create an IPv6 
> address with the network prefix 2001:db8:1:2::/64 it yields the address 
> 2001:db8:1:2:020c:29ff:fe0c:47d5 (with the underlined U/L (=Universal/Local) 
> bit inverted to a 1, because the MAC address is universally unique)."
> The API should return this address in the ip6address field for a NIC in Basic 
> Networking.
> End-Users can use this, but it can also be used internally by Security 
> Grouping to program rules.
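A minimal sketch of the EUI-64 derivation described above, assuming the /64 prefix is passed as a string with its trailing separator (e.g. "2001:db8:1:2:"); the helper is ours, not CloudStack's:

```java
// Derive a SLAAC/EUI-64 IPv6 address from a /64 prefix and a MAC address:
// split the MAC, insert ff:fe in the middle, and invert the
// Universal/Local bit (7th most significant bit) of the first octet.
public class Eui64 {
    static String eui64Address(String prefix, String mac) {
        String[] parts = mac.split(":");
        int[] b = new int[6];
        for (int i = 0; i < 6; i++) {
            b[i] = Integer.parseInt(parts[i], 16);
        }
        b[0] ^= 0x02; // invert the U/L bit
        return String.format("%s%02x%02x:%02xff:fe%02x:%02x%02x",
                prefix, b[0], b[1], b[2], b[3], b[4], b[5]);
    }

    public static void main(String[] args) {
        // reproduces the example from the description
        System.out.println(eui64Address("2001:db8:1:2:", "00:0C:29:0C:47:D5"));
        // prints 2001:db8:1:2:020c:29ff:fe0c:47d5
    }
}
```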



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9504) Fully support system VMs on managed storage

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606020#comment-15606020
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9504:


Github user mike-tutkowski commented on the issue:

https://github.com/apache/cloudstack/pull/1642
  
@rhtyd @karuturi I think we are good to go on this PR now. Do you agree?


> Fully support system VMs on managed storage
> ---
>
> Key: CLOUDSTACK-9504
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9504
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
> Environment: All
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.10.0.0
>
>
> There are three related items for this ticket:
> 1) Do not permit "custom IOPS" as a parameter when creating a system offering 
> (throw an exception if this parameter is passed in).
> 2) If you transition managed storage into maintenance mode and system VMs 
> were running on that managed storage, the host-side clustered file systems 
> (SRs on XenServer) are not removed. Remove them.
> 3) Add integration tests for system VMs with managed storage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8830) [VMware] VM snapshot fails for 12 min after instance creation

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605990#comment-15605990
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8830:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1677
  
LGTM for testing
```
2016-10-24 20:52:55 47 test files to run
2016-10-24 20:52:55 Running test file: test/integration/smoke/test_affinity_groups_projects.py
2016-10-24 20:56:07 Running test file: test/integration/smoke/test_affinity_groups.py
2016-10-24 20:59:18 Running test file: test/integration/smoke/test_deploy_vgpu_enabled_vm.py
2016-10-24 20:59:19 Running test file: test/integration/smoke/test_deploy_vm_iso.py
2016-10-24 21:16:20 Running test file: test/integration/smoke/test_deploy_vm_root_resize.py
2016-10-24 21:16:44 Running test file: test/integration/smoke/test_deploy_vms_with_varied_deploymentplanners.py
2016-10-24 21:21:22 Running test file: test/integration/smoke/test_deploy_vm_with_userdata.py
2016-10-24 21:24:32 Running test file: test/integration/smoke/test_disk_offerings.py
2016-10-24 21:24:33 Running test file: test/integration/smoke/test_dynamicroles.py
2016-10-24 21:24:34 Running test file: test/integration/smoke/test_global_settings.py
2016-10-24 21:24:35 Running test file: test/integration/smoke/test_guest_vlan_range.py
2016-10-24 21:24:59 Running test file: test/integration/smoke/test_hosts.py
2016-10-24 21:25:00 Running test file: test/integration/smoke/test_internal_lb.py
2016-10-24 22:05:15 Running test file: test/integration/smoke/test_iso.py
2016-10-24 22:10:16 Running test file: test/integration/smoke/test_list_ids_parameter.py
2016-10-24 22:24:18 Running test file: test/integration/smoke/test_loadbalance.py
2016-10-24 22:45:45 Running test file: test/integration/smoke/test_login.py
2016-10-24 22:46:09 Running test file: test/integration/smoke/test_multipleips_per_nic.py
2016-10-24 22:48:45 Running test file: test/integration/smoke/test_network_acl.py
2016-10-24 22:51:21 Running test file: test/integration/smoke/test_network.py
2016-10-24 23:23:44 Running test file: test/integration/smoke/test_nic_adapter_type.py
2016-10-24 23:23:52 Running test file: test/integration/smoke/test_nic.py
2016-10-24 23:33:13 Running test file: test/integration/smoke/test_non_contigiousvlan.py
2016-10-24 23:33:30 Running test file: test/integration/smoke/test_over_provisioning.py
2016-10-24 23:33:31 Running test file: test/integration/smoke/test_password_server.py
2016-10-24 23:37:44 Running test file: test/integration/smoke/test_portable_publicip.py
2016-10-24 23:39:07 Running test file: test/integration/smoke/test_primary_storage.py
2016-10-24 23:39:08 Running test file: test/integration/smoke/test_public_ip_range.py
2016-10-24 23:39:17 Running test file: test/integration/smoke/test_pvlan.py
2016-10-24 23:39:23 Running test file: test/integration/smoke/test_regions.py
2016-10-24 23:39:25 Running test file: test/integration/smoke/test_reset_vm_on_reboot.py
2016-10-24 23:42:47 Running test file: test/integration/smoke/test_resource_detail.py
2016-10-24 23:43:00 Running test file: test/integration/smoke/test_router_dhcphosts.py
2016-10-24 23:47:42 Running test file: test/integration/smoke/test_routers_iptables_default_policy.py
2016-10-24 23:54:49 Running test file: test/integration/smoke/test_routers.py
2016-10-25 00:02:55 Running test file: test/integration/smoke/test_scale_vm.py
2016-10-25 00:06:15 Running test file: test/integration/smoke/test_secondary_storage.py
2016-10-25 00:06:18 Running test file: test/integration/smoke/test_service_offerings.py
2016-10-25 00:10:42 Running test file: test/integration/smoke/test_snapshots.py
2016-10-25 00:15:15 Running test file: test/integration/smoke/test_ssvm.py
2016-10-25 00:33:50 Running test file: test/integration/smoke/test_staticroles.py
2016-10-25 00:34:31 Running test file: test/integration/smoke/test_templates.py
2016-10-25 00:50:07 Running test file: test/integration/smoke/test_usage_events.py
2016-10-25 00:50:08 Running test file: test/integration/smoke/test_vm_life_cycle.py
2016-10-25 01:05:18 Running test file: test/integration/smoke/test_vm_snapshots.py
2016-10-25 01:20:58 Running test file: test/integration/smoke/test_volumes.py
2016-10-25 01:38:21 Running test file: test/integration/smoke/test_vpc_vpn.py

Test to update a physical network and extend its vlan ... === TestName: test_extendPhysicalNetworkVlan | Status : SUCCESS ===
ok
Test Site 2 Site VPN Across redundant VPCs ... === TestName: test_01_redundant_vpc_site2site_vpn | Status : SUCCESS ===
ok
Test Remote Access VPN in VPC ... === TestName: test_01_vpc_remote_access_vpn | 
```

[jira] [Commented] (CLOUDSTACK-8830) [VMware] VM snapshot fails for 12 min after instance creation

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605988#comment-15605988
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8830:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1677
  
@rhtyd @murali-reddy @PaulAngus @jburwell Thanks for fixing all outstanding 
Marvin test issues. We now have a 100% passing score on Marvin integration 
tests on a RHEL 6 management server, advanced networking, and VMware 5.5/6.0 
hypervisors. Please find the results for this PR.


> [VMware] VM snapshot fails for 12 min after instance creation
> -
>
> Key: CLOUDSTACK-8830
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8830
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Maneesha
>Assignee: Maneesha
>
> ISSUE
> 
> [VMware] VM snapshot fails for 12 min after instance creation
> Environment
> ==
> Product Name: Cloudstack
> Hypervisor: VMWare VSphere 6
> VM DETAILS
> ==
> i-84987-16119-VM
> TROUBLESHOOTING
> ==
> I see the following failure, followed by an immediate success, for the 
> CreateVMSnapshot call
> {noformat}
> 2015-07-24 08:20:55,363 DEBUG [c.c.a.t.Request] 
> (Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
> (logid:8b87ab8a) Seq 80-6161487240196259878: Sending  { Cmd , MgmtId: 
> 345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
>  i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
> 49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
>  }
> 2015-07-24 08:20:55,373 DEBUG [c.c.a.t.Request] 
> (Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
> (logid:8b87ab8a) Seq 80-6161487240196259878: Executing:  { Cmd , MgmtId: 
> 345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
>  i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
> 49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
>  }
> 2015-07-24 08:20:55,374 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-66:ctx-5fbdccd8) (logid:710814a5) Seq 80-6161487240196259878: 
> Executing request
> 2015-07-24 08:20:55,523 ERROR [c.c.h.v.m.VmwareStorageManagerImpl] 
> (DirectAgent-66:ctx-5fbdccd8 ussfoldcsesx112.adslab.local, 
> job-64835/job-64836, cmd: CreateVMSnapshotCommand) (logid:8b87ab8a) failed to 
> create snapshot for vm:i-84987-16119-VM due to null
> 2015-07-24 08:20:55,524 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-66:ctx-5fbdccd8) 

[jira] [Commented] (CLOUDSTACK-9551) Pull KVM agent's tmp folder usage within its own folder structure

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605634#comment-15605634
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9551:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1728
  
Trillian test result (tid-180)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 26449 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1728-t180-kvm-centos7.zip
Test completed. 47 look ok, 1 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 875.27 | test_vpc_redundant.py
test_oobm_zchange_password | `Failure` | 21.16 | test_outofbandmanagement.py
test_01_vpc_site2site_vpn | Success | 164.89 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.11 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 250.62 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 274.88 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 558.92 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 514.13 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1416.24 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 568.97 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1275.56 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.45 | test_volumes.py
test_08_resize_volume | Success | 15.40 | test_volumes.py
test_07_resize_fail | Success | 20.45 | test_volumes.py
test_06_download_detached_volume | Success | 15.28 | test_volumes.py
test_05_detach_volume | Success | 100.23 | test_volumes.py
test_04_delete_attached_volume | Success | 10.19 | test_volumes.py
test_03_download_attached_volume | Success | 15.29 | test_volumes.py
test_02_attach_volume | Success | 44.33 | test_volumes.py
test_01_create_volume | Success | 711.34 | test_volumes.py
test_deploy_vm_multiple | Success | 248.52 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.67 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.25 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 41.25 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.12 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.85 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.91 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.33 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 141.20 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.19 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 40.45 | test_templates.py
test_10_destroy_cpvm | Success | 161.86 | test_ssvm.py
test_09_destroy_ssvm | Success | 168.73 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.59 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.82 | test_ssvm.py
test_06_stop_cpvm | Success | 131.78 | test_ssvm.py
test_05_stop_ssvm | Success | 133.83 | test_ssvm.py
test_04_cpvm_internals | Success | 1.22 | test_ssvm.py
test_03_ssvm_internals | Success | 3.31 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.15 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.28 | test_snapshots.py
test_04_change_offering_small | Success | 239.70 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.20 | test_secondary_storage.py
test_09_reboot_router | Success | 35.34 | test_routers.py
test_08_start_router | Success | 30.32 | test_routers.py
test_07_stop_router | Success | 10.16 | test_routers.py
test_06_router_advanced | Success | 0.06 | test_routers.py
test_05_router_basic | Success | 0.04 | test_routers.py

[jira] [Commented] (CLOUDSTACK-9560) Root volume of deleted VM left unremoved

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605575#comment-15605575
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9560:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1726
  
Trillian test result (tid-179)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 24929 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1726-t179-kvm-centos7.zip
Test completed. 43 look ok, 0 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_vpc_site2site_vpn | Success | 160.69 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.88 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 251.75 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 284.86 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 557.95 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 515.21 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1310.57 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 556.64 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 763.30 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1301.57 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.53 | test_volumes.py
test_08_resize_volume | Success | 15.48 | test_volumes.py
test_07_resize_fail | Success | 20.45 | test_volumes.py
test_06_download_detached_volume | Success | 15.55 | test_volumes.py
test_05_detach_volume | Success | 100.34 | test_volumes.py
test_04_delete_attached_volume | Success | 10.20 | test_volumes.py
test_03_download_attached_volume | Success | 15.44 | test_volumes.py
test_02_attach_volume | Success | 45.26 | test_volumes.py
test_01_create_volume | Success | 714.64 | test_volumes.py
test_deploy_vm_multiple | Success | 274.23 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.97 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.19 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 41.34 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.29 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 126.03 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 126.33 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.21 | test_vm_life_cycle.py
test_01_stop_vm | Success | 35.40 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 81.08 | test_templates.py
test_08_list_system_templates | Success | 0.05 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.16 | test_templates.py
test_03_delete_template | Success | 5.21 | test_templates.py
test_02_edit_template | Success | 90.27 | test_templates.py
test_01_create_template | Success | 40.54 | test_templates.py
test_10_destroy_cpvm | Success | 161.68 | test_ssvm.py
test_09_destroy_ssvm | Success | 164.54 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.60 | test_ssvm.py
test_07_reboot_ssvm | Success | 163.81 | test_ssvm.py
test_06_stop_cpvm | Success | 132.02 | test_ssvm.py
test_05_stop_ssvm | Success | 164.12 | test_ssvm.py
test_04_cpvm_internals | Success | 1.32 | test_ssvm.py
test_03_ssvm_internals | Success | 3.27 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.99 | test_snapshots.py
test_04_change_offering_small | Success | 209.95 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.09 | test_service_offerings.py
test_01_create_service_offering | Success | 0.10 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.12 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.20 | test_secondary_storage.py
test_09_reboot_router | Success | 40.45 | test_routers.py
test_08_start_router | Success | 30.30 | test_routers.py
test_07_stop_router | Success | 10.17 | test_routers.py
test_06_router_advanced | Success | 0.05 | test_routers.py
test_05_router_basic | Success | 0.04 | test_routers.py
test_04_restart_network_wo_cleanup | Success | 5.68 | test_routers.py
test_03_restart_network_cleanup | 

[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605543#comment-15605543
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user rhtyd closed the pull request at:

https://github.com/apache/cloudstack/pull/1729


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to memory leak was in VmwareContextPool where a registry is held 
> (arraylist) that grows indefinitely. The list itself is not used anywhere or 
> consumed. There exists a hashmap (pool) that returns a list of contexts for 
> existing poolkey (address/username) that is used instead. The fix would be to 
> get rid of the registry and limit the hashmap context list length for any 
> poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605542#comment-15605542
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


GitHub user rhtyd reopened a pull request:

https://github.com/apache/cloudstack/pull/1729

CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, it was found that the largest 
contributor
to memory leak was in VmwareContextPool where a registry is held (arraylist)
that grows indefinitely. The list itself is not used anywhere or consumed. 
There
exists a hashmap (pool) that returns a list of contexts for existing poolkey
(address/username) that is used instead.

This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.
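
To make the shape of that change concrete, here is a minimal sketch of a bounded, per-key context pool. The names are hypothetical (BoundedContextPool, MAX_IDLE_PER_KEY, and the type parameter C standing in for the VMware context type); this is not the actual VmwareContextPool code, only an illustration of dropping the side registry and capping each poolkey's idle list:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Sketch only: a pool keyed by "address/username" with no unbounded side
// registry. Idle contexts beyond the per-key cap are rejected so the caller
// can close them instead of letting the pool grow indefinitely.
public class BoundedContextPool<C> {
    private static final int MAX_IDLE_PER_KEY = 10; // illustrative cap

    private final Map<String, Queue<C>> pool = new HashMap<>();

    private static String poolKey(String address, String username) {
        return address + "/" + username;
    }

    // Return an idle context for this endpoint, or null if none is pooled.
    public synchronized C acquire(String address, String username) {
        Queue<C> idle = pool.get(poolKey(address, username));
        return (idle == null) ? null : idle.poll();
    }

    // Return a context to the pool; refuse once the per-key cap is reached,
    // signalling the caller to close the context rather than cache it.
    public synchronized boolean release(String address, String username, C context) {
        Queue<C> idle = pool.computeIfAbsent(poolKey(address, username), k -> new ArrayDeque<>());
        if (idle.size() >= MAX_IDLE_PER_KEY) {
            return false;
        }
        return idle.offer(context);
    }
}
```

The design point is that release() is the only place contexts are retained, so the cap bounds retained memory per endpoint, whereas a registry that is appended to but never consumed grows without bound.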

@blueorangutan package

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack vmware-memleak-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1729.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1729


commit ed964aa2a61e3a36fcb9b5bd8fb80c3f1434
Author: Rohit Yadav 
Date:   2016-10-25T09:50:33Z

CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, it was found that the largest 
contributor
to memory leak was in VmwareContextPool where a registry is held (arraylist)
that grows indefinitely. The list itself is not used anywhere or consumed. 
There
exists a hashmap (pool) that returns a list of contexts for existing poolkey
(address/username) that is used instead.

This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.

Signed-off-by: Rohit Yadav 




> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to memory leak was in VmwareContextPool where a registry is held 
> (arraylist) that grows indefinitely. The list itself is not used anywhere or 
> consumed. There exists a hashmap (pool) that returns a list of contexts for 
> existing poolkey (address/username) that is used instead. The fix would be to 
> get rid of the registry and limit the hashmap context list length for any 
> poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9504) Fully support system VMs on managed storage

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605538#comment-15605538
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9504:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1642
  
Trillian test result (tid-178)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 25671 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1642-t178-kvm-centos7.zip
Test completed. 48 look ok, 0 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_oobm_zchange_password | `Failure` | 20.50 | test_outofbandmanagement.py
test_01_vpc_site2site_vpn | Success | 175.25 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.25 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 255.94 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 310.18 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 541.03 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 515.16 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1322.44 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 559.41 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 755.94 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1296.27 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 16.10 | test_volumes.py
test_08_resize_volume | Success | 15.42 | test_volumes.py
test_07_resize_fail | Success | 20.67 | test_volumes.py
test_06_download_detached_volume | Success | 15.35 | test_volumes.py
test_05_detach_volume | Success | 100.26 | test_volumes.py
test_04_delete_attached_volume | Success | 10.21 | test_volumes.py
test_03_download_attached_volume | Success | 15.31 | test_volumes.py
test_02_attach_volume | Success | 45.61 | test_volumes.py
test_01_create_volume | Success | 735.31 | test_volumes.py
test_deploy_vm_multiple | Success | 268.64 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.76 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.16 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.94 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.88 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.84 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.19 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.33 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 90.80 | test_templates.py
test_08_list_system_templates | Success | 0.04 | test_templates.py
test_07_list_public_templates | Success | 0.07 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.17 | test_templates.py
test_03_delete_template | Success | 5.10 | test_templates.py
test_02_edit_template | Success | 90.15 | test_templates.py
test_01_create_template | Success | 40.45 | test_templates.py
test_10_destroy_cpvm | Success | 131.83 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.88 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.76 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.61 | test_ssvm.py
test_06_stop_cpvm | Success | 131.84 | test_ssvm.py
test_05_stop_ssvm | Success | 138.72 | test_ssvm.py
test_04_cpvm_internals | Success | 1.18 | test_ssvm.py
test_03_ssvm_internals | Success | 3.36 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.16 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.26 | test_snapshots.py
test_04_change_offering_small | Success | 242.72 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.09 | test_service_offerings.py
test_01_create_service_offering | Success | 0.12 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.16 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.19 | test_secondary_storage.py
test_09_reboot_router | Success | 45.42 | test_routers.py
test_08_start_router | Success | 35.35 | test_routers.py
test_07_stop_router | Success | 10.16 | test_routers.py
test_06_router_advanced | Success | 0.06 | test_routers.py
test_05_router_basic | Success | 0.04 | test_routers.py

[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605199#comment-15605199
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
@rhtyd a Trillian-Jenkins test job (centos6 mgmt + vmware-55u3) has been 
kicked to run smoke tests


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to memory leak was in VmwareContextPool where a registry is held 
> (arraylist) that grows indefinitely. The list itself is not used anywhere or 
> consumed. There exists a hashmap (pool) that returns a list of contexts for 
> existing poolkey (address/username) that is used instead. The fix would be to 
> get rid of the registry and limit the hashmap context list length for any 
> poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605197#comment-15605197
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
@blueorangutan test centos6 vmware-55u3


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to memory leak was in VmwareContextPool where a registry is held 
> (arraylist) that grows indefinitely. The list itself is not used anywhere or 
> consumed. There exists a hashmap (pool) that returns a list of contexts for 
> existing poolkey (address/username) that is used instead. The fix would be to 
> get rid of the registry and limit the hashmap context list length for any 
> poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605168#comment-15605168
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-94


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to memory leak was in VmwareContextPool where a registry is held 
> (arraylist) that grows indefinitely. The list itself is not used anywhere or 
> consumed. There exists a hashmap (pool) that returns a list of contexts for 
> existing poolkey (address/username) that is used instead. The fix would be to 
> get rid of the registry and limit the hashmap context list length for any 
> poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605126#comment-15605126
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
@rhtyd I understand these words: "help", "hello", "thanks", "package", 
"test"
Test run usage: test (mgmt server, one of: centos6, centos7, ubuntu) 
(hypervisor, one of: kvm-centos6, kvm-centos7, kvm-ubuntu, xenserver-65sp1, 
xenserver-62sp1, vmware-60u2, vmware-55u3, vmware-51u1, vmware-50u1)
Authorized contributors for kicking Trillian Jenkins test jobs are: 
['rhtyd', 'jburwell', 'murali-reddy', 'abhinandanprateek', 'PaulAngus', 
'borisstoyanov', 'karuturi']


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to memory leak was in VmwareContextPool where a registry is held 
> (arraylist) that grows indefinitely. The list itself is not used anywhere or 
> consumed. There exists a hashmap (pool) that returns a list of contexts for 
> existing poolkey (address/username) that is used instead. The fix would be to 
> get rid of the registry and limit the hashmap context list length for any 
> poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605122#comment-15605122
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
@blueorangutan help


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to memory leak was in VmwareContextPool where a registry is held 
> (arraylist) that grows indefinitely. The list itself is not used anywhere or 
> consumed. There exists a hashmap (pool) that returns a list of contexts for 
> existing poolkey (address/username) that is used instead. The fix would be to 
> get rid of the registry and limit the hashmap context list length for any 
> poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9319) Timeout is not passed to virtual router operations consistently

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605110#comment-15605110
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9319:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1451
  
### ACS CI BVT Run
 **Summary:**
 Build Number 121
 Hypervisor xenserver
 NetworkType Advanced
 Passed=103
 Failed=2
 Skipped=5

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_deploy_vm_iso.py

 * test_deploy_vm_from_iso Failing since 6 runs

* test_vm_life_cycle.py

 * test_10_attachAndDetach_iso Failing since 7 runs


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suits:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_snapshots.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_ssvm.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_routers_network_ops.py
test_disk_offerings.py


> Timeout is not passed to virtual router operations consistently
> ---
>
> Key: CLOUDSTACK-9319
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9319
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
> Environment: KVM + Ceph cloud, Ubuntu hosts.
>Reporter: Aaron Brady
>Assignee: Aaron Brady
>Priority: Trivial
> Fix For: 4.10.0.0
>
>
> The timeout parameter is not passed down to `applyConfigToVR` inside 
> `VirtualRoutingResource` in all cases.
> This timeout is worked out as 3 seconds per command or 120 seconds (whichever 
> is larger), but because it's not passed to the first invocation, the default 
> (120 seconds, DEFAULT_EXECUTEINVR_TIMEOUT) is used.
> In a recent upgrade of our Virtual Routers, the timeout was being hit and 
> increasing `router.aggregation.command.each.timeout` had no effect. I built a 
> custom 4.8 agent with the timeout increased to allow the upgrade to continue.
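
The timeout rule described above is easy to state in code. A minimal sketch follows, with an illustrative method name (timeoutSecondsFor); only DEFAULT_EXECUTEINVR_TIMEOUT and the 3-seconds-per-command rule come from the report itself:

```java
// Sketch of the rule: 3 seconds per command, floored at the 120-second
// default. The bug was that callers which failed to pass the computed
// value fell back to the 120s default even for large command batches.
public class VrTimeoutSketch {
    static final int DEFAULT_EXECUTEINVR_TIMEOUT = 120; // seconds

    static int timeoutSecondsFor(int commandCount) {
        return Math.max(3 * commandCount, DEFAULT_EXECUTEINVR_TIMEOUT);
    }

    public static void main(String[] args) {
        System.out.println(timeoutSecondsFor(10));  // 120 (floor applies)
        System.out.println(timeoutSecondsFor(100)); // 300 (3s per command)
    }
}
```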



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605051#comment-15605051
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to memory leak was in VmwareContextPool where a registry is held 
> (arraylist) that grows indefinitely. The list itself is not used anywhere or 
> consumed. There exists a hashmap (pool) that returns a list of contexts for 
> existing poolkey (address/username) that is used instead. The fix would be to 
> get rid of the registry and limit the hashmap context list length for any 
> poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605049#comment-15605049
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


GitHub user rhtyd opened a pull request:

https://github.com/apache/cloudstack/pull/1729

CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, it was found that the largest 
contributor
to memory leak was in VmwareContextPool where a registry is held (arraylist)
that grows indefinitely. The list itself is not used anywhere or consumed. 
There
exists a hashmap (pool) that returns a list of contexts for existing poolkey
(address/username) that is used instead.

This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.

@blueorangutan package

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack vmware-memleak-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1729.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1729


commit dfabcc8a5eb0f0b78003c2f849f32e899a74ad48
Author: Rohit Yadav 
Date:   2016-10-25T09:50:33Z

CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, it was found that the largest 
contributor
to memory leak was in VmwareContextPool where a registry is held (arraylist)
that grows indefinitely. The list itself is not used anywhere or consumed. 
There
exists a hashmap (pool) that returns a list of contexts for existing poolkey
(address/username) that is used instead.

This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.

Signed-off-by: Rohit Yadav 




> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to memory leak was in VmwareContextPool where a registry is held 
> (arraylist) that grows indefinitely. The list itself is not used anywhere or 
> consumed. There exists a hashmap (pool) that returns a list of contexts for 
> existing poolkey (address/username) that is used instead. The fix would be to 
> get rid of the registry and limit the hashmap context list length for any 
> poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-10-25 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-9564:
---

 Summary: Fix memory leak in VmwareContextPool
 Key: CLOUDSTACK-9564
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Rohit Yadav
Assignee: Rohit Yadav


In a recent management server crash, it was found that the largest contributor 
to memory leak was in VmwareContextPool where a registry is held (arraylist) 
that grows indefinitely. The list itself is not used anywhere or consumed. 
There exists a hashmap (pool) that returns a list of contexts for existing 
poolkey (address/username) that is used instead. The fix would be to get rid of 
the registry and limit the hashmap context list length for any poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-9563) ExtractTemplate returns malformed URL after migrating NFS to s3

2016-10-25 Thread sudharma jain (JIRA)
sudharma jain created CLOUDSTACK-9563:
-

 Summary: ExtractTemplate returns malformed URL after migrating NFS 
to s3
 Key: CLOUDSTACK-9563
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9563
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: sudharma jain






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-9319) Timeout is not passed to virtual router operations consistently

2016-10-25 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-9319:

Assignee: Aaron Brady

> Timeout is not passed to virtual router operations consistently
> ---
>
> Key: CLOUDSTACK-9319
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9319
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
> Environment: KVM + Ceph cloud, Ubuntu hosts.
>Reporter: Aaron Brady
>Assignee: Aaron Brady
>Priority: Trivial
> Fix For: 4.10.0.0
>
>
> The timeout parameter is not passed down to `applyConfigToVR` inside 
> `VirtualRoutingResource` in all cases.
> This timeout is worked out as 3 seconds per command or 120 seconds (whichever 
> is larger), but because it's not passed to the first invocation, the default 
> (120 seconds, DEFAULT_EXECUTEINVR_TIMEOUT) is used.
> In a recent upgrade of our Virtual Routers, the timeout was being hit and 
> increasing `router.aggregation.command.each.timeout` had no effect. I built a 
> custom 4.8 agent with the timeout increased to allow the upgrade to continue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-9319) Timeout is not passed to virtual router operations consistently

2016-10-25 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-9319.
-
Resolution: Fixed

> Timeout is not passed to virtual router operations consistently
> ---
>
> Key: CLOUDSTACK-9319
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9319
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
> Environment: KVM + Ceph cloud, Ubuntu hosts.
>Reporter: Aaron Brady
>Priority: Trivial
> Fix For: 4.10.0.0
>
>
> The timeout parameter is not passed down to `applyConfigToVR` inside 
> `VirtualRoutingResource` in all cases.
> This timeout is worked out as 3 seconds per command or 120 seconds (whichever 
> is larger), but because it's not passed to the first invocation, the default 
> (120 seconds, DEFAULT_EXECUTEINVR_TIMEOUT) is used.
> In a recent upgrade of our Virtual Routers, the timeout was being hit and 
> increasing `router.aggregation.command.each.timeout` had no effect. I built a 
> custom 4.8 agent with the timeout increased to allow the upgrade to continue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-9319) Timeout is not passed to virtual router operations consistently

2016-10-25 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-9319:

Fix Version/s: 4.10.0.0

> Timeout is not passed to virtual router operations consistently
> ---
>
> Key: CLOUDSTACK-9319
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9319
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
> Environment: KVM + Ceph cloud, Ubuntu hosts.
>Reporter: Aaron Brady
>Priority: Trivial
> Fix For: 4.10.0.0
>
>
> The timeout parameter is not passed down to `applyConfigToVR` inside 
> `VirtualRoutingResource` in all cases.
> This timeout is worked out as 3 seconds per command or 120 seconds (whichever 
> is larger), but because it's not passed to the first invocation, the default 
> (120 seconds, DEFAULT_EXECUTEINVR_TIMEOUT) is used.
> In a recent upgrade of our Virtual Routers, the timeout was being hit and 
> increasing `router.aggregation.command.each.timeout` had no effect. I built a 
> custom 4.8 agent with the timeout increased to allow the upgrade to continue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9319) Timeout is not passed to virtual router operations consistently

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604935#comment-15604935
 ] 

ASF subversion and git services commented on CLOUDSTACK-9319:
-

Commit 99bb50072def07769d26440b269c7668ac12ee2c in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=99bb500 ]

Merge pull request #1451 from insom/CLOUDSTACK-9319

CLOUDSTACK-9319: Use timeout when applying config to virtual router

From the [JIRA issue](https://issues.apache.org/jira/browse/CLOUDSTACK-9319):

> The timeout parameter is not passed down to `applyConfigToVR` inside 
> `VirtualRoutingResource` in all cases.
>
> This timeout is worked out as 3 seconds per command or 120 seconds (whichever 
> is larger), but because it's not passed to the first invocation, the default 
> (120 seconds, DEFAULT_EXECUTEINVR_TIMEOUT) is used.
>
> In a recent upgrade of our Virtual Routers, the timeout was being hit and 
> increasing `router.aggregation.command.each.timeout` had no effect. I built a 
> custom 4.8 agent with the timeout increased to allow the upgrade to continue.

* pr/1451:
  Remove dangerous prototype of applyConfigToVR
  Use timeout when applying config to virtual router

Signed-off-by: Rajani Karuturi 


> Timeout is not passed to virtual router operations consistently
> ---
>
> Key: CLOUDSTACK-9319
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9319
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
> Environment: KVM + Ceph cloud, Ubuntu hosts.
>Reporter: Aaron Brady
>Priority: Trivial
>
> The timeout parameter is not passed down to `applyConfigToVR` inside 
> `VirtualRoutingResource` in all cases.
> This timeout is worked out as 3 seconds per command or 120 seconds (whichever 
> is larger), but because it's not passed to the first invocation, the default 
> (120 seconds, DEFAULT_EXECUTEINVR_TIMEOUT) is used.
> In a recent upgrade of our Virtual Routers, the timeout was being hit and 
> increasing `router.aggregation.command.each.timeout` had no effect. I built a 
> custom 4.8 agent with the timeout increased to allow the upgrade to continue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9319) Timeout is not passed to virtual router operations consistently

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604933#comment-15604933
 ] 

ASF subversion and git services commented on CLOUDSTACK-9319:
-

Commit 99bb50072def07769d26440b269c7668ac12ee2c in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=99bb500 ]

Merge pull request #1451 from insom/CLOUDSTACK-9319

CLOUDSTACK-9319: Use timeout when applying config to virtual router

From the [JIRA issue](https://issues.apache.org/jira/browse/CLOUDSTACK-9319):

> The timeout parameter is not passed down to `applyConfigToVR` inside 
> `VirtualRoutingResource` in all cases.
>
> This timeout is worked out as 3 seconds per command or 120 seconds (whichever 
> is larger), but because it's not passed to the first invocation, the default 
> (120 seconds, DEFAULT_EXECUTEINVR_TIMEOUT) is used.
>
> In a recent upgrade of our Virtual Routers, the timeout was being hit and 
> increasing `router.aggregation.command.each.timeout` had no effect. I built a 
> custom 4.8 agent with the timeout increased to allow the upgrade to continue.

* pr/1451:
  Remove dangerous prototype of applyConfigToVR
  Use timeout when applying config to virtual router

Signed-off-by: Rajani Karuturi 


> Timeout is not passed to virtual router operations consistently
> ---
>
> Key: CLOUDSTACK-9319
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9319
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
> Environment: KVM + Ceph cloud, Ubuntu hosts.
>Reporter: Aaron Brady
>Priority: Trivial
>
> The timeout parameter is not passed down to `applyConfigToVR` inside 
> `VirtualRoutingResource` in all cases.
> This timeout is worked out as 3 seconds per command or 120 seconds (whichever 
> is larger), but because it's not passed to the first invocation, the default 
> (120 seconds, DEFAULT_EXECUTEINVR_TIMEOUT) is used.
> In a recent upgrade of our Virtual Routers, the timeout was being hit and 
> increasing `router.aggregation.command.each.timeout` had no effect. I built a 
> custom 4.8 agent with the timeout increased to allow the upgrade to continue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9319) Timeout is not passed to virtual router operations consistently

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604934#comment-15604934
 ] 

ASF subversion and git services commented on CLOUDSTACK-9319:
-

Commit 99bb50072def07769d26440b269c7668ac12ee2c in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=99bb500 ]

Merge pull request #1451 from insom/CLOUDSTACK-9319

CLOUDSTACK-9319: Use timeout when applying config to virtual router

From the [JIRA issue](https://issues.apache.org/jira/browse/CLOUDSTACK-9319):

> The timeout parameter is not passed down to `applyConfigToVR` inside 
> `VirtualRoutingResource` in all cases.
>
> This timeout is worked out as 3 seconds per command or 120 seconds (whichever 
> is larger), but because it's not passed to the first invocation, the default 
> (120 seconds, DEFAULT_EXECUTEINVR_TIMEOUT) is used.
>
> In a recent upgrade of our Virtual Routers, the timeout was being hit and 
> increasing `router.aggregation.command.each.timeout` had no effect. I built a 
> custom 4.8 agent with the timeout increased to allow the upgrade to continue.

* pr/1451:
  Remove dangerous prototype of applyConfigToVR
  Use timeout when applying config to virtual router

Signed-off-by: Rajani Karuturi 


> Timeout is not passed to virtual router operations consistently
> ---
>
> Key: CLOUDSTACK-9319
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9319
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
> Environment: KVM + Ceph cloud, Ubuntu hosts.
>Reporter: Aaron Brady
>Priority: Trivial
>
> The timeout parameter is not passed down to `applyConfigToVR` inside 
> `VirtualRoutingResource` in all cases.
> This timeout is worked out as 3 seconds per command or 120 seconds (whichever 
> is larger), but because it's not passed to the first invocation, the default 
> (120 seconds, DEFAULT_EXECUTEINVR_TIMEOUT) is used.
> In a recent upgrade of our Virtual Routers, the timeout was being hit and 
> increasing `router.aggregation.command.each.timeout` had no effect. I built a 
> custom 4.8 agent with the timeout increased to allow the upgrade to continue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9319) Timeout is not passed to virtual router operations consistently

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604936#comment-15604936
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9319:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1451


> Timeout is not passed to virtual router operations consistently
> ---
>
> Key: CLOUDSTACK-9319
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9319
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
> Environment: KVM + Ceph cloud, Ubuntu hosts.
>Reporter: Aaron Brady
>Priority: Trivial
>
> The timeout parameter is not passed down to `applyConfigToVR` inside 
> `VirtualRoutingResource` in all cases.
> This timeout is worked out as 3 seconds per command or 120 seconds (whichever 
> is larger), but because it's not passed to the first invocation, the default 
> (120 seconds, DEFAULT_EXECUTEINVR_TIMEOUT) is used.
> In a recent upgrade of our Virtual Routers, the timeout was being hit and 
> increasing `router.aggregation.command.each.timeout` had no effect. I built a 
> custom 4.8 agent with the timeout increased to allow the upgrade to continue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9319) Timeout is not passed to virtual router operations consistently

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604929#comment-15604929
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9319:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1451
  
Code LGTM.

BVTs are good (ISO failures are URL access issues and not related to this 
PR).

merging this now


> Timeout is not passed to virtual router operations consistently
> ---
>
> Key: CLOUDSTACK-9319
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9319
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
> Environment: KVM + Ceph cloud, Ubuntu hosts.
>Reporter: Aaron Brady
>Priority: Trivial
>
> The timeout parameter is not passed down to `applyConfigToVR` inside 
> `VirtualRoutingResource` in all cases.
> This timeout is worked out as 3 seconds per command or 120 seconds (whichever 
> is larger), but because it's not passed to the first invocation, the default 
> (120 seconds, DEFAULT_EXECUTEINVR_TIMEOUT) is used.
> In a recent upgrade of our Virtual Routers, the timeout was being hit and 
> increasing `router.aggregation.command.each.timeout` had no effect. I built a 
> custom 4.8 agent with the timeout increased to allow the upgrade to continue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9511) fix test_privategw_acl.py to handle multiple physical networks

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604907#comment-15604907
 ] 

ASF subversion and git services commented on CLOUDSTACK-9511:
-

Commit e1202a0b06d687438203af4917badef3b3618e21 in cloudstack's branch 
refs/heads/master from [~muralireddy]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=e1202a0 ]

CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network

fix to ensure only physical network with guest traffic is picked up for
creating a private network for vpc private gateway

Signed-off-by: Murali Reddy 

This closes #1724


> fix test_privategw_acl.py to handle multiple physical networks
> --
>
> Key: CLOUDSTACK-9511
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9511
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMware 5.5u3 + NFS Primary/Secondary Storage
> CentOS 7.2 + XenServer 6.5 + NFS Primary/Secondary Storage
>Reporter: Murali Reddy
>Assignee: Murali Reddy
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Smoke test test_privategw_acl.py works only if there is a single physical 
> network in the zone. If there are separate physical networks for different 
> traffic types, the test will fail, as it is hard-coded to read the first 
> physical network, which may not be the one that carries guest traffic.
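
The selection logic the fix describes can be sketched as a simple filter. The names below (PhysicalNetwork, trafficTypes) are illustrative stand-ins, not the Marvin API the Python test actually uses:

```java
import java.util.List;
import java.util.Optional;

// Sketch only: pick the physical network that carries guest traffic
// instead of blindly taking the first one in the zone.
class GuestNetworkPicker {
    record PhysicalNetwork(String name, List<String> trafficTypes) {}

    static Optional<PhysicalNetwork> pickGuestNetwork(List<PhysicalNetwork> networks) {
        return networks.stream()
                .filter(n -> n.trafficTypes().contains("Guest"))
                .findFirst();
    }

    public static void main(String[] args) {
        List<PhysicalNetwork> networks = List.of(
                new PhysicalNetwork("physnet-mgmt", List.of("Management", "Storage")),
                new PhysicalNetwork("physnet-guest", List.of("Guest", "Public")));
        // The unfixed test effectively did networks.get(0), which breaks here.
        System.out.println(pickGuestNetwork(networks)
                .map(PhysicalNetwork::name).orElse("none"));
    }
}
```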



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9511) fix test_privategw_acl.py to handle multiple physical networks

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604908#comment-15604908
 ] 

ASF subversion and git services commented on CLOUDSTACK-9511:
-

Commit 1f50c27fc8d687dda4b941002163bc5b23412109 in cloudstack's branch 
refs/heads/master from [~muralireddy]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=1f50c27 ]

CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network

fix to ensure only physical network with guest traffic is picked up for
creating a private network for vpc private gateway

Signed-off-by: Murali Reddy 

This closes #1724


> fix test_privategw_acl.py to handle multiple physical networks
> --
>
> Key: CLOUDSTACK-9511
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9511
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMware 5.5u3 + NFS Primary/Secondary Storage
> CentOS 7.2 + XenServer 6.5 + NFS Primary/Secondary Storage
>Reporter: Murali Reddy
>Assignee: Murali Reddy
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Smoke test test_privategw_acl.py works only if there is a single physical 
> network in the zone. If there are separate physical networks for different 
> traffic types, the test will fail, as it is hard-coded to read the first 
> physical network, which may not be the one that carries guest traffic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9511) fix test_privategw_acl.py to handle multiple physical networks

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604909#comment-15604909
 ] 

ASF subversion and git services commented on CLOUDSTACK-9511:
-

Commit 5a2a2f41b696bba7a43e528fcf2f1c0571d52b16 in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=5a2a2f4 ]

Merge pull request #1724 from murali-reddy/test_privategw_acl

CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network
fix to ensure only physical network with guest traffic is picked up for
creating a private network for vpc private gateway

* pr/1724:
  CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network

Signed-off-by: Rohit Yadav 


> fix test_privategw_acl.py to handle multiple physical networks
> --
>
> Key: CLOUDSTACK-9511
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9511
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMware 5.5u3 + NFS Primary/Secondary Storage
> CentOS 7.2 + XenServer 6.5 + NFS Primary/Secondary Storage
>Reporter: Murali Reddy
>Assignee: Murali Reddy
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Smoke test test_privategw_acl.py works only if there is a single physical 
> network in the zone. If there are separate physical networks for different 
> traffic types, the test will fail, as it is hard-coded to read the first 
> physical network, which may not be the one that carries guest traffic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9511) fix test_privategw_acl.py to handle multiple physical networks

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604899#comment-15604899
 ] 

ASF subversion and git services commented on CLOUDSTACK-9511:
-

Commit e1202a0b06d687438203af4917badef3b3618e21 in cloudstack's branch 
refs/heads/4.9 from [~muralireddy]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=e1202a0 ]

CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network

fix to ensure only physical network with guest traffic is picked up for
creating a private network for vpc private gateway

Signed-off-by: Murali Reddy 

This closes #1724


> fix test_privategw_acl.py to handle multiple physical networks
> --
>
> Key: CLOUDSTACK-9511
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9511
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMware 5.5u3 + NFS Primary/Secondary Storage
> CentOS 7.2 + XenServer 6.5 + NFS Primary/Secondary Storage
>Reporter: Murali Reddy
>Assignee: Murali Reddy
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Smoke test test_privategw_acl.py works only if there is a single physical 
> network in the zone. If there are separate physical networks for different 
> traffic types, the test will fail, as it is hard-coded to read the first 
> physical network, which may not be the one that carries guest traffic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9511) fix test_privategw_acl.py to handle multiple physical networks

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604898#comment-15604898
 ] 

ASF subversion and git services commented on CLOUDSTACK-9511:
-

Commit 5728ad03caf0580970f2c8226cae4440e12f4d92 in cloudstack's branch 
refs/heads/4.9 from [~muralireddy]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=5728ad0 ]

CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network

fix to ensure only physical network with guest traffic is picked up for
creating a private network for vpc private gateway


> fix test_privategw_acl.py to handle multiple physical networks
> --
>
> Key: CLOUDSTACK-9511
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9511
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMware 5.5u3 + NFS Primary/Secondary Storage
> CentOS 7.2 + XenServer 6.5 + NFS Primary/Secondary Storage
>Reporter: Murali Reddy
>Assignee: Murali Reddy
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Smoke test test_privategw_acl.py works only if there is a single physical 
> network in the zone. If there are separate physical networks for different 
> traffic types, the test will fail, as it is hard-coded to read the first 
> physical network, which may not be the one that carries guest traffic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9511) fix test_privategw_acl.py to handle multiple physical networks

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604900#comment-15604900
 ] 

ASF subversion and git services commented on CLOUDSTACK-9511:
-

Commit 5a2a2f41b696bba7a43e528fcf2f1c0571d52b16 in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=5a2a2f4 ]

Merge pull request #1724 from murali-reddy/test_privategw_acl

CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network
fix to ensure only physical network with guest traffic is picked up for
creating a private network for vpc private gateway

* pr/1724:
  CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network

Signed-off-by: Rohit Yadav 


> fix test_privategw_acl.py to handle multiple physical networks
> --
>
> Key: CLOUDSTACK-9511
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9511
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMware 5.5u3 + NFS Primary/Secondary Storage
> CentOS 7.2 + XenServer 6.5 + NFS Primary/Secondary Storage
>Reporter: Murali Reddy
>Assignee: Murali Reddy
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Smoke test test_privategw_acl.py works only if there is a single physical 
> network in the zone. If there are separate physical networks for different 
> traffic types, the test will fail, as it is hard-coded to read the first 
> physical network, which may not be the one that carries guest traffic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9511) fix test_privategw_acl.py to handle multiple physical networks

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604896#comment-15604896
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9511:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1724


> fix test_privategw_acl.py to handle multiple physical networks
> --
>
> Key: CLOUDSTACK-9511
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9511
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMware 5.5u3 + NFS Primary/Secondary Storage
> CentOS 7.2 + XenServer 6.5 + NFS Primary/Secondary Storage
>Reporter: Murali Reddy
>Assignee: Murali Reddy
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Smoke test test_privategw_acl.py works only if there is a single physical 
> network in the zone. If there are separate physical networks for different 
> traffic types, the test will fail, as it is hard-coded to read the first 
> physical network, which may not be the one that carries guest traffic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9511) fix test_privategw_acl.py to handle multiple physical networks

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604894#comment-15604894
 ] 

ASF subversion and git services commented on CLOUDSTACK-9511:
-

Commit 5a2a2f41b696bba7a43e528fcf2f1c0571d52b16 in cloudstack's branch 
refs/heads/4.8 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=5a2a2f4 ]

Merge pull request #1724 from murali-reddy/test_privategw_acl

CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network
fix to ensure only physical network with guest traffic is picked up for
creating a private network for vpc private gateway

* pr/1724:
  CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network

Signed-off-by: Rohit Yadav 


> fix test_privategw_acl.py to handle multiple physical networks
> --
>
> Key: CLOUDSTACK-9511
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9511
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMware 5.5u3 + NFS Primary/Secondary Storage
> CentOS 7.2 + XenServer 6.5 + NFS Primary/Secondary Storage
>Reporter: Murali Reddy
>Assignee: Murali Reddy
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Smoke test test_privategw_acl.py works only if there is a single physical 
> network in the zone. If there are separate physical networks for different 
> traffic types, the test will fail, as it is hard-coded to read the first 
> physical network, which may not be the one that carries guest traffic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9511) fix test_privategw_acl.py to handle multiple physical networks

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604893#comment-15604893
 ] 

ASF subversion and git services commented on CLOUDSTACK-9511:
-

Commit 5a2a2f41b696bba7a43e528fcf2f1c0571d52b16 in cloudstack's branch 
refs/heads/4.8 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=5a2a2f4 ]

Merge pull request #1724 from murali-reddy/test_privategw_acl

CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network
fix to ensure only physical network with guest traffic is picked up for
creating a private network for vpc private gateway

* pr/1724:
  CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network

Signed-off-by: Rohit Yadav 


> fix test_privategw_acl.py to handle multiple physical networks
> --
>
> Key: CLOUDSTACK-9511
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9511
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMware 5.5u3 + NFS Primary/Secondary Storage
> CentOS 7.2 + XenServer 6.5 + NFS Primary/Secondary Storage
>Reporter: Murali Reddy
>Assignee: Murali Reddy
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Smoke test test_privategw_acl.py works only if there is a single physical 
> network in the zone. If there are separate physical networks for different 
> traffic types, the test will fail, as it is hard-coded to read the first 
> physical network, which may not be the one that carries guest traffic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-9562) Linux Guest VM gets wrong default route when there are multiple NICs

2016-10-25 Thread sudharma jain (JIRA)
sudharma jain created CLOUDSTACK-9562:
-

 Summary: Linux Guest VM gets wrong default route when there are 
multiple NICs
 Key: CLOUDSTACK-9562
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9562
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: sudharma jain






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-9127) Missing PV-bootloader-args for "SUSE Linux Enterprise Server 10 SP2 and SP3"

2016-10-25 Thread sudharma jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sudharma jain closed CLOUDSTACK-9127.
-
Assignee: sudharma jain

> Missing PV-bootloader-args for "SUSE Linux Enterprise Server 10 SP2 and SP3"
> 
>
> Key: CLOUDSTACK-9127
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9127
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: sudharma jain
>Assignee: sudharma jain
>
> STOP-START of SUSE Linux VMs fails, as the PV-bootloader-args are missing 
> during the start command.
> DESCRIPTION
> ===
> Repro steps
> 1. Upload the SUSE ISO.
> 2. Create a VM with this ISO, and install it.
> 3. Detach the ISO from the VM.
> 4. Reboot the VM: this will work fine, as the PV-bootloader-args are 
> not missing during a reboot.
> 5. Stop the VM from CCP (the VM will get destroyed in XenCenter).
> 6. Start the same VM from CCP; it will try to start but will fail with the 
> error below.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-9127) Missing PV-bootloader-args for "SUSE Linux Enterprise Server 10 SP2 and SP3"

2016-10-25 Thread sudharma jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sudharma jain resolved CLOUDSTACK-9127.
---
Resolution: Fixed

> Missing PV-bootloader-args for "SUSE Linux Enterprise Server 10 SP2 and SP3"
> 
>
> Key: CLOUDSTACK-9127
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9127
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: sudharma jain
>
> STOP-START of SUSE Linux VMs fails, as the PV-bootloader-args are missing 
> during the start command.
> DESCRIPTION
> ===
> Repro steps
> 1. Upload the SUSE ISO.
> 2. Create a VM with this ISO, and install it.
> 3. Detach the ISO from the VM.
> 4. Reboot the VM: this will work fine, as the PV-bootloader-args are 
> not missing during a reboot.
> 5. Stop the VM from CCP (the VM will get destroyed in XenCenter).
> 6. Start the same VM from CCP; it will try to start but will fail with the 
> error below.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9560) Root volume of deleted VM left unremoved

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604801#comment-15604801
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9560:


Github user yvsubhash commented on the issue:

https://github.com/apache/cloudstack/pull/1726
  
@koushik-das Added the null check


> Root volume of deleted VM left unremoved
> 
>
> Key: CLOUDSTACK-9560
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9560
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Affects Versions: 4.8.0
> Environment: XenServer
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> In the following scenario the root volume is left unremoved
> Steps to reproduce the issue
> 1. Create a VM.
> 2. Stop this VM.
> 3. On the volume page of the VM, click the 'Download Volume' icon.
> 4. Wait for the popup screen to display and cancel out, with or without 
> clicking the download link.
> 5. Destroy the VM.
> Even after the corresponding VM is deleted and expunged, the root volume is 
> left unremoved, stuck in the 'Expunging' state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9560) Root volume of deleted VM left unremoved

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604768#comment-15604768
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9560:


Github user koushik-das commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1726#discussion_r84862649
  
--- Diff: server/src/com/cloud/storage/StorageManagerImpl.java ---
@@ -2199,15 +2199,20 @@ public void cleanupDownloadUrls(){
 if(downloadUrlCurrentAgeInSecs < 
_downloadUrlExpirationInterval){  // URL hasnt expired yet
 continue;
 }
-
-s_logger.debug("Removing download url " + 
volumeOnImageStore.getExtractUrl() + " for volume id " + 
volumeOnImageStore.getVolumeId());
+long volumeId = volumeOnImageStore.getVolumeId();
+s_logger.debug("Removing download url " + 
volumeOnImageStore.getExtractUrl() + " for volume id " + volumeId);
 
 // Remove it from image store
 ImageStoreEntity secStore = (ImageStoreEntity) 
_dataStoreMgr.getDataStore(volumeOnImageStore.getDataStoreId(), 
DataStoreRole.Image);
 
secStore.deleteExtractUrl(volumeOnImageStore.getInstallPath(), 
volumeOnImageStore.getExtractUrl(), Upload.Type.VOLUME);
 
 // Now expunge it from DB since this entry was created 
only for download purpose
 _volumeStoreDao.expunge(volumeOnImageStore.getId());
+Volume volume = _volumeDao.findById(volumeId);
+if (volume.getState() == Volume.State.Expunged)
--- End diff --

@yvsubhash It is not about the DB relationship; is it possible that some other 
thread went ahead and deleted the volume entry after the store_ref entries 
were queried?
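
A minimal sketch of the guard being discussed, under the reviewer's assumption
that another thread can expunge the volume row between the store_ref query and
this lookup (so findById may return null). Names mirror the diff, but the
types are simplified stand-ins, not CloudStack's actual classes.

    // Simplified stand-ins; illustrative only.
    enum VolumeState { READY, EXPUNGING, EXPUNGED }

    record VolumeSketch(long id, VolumeState state) {}

    public class CleanupSketch {

        // True when the left-over volume should be destroyed on the image store.
        // Null-safe: the row may already be gone if another thread expunged it,
        // in which case we treat it like an already-expunged volume (assumption).
        static boolean shouldDestroyOnImageStore(VolumeSketch volume) {
            return volume == null || volume.state() == VolumeState.EXPUNGED;
        }

        public static void main(String[] args) {
            System.out.println(shouldDestroyOnImageStore(null));                                    // true
            System.out.println(shouldDestroyOnImageStore(new VolumeSketch(42, VolumeState.READY))); // false
        }
    }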


> Root volume of deleted VM left unremoved
> 
>
> Key: CLOUDSTACK-9560
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9560
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Affects Versions: 4.8.0
> Environment: XenServer
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> In the following scenario the root volume is left unremoved
> Steps to reproduce the issue
> 1. Create a VM.
> 2. Stop this VM.
> 3. On the volume page of the VM, click the 'Download Volume' icon.
> 4. Wait for the popup screen to display and cancel out, with or without 
> clicking the download link.
> 5. Destroy the VM.
> Even after the corresponding VM is deleted and expunged, the root volume is 
> left unremoved, stuck in the 'Expunging' state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9560) Root volume of deleted VM left unremoved

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604730#comment-15604730
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9560:


Github user yvsubhash commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1726#discussion_r84859378
  
--- Diff: server/src/com/cloud/storage/StorageManagerImpl.java ---
@@ -2199,15 +2199,20 @@ public void cleanupDownloadUrls(){
 if(downloadUrlCurrentAgeInSecs < 
_downloadUrlExpirationInterval){  // URL hasnt expired yet
 continue;
 }
-
-s_logger.debug("Removing download url " + 
volumeOnImageStore.getExtractUrl() + " for volume id " + 
volumeOnImageStore.getVolumeId());
+long volumeId = volumeOnImageStore.getVolumeId();
+s_logger.debug("Removing download url " + 
volumeOnImageStore.getExtractUrl() + " for volume id " + volumeId);
 
 // Remove it from image store
 ImageStoreEntity secStore = (ImageStoreEntity) 
_dataStoreMgr.getDataStore(volumeOnImageStore.getDataStoreId(), 
DataStoreRole.Image);
 
secStore.deleteExtractUrl(volumeOnImageStore.getInstallPath(), 
volumeOnImageStore.getExtractUrl(), Upload.Type.VOLUME);
 
 // Now expunge it from DB since this entry was created 
only for download purpose
 _volumeStoreDao.expunge(volumeOnImageStore.getId());
+Volume volume = _volumeDao.findById(volumeId);
+if (volume.getState() == Volume.State.Expunged)
--- End diff --

@koushik-das Having a null volume is not a possible scenario, as there is a 
DB-level relationship between volume and volume_store_ref and the id is picked 
up from volume_store_ref. Would you still recommend having a null check here?


> Root volume of deleted VM left unremoved
> 
>
> Key: CLOUDSTACK-9560
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9560
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Affects Versions: 4.8.0
> Environment: XenServer
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> In the following scenario the root volume is left unremoved
> Steps to reproduce the issue
> 1. Create a VM.
> 2. Stop this VM.
> 3. On the volume page of the VM, click the 'Download Volume' icon.
> 4. Wait for the popup screen to display and cancel out, with or without 
> clicking the download link.
> 5. Destroy the VM.
> Even after the corresponding VM is deleted and expunged, the root volume is 
> left unremoved, stuck in the 'Expunging' state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9511) fix test_privategw_acl.py to handle multiple physical networks

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604607#comment-15604607
 ] 

ASF subversion and git services commented on CLOUDSTACK-9511:
-

Commit 1f50c27fc8d687dda4b941002163bc5b23412109 in cloudstack's branch 
refs/heads/4.9 from [~muralireddy]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=1f50c27 ]

CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network

fix to ensure only physical network with guest traffic is picked up for
creating a private network for vpc private gateway

Signed-off-by: Murali Reddy 

This closes #1724


> fix test_privategw_acl.py to handle multiple physical networks
> --
>
> Key: CLOUDSTACK-9511
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9511
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMware 5.5u3 + NFS Primary/Secondary Storage
> CentOS 7.2 + XenServer 6.5 + NFS Primary/Secondary Storage
>Reporter: Murali Reddy
>Assignee: Murali Reddy
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Smoke test test_privategw_acl.py works only if there is a single physical 
> network in the zone. If there are separate physical networks for different 
> traffic types, the test will fail, as it is hard-coded to read the first 
> physical network, which may not be the one that carries guest traffic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9511) fix test_privategw_acl.py to handle multiple physical networks

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604597#comment-15604597
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9511:


Github user murali-reddy closed the pull request at:

https://github.com/apache/cloudstack/pull/1724


> fix test_privategw_acl.py to handle multiple physical networks
> --
>
> Key: CLOUDSTACK-9511
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9511
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMware 5.5u3 + NFS Primary/Secondary Storage
> CentOS 7.2 + XenServer 6.5 + NFS Primary/Secondary Storage
>Reporter: Murali Reddy
>Assignee: Murali Reddy
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Smoke test test_privategw_acl.py works only if there is a single physical 
> network in the zone. If there are separate physical networks for different 
> traffic types, the test will fail, as it is hard-coded to read the first 
> physical network, which may not be the one that carries guest traffic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9511) fix test_privategw_acl.py to handle multiple physical networks

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604598#comment-15604598
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9511:


GitHub user murali-reddy reopened a pull request:

https://github.com/apache/cloudstack/pull/1724

CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical 
network


fix to ensure only physical network with guest traffic is picked up for
creating a private network for vpc private gateway

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/murali-reddy/cloudstack test_privategw_acl

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1724.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1724


commit 5728ad03caf0580970f2c8226cae4440e12f4d92
Author: Murali Reddy 
Date:   2016-10-24T09:45:35Z

CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical 
network

fix to ensure only physical network with guest traffic is picked up for
creating a private network for vpc private gateway




> fix test_privategw_acl.py to handle multiple physical networks
> --
>
> Key: CLOUDSTACK-9511
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9511
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMware 5.5u3 + NFS Primary/Secondary Storage
> CentOS 7.2 + XenServer 6.5 + NFS Primary/Secondary Storage
>Reporter: Murali Reddy
>Assignee: Murali Reddy
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Smoke test test_privategw_acl.py works only if there is a single physical 
> network in the zone. If there are separate physical networks for different 
> traffic types, the test will fail, as it is hard-coded to read the first 
> physical network, which may not be the one that carries guest traffic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9511) fix test_privategw_acl.py to handle multiple physical networks

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604591#comment-15604591
 ] 

ASF subversion and git services commented on CLOUDSTACK-9511:
-

Commit e1202a0b06d687438203af4917badef3b3618e21 in cloudstack's branch 
refs/heads/4.8 from [~muralireddy]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=e1202a0 ]

CLOUDSTACK-9511: fix test_privategw_acl.py to handle multiple physical network

fix to ensure only physical network with guest traffic is picked up for
creating a private network for vpc private gateway

Signed-off-by: Murali Reddy 

This closes #1724


> fix test_privategw_acl.py to handle multiple physical networks
> --
>
> Key: CLOUDSTACK-9511
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9511
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMware 5.5u3 + NFS Primary/Secondary Storage
> CentOS 7.2 + XenServer 6.5 + NFS Primary/Secondary Storage
>Reporter: Murali Reddy
>Assignee: Murali Reddy
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Smoke test test_privategw_acl.py works only if there is a single physical 
> network in the zone. If there are separate physical networks for different 
> traffic types, the test will fail, as it is hard-coded to read the first 
> physical network, which may not be the one that carries guest traffic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604463#comment-15604463
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user koushik-das commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1727#discussion_r84839712
  
--- Diff: 
engine/storage/snapshot/src/org/apache/cloudstack/storage/vmsnapshot/DefaultVMSnapshotStrategy.java
 ---
@@ -128,7 +128,7 @@ public VMSnapshot takeVMSnapshot(VMSnapshot vmSnapshot) 
{
 if (options != null)
 quiescevm = options.needQuiesceVM();
 VMSnapshotTO target =
-new VMSnapshotTO(vmSnapshot.getId(), vmSnapshot.getName(), 
vmSnapshot.getType(), null, vmSnapshot.getDescription(), false, current, 
quiescevm);
+new VMSnapshotTO(vmSnapshot.getId(), vmSnapshot.getName(), 
vmSnapshot.getType(), null, vmSnapshot.getDescription(), false, current, 
quiescevm, userVm.getServiceOfferingId());
--- End diff --

Storing the service offering in the DB along with the vm snapshot entry 
should be sufficient; I don't see any reason to put it in the TO object.


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering of VM instances that 
> have VM snapshots; the snapshots must be removed before the service offering 
> can be changed.
> h3. Goal
> Extend the current behaviour by supporting service offering changes for VMs 
> that have VM snapshots. In that case, previously taken snapshots (if reverted 
> to) should restore the previous service offering, while future snapshots use 
> the newest one.
> h3. Proposed solution:
> 1. Add a {{service_offering_id}} column to the {{vm_snapshots}} table: this 
> way a snapshot can be reverted to its original state even though the service 
> offering of the VM instance may change.
> NOTE: Existing vm snapshots are populated by the update script via {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New VM snapshots will record the instance's service offering id as 
> {{service_offering_id}}
> 3. Reverting to a VM snapshot should use the snapshot's 
> {{service_offering_id}} value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> After the last step, the VM is expected to have service offering A again.
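
A hedged sketch of steps 2 and 3 above (recording the offering at snapshot
time and restoring it on revert); the field and method names are illustrative
stand-ins, not VMSnapshotManagerImpl's actual API.

    // Illustrative stand-ins only; not CloudStack's actual classes.
    class UserVmSketch {
        long serviceOfferingId;
        UserVmSketch(long serviceOfferingId) { this.serviceOfferingId = serviceOfferingId; }
    }

    class VmSnapshotSketch {
        final long serviceOfferingId; // recorded at snapshot time (the proposed new column)
        VmSnapshotSketch(UserVmSketch vm) { this.serviceOfferingId = vm.serviceOfferingId; }
    }

    public class RevertSketch {

        // After a revert, put the VM back on the offering recorded with the snapshot.
        static void updateUserVmServiceOffering(UserVmSketch vm, VmSnapshotSketch snap) {
            if (vm.serviceOfferingId != snap.serviceOfferingId) {
                vm.serviceOfferingId = snap.serviceOfferingId;
            }
        }

        public static void main(String[] args) {
            UserVmSketch vm = new UserVmSketch(1);            // offering A
            VmSnapshotSketch snap = new VmSnapshotSketch(vm); // snap1 records A
            vm.serviceOfferingId = 2;                         // change offering to B
            updateUserVmServiceOffering(vm, snap);            // revert to snap1
            System.out.println(vm.serviceOfferingId);         // 1 -> offering A restored
        }
    }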



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604462#comment-15604462
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user koushik-das commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1727#discussion_r84841681
  
--- Diff: server/src/com/cloud/vm/snapshot/VMSnapshotManagerImpl.java ---
@@ -610,7 +615,11 @@ public UserVm revertToSnapshot(Long vmSnapshotId) 
throws InsufficientCapacityExc
 VmWorkJobVO placeHolder = null;
 placeHolder = createPlaceHolderWork(vmSnapshotVo.getVmId());
 try {
-return orchestrateRevertToVMSnapshot(vmSnapshotId);
+UserVm revertedVM = 
orchestrateRevertToVMSnapshot(vmSnapshotId);
+
+updateUserVmServiceOffering(revertedVM, vmSnapshotVo);
--- End diff --

Better to put the updateUserVmServiceOffering() call inside 
orchestrateRevertToVMSnapshot(), as the orchestrate method is called from 
multiple places.


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering of VM instances that 
> have VM snapshots; the snapshots must be removed before the service offering 
> can be changed.
> h3. Goal
> Extend the current behaviour by supporting service offering changes for VMs 
> that have VM snapshots. In that case, previously taken snapshots (if reverted 
> to) should restore the previous service offering, while future snapshots use 
> the newest one.
> h3. Proposed solution:
> 1. Add a {{service_offering_id}} column to the {{vm_snapshots}} table: this 
> way a snapshot can be reverted to its original state even though the service 
> offering of the VM instance may change.
> NOTE: Existing vm snapshots are populated by the update script via {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New VM snapshots will record the instance's service offering id as 
> {{service_offering_id}}
> 3. Reverting to a VM snapshot should use the snapshot's 
> {{service_offering_id}} value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> After the last step, the VM is expected to have service offering A again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604459#comment-15604459
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user koushik-das commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1727#discussion_r84841375
  
--- Diff: server/src/com/cloud/vm/snapshot/VMSnapshotManagerImpl.java ---
@@ -639,10 +648,57 @@ else if (jobResult instanceof Throwable)
 throw new RuntimeException("Unexpected exception", 
(Throwable)jobResult);
 }
 
+updateUserVmServiceOffering(userVm, vmSnapshotVo);
--- End diff --

No need to update the service offering here; the else part just creates an 
entry in the DB for the revert-to-snapshot job.


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering of VM instances that 
> have VM snapshots; the snapshots must be removed before the service offering 
> can be changed.
> h3. Goal
> Extend the current behaviour by supporting service offering changes for VMs 
> that have VM snapshots. In that case, previously taken snapshots (if reverted 
> to) should restore the previous service offering, while future snapshots use 
> the newest one.
> h3. Proposed solution:
> 1. Add a {{service_offering_id}} column to the {{vm_snapshots}} table: this 
> way a snapshot can be reverted to its original state even though the service 
> offering of the VM instance may change.
> NOTE: Existing vm snapshots are populated by the update script via {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New VM snapshots will record the instance's service offering id as 
> {{service_offering_id}}
> 3. Reverting to a VM snapshot should use the snapshot's 
> {{service_offering_id}} value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> After the last step, the VM is expected to have service offering A again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604461#comment-15604461
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user koushik-das commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1727#discussion_r84839559
  
--- Diff: engine/schema/src/com/cloud/vm/snapshot/VMSnapshotVO.java ---
@@ -248,4 +252,9 @@ public void setRemoved(Date removed) {
 public Class getEntityType() {
 return VMSnapshot.class;
 }
+
+public long getServiceOfferingId() {
--- End diff --

Please put the getter in the VMSnapshot interface and provide the 
implementation here.


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering of VM instances that 
> have VM snapshots; the snapshots must be removed before the service offering 
> can be changed.
> h3. Goal
> Extend the current behaviour by supporting service offering changes for VMs 
> that have VM snapshots. In that case, previously taken snapshots (if reverted 
> to) should restore the previous service offering, while future snapshots use 
> the newest one.
> h3. Proposed solution:
> 1. Add a {{service_offering_id}} column to the {{vm_snapshots}} table: this 
> way a snapshot can be reverted to its original state even though the service 
> offering of the VM instance may change.
> NOTE: Existing vm snapshots are populated by the update script via {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New VM snapshots will record the instance's service offering id as 
> {{service_offering_id}}
> 3. Reverting to a VM snapshot should use the snapshot's 
> {{service_offering_id}} value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> After the last step, the VM is expected to have service offering A again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604458#comment-15604458
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user koushik-das commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1727#discussion_r84839433
  
--- Diff: core/src/com/cloud/agent/api/VMSnapshotTO.java ---
@@ -123,4 +125,12 @@ public boolean getQuiescevm() {
 public void setQuiescevm(boolean quiescevm) {
 this.quiescevm = quiescevm;
 }
+
+public Long getServiceOfferingId() {
--- End diff --

See the previous comment; please remove this, as it is not needed.


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering of VM instances that 
> have VM snapshots; the snapshots must be removed before the service offering 
> can be changed.
> h3. Goal
> Extend the current behaviour by supporting service offering changes for VMs 
> that have VM snapshots. In that case, previously taken snapshots (if reverted 
> to) should restore the previous service offering, while future snapshots use 
> the newest one.
> h3. Proposed solution:
> 1. Add a {{service_offering_id}} column to the {{vm_snapshots}} table: this 
> way a snapshot can be reverted to its original state even though the service 
> offering of the VM instance may change.
> NOTE: Existing vm snapshots are populated by the update script via {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New VM snapshots will record the instance's service offering id as 
> {{service_offering_id}}
> 3. Reverting to a VM snapshot should use the snapshot's 
> {{service_offering_id}} value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> After the last step, the VM is expected to have service offering A again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604460#comment-15604460
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user koushik-das commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1727#discussion_r84839190
  
--- Diff: core/src/com/cloud/agent/api/VMSnapshotTO.java ---
@@ -35,6 +35,7 @@
 private VMSnapshotTO parent;
 private List volumes;
 private boolean quiescevm;
+private Long serviceOfferingId;
--- End diff --

I don't see any use for the service offering stored here; is it needed? *TO 
objects are used to pass data from the orchestration layer to the agent layer.


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering of VM instances that 
> have VM snapshots; the snapshots must be removed before the service offering 
> can be changed.
> h3. Goal
> Extend the current behaviour by supporting service offering changes for VMs 
> that have VM snapshots. In that case, previously taken snapshots (if reverted 
> to) should restore the previous service offering, while future snapshots use 
> the newest one.
> h3. Proposed solution:
> 1. Add a {{service_offering_id}} column to the {{vm_snapshots}} table: this 
> way a snapshot can be reverted to its original state even though the service 
> offering of the VM instance may change.
> NOTE: Existing vm snapshots are populated by the update script via {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New VM snapshots will record the instance's service offering id as 
> {{service_offering_id}}
> 3. Reverting to a VM snapshot should use the snapshot's 
> {{service_offering_id}} value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> After the last step, the VM is expected to have service offering A again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9438) Fix for CLOUDSTACK-9252 - Make NFS version changeable in UI

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604423#comment-15604423
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9438:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1615
  
@blueorangutan @rhtyd any update on the tests? Is the job running?


> Fix for CLOUDSTACK-9252 - Make NFS version changeable in UI
> ---
>
> Key: CLOUDSTACK-9438
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9438
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Introduction
> From [9252|https://issues.apache.org/jira/browse/CLOUDSTACK-9252] it was 
> possible to configure NFS version for secondary storage mount. 
> However, changing NFS version requires inserting an new detail on 
> {{image_store_details}} table, with {{name = 'nfs.version'}} and {{value = 
> X}} where X is desired NFS version, and then restarting management server for 
> changes to take effect.
> Our improvement aims to make NFS version changeable from UI, instead of 
> previously described workflow.
> h3. Proposed solution
> Basically, NFS version is defined as an image store ConfigKey, this implied:
> * Adding a new Config scope: *ImageStore*
> * Make {{ImageStoreDetailsDao}} class to extend {{ResourceDetailsDaoBase}} 
> and {{ImageStoreDetailVO}} implement {{ResourceDetail}}
> * Insert {{'display'}} column on {{image_store_details}} table
> * Extending {{ListCfgsCmd}} and {{UpdateCfgCmd}} to support *ImageStore* 
> scope, which implied:
> ** Injecting {{ImageStoreDetailsDao}} and {{ImageStoreDao}} on 
> {{ConfigurationManagerImpl}} class, on {{cloud-server}} module.
> h4. Important
> It is important to mention that {{ImageStoreDaoImpl}} and 
> {{ImageStoreDetailsDaoImpl}} classes were moved from {{cloud-engine-storage}} 
> to {{cloud-engine-schema}} module in order to Spring find those beans to 
> inject on {{ConfigurationManagerImpl}} in {{cloud-server}} module.
> We had this maven dependencies between modules:
> * {{cloud-server --> cloud-engine-schema}}
> * {{cloud-engine-storage --> cloud-secondary-storage --> cloud-server}}
> As {{ImageStoreDaoImpl}} and {{ImageStoreDetailsDaoImpl}} were defined in 
> {{cloud-engine-storage}}, and they needed in {{cloud-server}} module, to be 
> injected on {{ConfigurationManagerImpl}}, if we added dependency from 
> {{cloud-server}} to {{cloud-engine-storage}} we would introduce a dependency 
> cycle. To avoid this cycle, we moved those classes to {{cloud-engine-schema}} 
> module



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9560) Root volume of deleted VM left unremoved

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604369#comment-15604369
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9560:


Github user koushik-das commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1726#discussion_r84837338
  
--- Diff: server/src/com/cloud/storage/StorageManagerImpl.java ---
@@ -2199,15 +2199,20 @@ public void cleanupDownloadUrls(){
 if(downloadUrlCurrentAgeInSecs < 
_downloadUrlExpirationInterval){  // URL hasnt expired yet
 continue;
 }
-
-s_logger.debug("Removing download url " + 
volumeOnImageStore.getExtractUrl() + " for volume id " + 
volumeOnImageStore.getVolumeId());
+long volumeId = volumeOnImageStore.getVolumeId();
+s_logger.debug("Removing download url " + 
volumeOnImageStore.getExtractUrl() + " for volume id " + volumeId);
 
 // Remove it from image store
 ImageStoreEntity secStore = (ImageStoreEntity) 
_dataStoreMgr.getDataStore(volumeOnImageStore.getDataStoreId(), 
DataStoreRole.Image);
 
secStore.deleteExtractUrl(volumeOnImageStore.getInstallPath(), 
volumeOnImageStore.getExtractUrl(), Upload.Type.VOLUME);
 
 // Now expunge it from DB since this entry was created 
only for download purpose
 _volumeStoreDao.expunge(volumeOnImageStore.getId());
+Volume volume = _volumeDao.findById(volumeId);
+if (volume.getState() == Volume.State.Expunged)
--- End diff --

Null check on volume?


> Root volume of deleted VM left unremoved
> 
>
> Key: CLOUDSTACK-9560
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9560
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Affects Versions: 4.8.0
> Environment: XenServer
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> In the following scenario the root volume is left unremoved
> Steps to reproduce the issue
> 1. Create a VM.
> 2. Stop this VM.
> 3. On the volume page of the VM, click the 'Download Volume' icon.
> 4. Wait for the popup screen to display and cancel out, with or without 
> clicking the download link.
> 5. Destroy the VM.
> Even after the corresponding VM is deleted and expunged, the root volume is 
> left unremoved, stuck in the 'Expunging' state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9551) Pull KVM agent's tmp folder usage within its own folder structure

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604365#comment-15604365
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9551:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1728
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests


> Pull KVM agent's tmp folder usage within its own folder structure
> -
>
> Key: CLOUDSTACK-9551
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9551
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.1, 4.7.1, 4.9.1.0
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>
> We ran into an issue today where the sysadmins wanted to put /tmp on its own 
> mount and set the "noexec" mount flag as a security measure. This is 
> incompatible with the CloudStack KVM agent, because it stores JNA tmp files 
> here and Java is unable to map into these objects. To get around this we 
> moved the agent's temp dir to live with the agent files, which seems like a 
> reasonable thing to do regardless of whether you're trying to secure /tmp.
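
One plausible mitigation, shown as a sketch: point both the JVM's
java.io.tmpdir and JNA's jna.tmpdir at an agent-owned directory, so a noexec
/tmp no longer blocks native library extraction. Both are standard system
properties; the path below is illustrative and not necessarily what this
change uses.

    // Sketch: verify where the JVM and JNA will write temp files. Start the
    // agent's JVM with flags such as (illustrative path):
    //   -Djava.io.tmpdir=/var/lib/cloudstack/agent/tmp
    //   -Djna.tmpdir=/var/lib/cloudstack/agent/tmp
    public class TmpDirSketch {
        public static void main(String[] args) {
            System.out.println("java.io.tmpdir = " + System.getProperty("java.io.tmpdir"));
            System.out.println("jna.tmpdir     = "
                    + System.getProperty("jna.tmpdir", "<unset; JNA falls back to java.io.tmpdir>"));
        }
    }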



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9551) Pull KVM agent's tmp folder usage within its own folder structure

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604364#comment-15604364
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9551:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1728
  
@blueorangutan test


> Pull KVM agent's tmp folder usage within its own folder structure
> -
>
> Key: CLOUDSTACK-9551
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9551
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.1, 4.7.1, 4.9.1.0
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>
> We ran into an issue today where the sysadmins wanted to put /tmp on its own 
> mount and set the "noexec" mount flag as a security measure. This is 
> incompatible with the CloudStack KVM agent, because it stores JNA tmp files 
> here and Java is unable to map into these objects. To get around this we 
> moved the agent's temp dir to live with the agent files, which seems like a 
> reasonable thing to do regardless of whether you're trying to secure /tmp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9551) Pull KVM agent's tmp folder usage within its own folder structure

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604363#comment-15604363
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9551:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1728
  
@blueorangutan package


> Pull KVM agent's tmp folder usage within its own folder structure
> -
>
> Key: CLOUDSTACK-9551
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9551
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.1, 4.7.1, 4.9.1.0
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>
> We ran into an issue today where the sysadmins wanted to put /tmp on its own 
> mount and set the "noexec" mount flag as a security measure. This is 
> incompatible with the CloudStack KVM agent, because it stores JNA tmp files 
> here and Java is unable to map into these objects. To get around this we 
> moved the agent's temp dir to live with the agent files, which seems like a 
> reasonable thing to do regardless of whether you're trying to secure /tmp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9551) Pull KVM agent's tmp folder usage within its own folder structure

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604338#comment-15604338
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9551:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1728
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-92


> Pull KVM agent's tmp folder usage within its own folder structure
> -
>
> Key: CLOUDSTACK-9551
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9551
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.1, 4.7.1, 4.9.1.0
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>
> We ran into an issue today where the sysadmins wanted to put /tmp on its own 
> mount and set the "noexec" mount flag as a security measure. This is 
> incompatible with the CloudStack KVM agent, because it stores JNA tmp files 
> here and Java is unable to map into these objects. To get around this we 
> moved the agent's temp dir to live with the agent files, which seems like a 
> reasonable thing to do regardless of whether you're trying to secure /tmp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)