[GitHub] cloudstack issue #1753: [4.9] Latest health test run

2016-12-15 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1753
  
Trillian test result (tid-684)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 77996 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1753-t684-vmware-55u3.zip
Test completed. 35 look ok, 14 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_VPC_default_routes | `Failure` | 963.08 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | `Failure` | 1023.96 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | `Failure` | 506.43 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | `Failure` | 364.98 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 531.84 | test_vpc_redundant.py
test_router_dns_guestipquery | `Failure` | 277.49 | test_router_dns.py
test_router_dhcphosts | `Failure` | 193.81 | test_router_dhcphosts.py
test_04_rvpc_privategw_static_routes | `Failure` | 1729.15 | test_privategw_acl.py
test_03_vpc_privategw_restart_vpc_cleanup | `Failure` | 1375.98 | test_privategw_acl.py
test_02_vpc_privategw_static_routes | `Failure` | 718.47 | test_privategw_acl.py
test_isolate_network_password_server | `Failure` | 188.76 | test_password_server.py
test_network_rules_acquired_public_ip_3_Load_Balancer_Rule | `Failure` | 836.81 | test_network.py
test_network_rules_acquired_public_ip_2_nat_rule | `Failure` | 679.70 | test_network.py
test_network_rules_acquired_public_ip_1_static_nat_rule | `Failure` | 676.24 | test_network.py
test_02_port_fwd_on_non_src_nat | `Failure` | 684.03 | test_network.py
test_01_port_fwd_on_src_nat | `Failure` | 674.02 | test_network.py
test_04_rvpc_internallb_haproxy_stats_on_all_interfaces | `Failure` | 729.92 | test_internal_lb.py
test_03_vpc_internallb_haproxy_stats_on_all_interfaces | `Failure` | 603.79 | test_internal_lb.py
test_02_internallb_roundrobin_1RVPC_3VM_HTTP_port80 | `Failure` | 1218.54 | test_internal_lb.py
test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 | `Failure` | 657.92 | test_internal_lb.py
test_01_vpc_site2site_vpn | `Error` | 532.28 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | `Error` | 894.80 | test_vpc_vpn.py
test_05_rvpc_multi_tiers | `Error` | 607.74 | test_vpc_redundant.py
test_CreateTemplateWithDuplicateName | `Error` | 201.79 | test_templates.py
test_01_scale_vm | `Error` | 663.48 | test_scale_vm.py
ContextSuite context=TestRouterDHCPHosts>:teardown | `Error` | 214.06 | test_router_dhcphosts.py
test_01_nic | `Error` | 1249.47 | test_nic.py
ContextSuite context=TestListIdsParams>:setup | `Error` | 0.00 | test_list_ids_parameter.py
test_01_vpc_remote_access_vpn | Success | 252.92 | test_vpc_vpn.py
test_04_rvpc_network_garbage_collector_nics | Success | 1650.21 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 866.83 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 31.15 | test_volumes.py
test_06_download_detached_volume | Success | 85.77 | test_volumes.py
test_05_detach_volume | Success | 105.29 | test_volumes.py
test_04_delete_attached_volume | Success | 15.24 | test_volumes.py
test_03_download_attached_volume | Success | 20.41 | test_volumes.py
test_02_attach_volume | Success | 64.15 | test_volumes.py
test_01_create_volume | Success | 463.30 | test_volumes.py
test_03_delete_vm_snapshots | Success | 285.25 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 242.78 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 156.61 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 166.77 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 273.06 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.04 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 182.69 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.25 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 86.22 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.09 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.14 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 10.18 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.22 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.17 | test_vm_life_cycle.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 15.30 | test_templates.py
test_03_delete_template | Success | 5.11 | 

[GitHub] cloudstack issue #1753: [4.9] Latest health test run

2016-12-15 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1753
  
Trillian test result (tid-685)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 46946 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1753-t685-kvm-centos7.zip
Test completed. 40 look ok, 9 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_VPC_default_routes | `Failure` | 259.58 | test_vpc_router_nics.py
test_02_redundant_VPC_default_routes | `Failure` | 229.79 | test_vpc_redundant.py
test_router_dhcphosts | `Failure` | 89.58 | test_router_dhcphosts.py
test_04_rvpc_privategw_static_routes | `Failure` | 441.49 | test_privategw_acl.py
test_isolate_network_password_server | `Failure` | 87.09 | test_password_server.py
test_09_delete_detached_volume | `Error` | 10.22 | test_volumes.py
test_08_resize_volume | `Error` | 5.09 | test_volumes.py
test_07_resize_fail | `Error` | 10.22 | test_volumes.py
test_06_download_detached_volume | `Error` | 5.09 | test_volumes.py
test_05_detach_volume | `Error` | 5.10 | test_volumes.py
test_04_delete_attached_volume | `Error` | 5.09 | test_volumes.py
test_03_download_attached_volume | `Error` | 5.09 | test_volumes.py
test_01_create_volume | `Error` | 318.16 | test_volumes.py
ContextSuite context=TestTemplates>:setup | `Error` | 385.91 | test_templates.py
ContextSuite context=TestRouterDHCPHosts>:teardown | `Error` | 225.55 | test_router_dhcphosts.py
ContextSuite context=TestListIdsParams>:setup | `Error` | 0.00 | test_list_ids_parameter.py
test_01_vpc_site2site_vpn | Success | 176.26 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 76.23 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 316.17 | test_vpc_vpn.py
test_01_VPC_nics_after_destroy | Success | 837.48 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 537.34 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1320.89 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 584.14 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1352.33 | test_vpc_redundant.py
test_02_attach_volume | Success | 78.83 | test_volumes.py
test_deploy_vm_multiple | Success | 363.87 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.70 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.27 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 45.98 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.14 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.84 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.85 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.18 | test_vm_life_cycle.py
test_01_stop_vm | Success | 125.84 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 121.41 | test_templates.py
test_01_create_template | Success | 50.49 | test_templates.py
test_10_destroy_cpvm | Success | 162.10 | test_ssvm.py
test_09_destroy_ssvm | Success | 199.07 | test_ssvm.py
test_08_reboot_cpvm | Success | 161.92 | test_ssvm.py
test_07_reboot_ssvm | Success | 144.88 | test_ssvm.py
test_06_stop_cpvm | Success | 162.07 | test_ssvm.py
test_05_stop_ssvm | Success | 145.40 | test_ssvm.py
test_04_cpvm_internals | Success | 1.62 | test_ssvm.py
test_03_ssvm_internals | Success | 5.64 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.15 | test_ssvm.py
test_01_snapshot_root_disk | Success | 16.30 | test_snapshots.py
test_04_change_offering_small | Success | 240.41 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.05 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.10 | test_service_offerings.py
test_01_create_service_offering | Success | 0.14 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.14 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.20 | test_secondary_storage.py
test_09_reboot_router | Success | 45.39 | test_routers.py
test_08_start_router | Success | 45.38 | test_routers.py
test_07_stop_router | Success | 10.20 | test_routers.py
test_06_router_advanced | Success | 0.06 | test_routers.py
test_05_router_basic | Success | 0.04 | test_routers.py
test_04_restart_network_wo_cleanup | Success | 5.72 | test_routers.py
test_03_restart_network_cleanup | Success | 65.57 | test_routers.py
test_02_router_internal_adv | Success | 1.10 | test_routers.py

[GitHub] cloudstack pull request #1726: CLOUDSTACK-9560 Root volume of deleted VM lef...

2016-12-15 Thread rhtyd
Github user rhtyd commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1726#discussion_r92754533
  
--- Diff: server/src/com/cloud/storage/StorageManagerImpl.java ---
@@ -2199,15 +2199,20 @@ public void cleanupDownloadUrls(){
                 if(downloadUrlCurrentAgeInSecs < _downloadUrlExpirationInterval){  // URL hasnt expired yet
                     continue;
                 }
-
-                s_logger.debug("Removing download url " + volumeOnImageStore.getExtractUrl() + " for volume id " + volumeOnImageStore.getVolumeId());
+                long volumeId = volumeOnImageStore.getVolumeId();
+                s_logger.debug("Removing download url " + volumeOnImageStore.getExtractUrl() + " for volume id " + volumeId);

                 // Remove it from image store
                 ImageStoreEntity secStore = (ImageStoreEntity) _dataStoreMgr.getDataStore(volumeOnImageStore.getDataStoreId(), DataStoreRole.Image);
                 secStore.deleteExtractUrl(volumeOnImageStore.getInstallPath(), volumeOnImageStore.getExtractUrl(), Upload.Type.VOLUME);

                 // Now expunge it from DB since this entry was created only for download purpose
                 _volumeStoreDao.expunge(volumeOnImageStore.getId());
+                Volume volume = _volumeDao.findById(volumeId);
+                if (volume != null && volume.getState() == Volume.State.Expunged)
+                {
+                    _volumeDao.remove(volumeId);
+                }
--- End diff --

@yvsubhash, can you have a look at @ustcweizhou's comment?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1726: CLOUDSTACK-9560 Root volume of deleted VM left unrem...

2016-12-15 Thread yvsubhash
Github user yvsubhash commented on the issue:

https://github.com/apache/cloudstack/pull/1726
  
@rhtyd Can you please merge this?




Re: patchviasocket seems to be broken with qemu 2.3(+?)

2016-12-15 Thread Syahrul Sazli Shaharir

On 2016-12-16 11:27, Syahrul Sazli Shaharir wrote:

On Wed, 26 Oct 2016, Linas Žilinskas wrote:

So after some investigation I've found out that qemu 2.3.0 is indeed 
broken, at least the way CS uses the qemu chardev/socket.


Not sure in which specific version it happened, but it was fixed in 
2.4.0-rc3, specifically noting that CloudStack 4.2 was not working.


qemu git commit: 4bf1cb03fbc43b0055af60d4ff093d6894aa4338

Also attaching the patch from that commit.


For our own purposes i've included the patch to the qemu-kvm-ev 
package (2.3.0) and all is well.


Hi,

I am facing the exact same issue on latest Cloudstack 4.9.0.1, on
latest CentOS 7.3.1611, with latest qemu-kvm-ev-2.6.0-27.1.el7
package.

The issue initially surfaced following a heartbeat-induced reset of
all hosts, when it was on CS 4.8 @ CentOS 7.0 and stock
qemu-kvm-1.5.3. Since then, the patchviasocket.pl/py timeouts
persisted for 1 out of 4 router VM/networks, even after upgrading to
latest code. (I have checked the qemu-kvm-ev-2.6.0-27.1.el7 source,
and the patched code are pretty much still intact, as per the
2.4.0-rc3 commit).

Any help would be greatly appreciated.

Thanks.

(Attached are some debug logs from the host's agent.log)


Here are the debug logs as mentioned: http://pastebin.com/yHdsMNzZ

Thanks.
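
In case it helps anyone reproduce this outside of CloudStack, below is a
minimal probe of the VM's qemu chardev socket (illustration only: the socket
path and payload are placeholders, not what patchviasocket.py really sends;
take the actual path from the command line logged in agent.log):

    #!/usr/bin/env python
    # Minimal probe of a qemu chardev UNIX socket (illustration only).
    # The path and payload below are placeholders; use the real socket path
    # from the patchviasocket command line recorded in agent.log.
    import socket
    import sys

    SOCKET_PATH = "/var/lib/libvirt/qemu/r-1234-VM.agent"  # hypothetical example path
    PAYLOAD = b"ping\n"                                     # dummy data only

    def probe(path, payload, timeout=5.0):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect(path)
            s.sendall(payload)
            print("write to %s succeeded" % path)
            return 0
        except socket.timeout:
            print("timed out writing to %s (same symptom as patchviasocket)" % path)
            return 1
        except socket.error as e:
            print("socket error on %s: %s" % (path, e))
            return 1
        finally:
            s.close()

    if __name__ == "__main__":
        sys.exit(probe(sys.argv[1] if len(sys.argv) > 1 else SOCKET_PATH, PAYLOAD))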



--sazli




On 2016-10-20 09:59, Linas Žilinskas wrote:


 Hi.

 We have made an upgrade to 4.9.

 Custom build packages with our own patches, which in my mind (i'm the only
 one patching those) should not affect the issue i'll describe.

 I'm not sure whether we didn't notice it before, or it's actually related
 to something in 4.9

 Basically our system vm's were unable to be patched via the qemu socket.
 The script simply error'ed out with a timeout while trying to push the
 data to the socket.

 Executing it manually (with cmd line from the logs) resulted the same. I
 even tried the old perl variant, which also had same result.

 So finally we found out that this issue happens only on our HVs which run
 qemu 2.3.0, from the centos 7 special interest virtualization repo. Other
 ones that run qemu 1.5, from official repos, can patch the system vms
 fine.

 So i'm wondering if anyone tested 4.9 with kvm with qemu >= 2.x? Maybe it
 something else special in our setup. e.g. we're running the HVs from a
 preconfigured netboot image (pxe), but all of them, including those with
 qemu 1.5, so i have no idea.


 Linas Žilinskas
 Head of Development
 website  facebook
  twitter
  linkedin
 

 Host1Plus is a division of Digital Energy Technologies Ltd.

 26 York Street, London W1U 6PZ, United Kingdom



Linas Žilinskas
Head of Development
website  facebook 
 twitter 
 linkedin 



Host1Plus is a division of Digital Energy Technologies Ltd.

26 York Street, London W1U 6PZ, United Kingdom





--
--sazli


[ HP | Dell | Microsoft | Symantec | Server & Network Infrastructure ]
W : www.modern.com.my


Re: patchviasocket seems to be broken with qemu 2.3(+?)

2016-12-15 Thread Syahrul Sazli Shaharir

On Wed, 26 Oct 2016, Linas Žilinskas wrote:

So after some investigation I've found out that qemu 2.3.0 is indeed broken, 
at least the way CS uses the qemu chardev/socket.


Not sure in which specific version it happened, but it was fixed in 
2.4.0-rc3, specifically noting that CloudStack 4.2 was not working.


qemu git commit: 4bf1cb03fbc43b0055af60d4ff093d6894aa4338

Also attaching the patch from that commit.


For our own purposes i've included the patch to the qemu-kvm-ev package 
(2.3.0) and all is well.


Hi,

I am facing the exact same issue on latest Cloudstack 4.9.0.1, on latest 
CentOS 7.3.1611, with latest qemu-kvm-ev-2.6.0-27.1.el7 package.


The issue initially surfaced following a heartbeat-induced reset of all 
hosts, when it was on CS 4.8 @ CentOS 7.0 and stock qemu-kvm-1.5.3. Since 
then, the patchviasocket.pl/py timeouts persisted for 1 out of 4 router 
VM/networks, even after upgrading to latest code. (I have checked the 
qemu-kvm-ev-2.6.0-27.1.el7 source, and the patched code are pretty much 
still intact, as per the 2.4.0-rc3 commit).


Any help would be greatly appreciated.

Thanks.

(Attached are some debug logs from the host's agent.log)

--sazli




On 2016-10-20 09:59, Linas Žilinskas wrote:


 Hi.

 We have made an upgrade to 4.9.

 Custom build packages with our own patches, which in my mind (i'm the only
 one patching those) should not affect the issue i'll describe.

 I'm not sure whether we didn't notice it before, or it's actually related
 to something in 4.9

 Basically our system vm's were unable to be patched via the qemu socket.
 The script simply error'ed out with a timeout while trying to push the
 data to the socket.

 Executing it manually (with cmd line from the logs) resulted the same. I
 even tried the old perl variant, which also had same result.

 So finally we found out that this issue happens only on our HVs which run
 qemu 2.3.0, from the centos 7 special interest virtualization repo. Other
 ones that run qemu 1.5, from official repos, can patch the system vms
 fine.

 So i'm wondering if anyone tested 4.9 with kvm with qemu >= 2.x? Maybe it
 something else special in our setup. e.g. we're running the HVs from a
 preconfigured netboot image (pxe), but all of them, including those with
 qemu 1.5, so i have no idea.


 Linas Žilinskas
 Head of Development
 website  facebook
  twitter
  linkedin
 

 Host1Plus is a division of Digital Energy Technologies Ltd.

 26 York Street, London W1U 6PZ, United Kingdom



Linas Žilinskas
Head of Development
website  facebook 
 twitter  
linkedin 


Host1Plus is a division of Digital Energy Technologies Ltd.

26 York Street, London W1U 6PZ, United Kingdom




[GitHub] cloudstack issue #1749: CLOUDSTACK-9619: Updates for SAN-assisted snapshots

2016-12-15 Thread mike-tutkowski
Github user mike-tutkowski commented on the issue:

https://github.com/apache/cloudstack/pull/1749
  
The semi-unusual part about this PR, @rhtyd, is that it contains code 
jointly developed by me and @syed.

He has reviewed my contributions and I have reviewed his.

We just need someone to review the entire thing. There's not a lot of code 
there, so it shouldn't take too long, I suspect.




[GitHub] cloudstack issue #1749: CLOUDSTACK-9619: Updates for SAN-assisted snapshots

2016-12-15 Thread mike-tutkowski
Github user mike-tutkowski commented on the issue:

https://github.com/apache/cloudstack/pull/1749
  
Hi @rhtyd, we have one LGTM (from @syed) and I have run and posted all 
regression tests for managed storage. If we could have one more person take a 
look, that would be great. I don't think we'll find much in these changes to 
discuss. Thanks!




[GitHub] cloudstack issue #1749: CLOUDSTACK-9619: Updates for SAN-assisted snapshots

2016-12-15 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1749
  
Trillian test result (tid-683)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 46910 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1749-t683-kvm-centos7.zip
Test completed. 42 look ok, 8 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_VPC_default_routes | `Failure` | 206.88 | test_vpc_router_nics.py
test_02_redundant_VPC_default_routes | `Failure` | 273.89 | test_vpc_redundant.py
test_router_dhcphosts | `Failure` | 24.13 | test_router_dhcphosts.py
test_04_rvpc_privategw_static_routes | `Failure` | 586.22 | test_privategw_acl.py
test_isolate_network_password_server | `Failure` | 23.86 | test_password_server.py
test_09_delete_detached_volume | `Error` | 10.17 | test_volumes.py
test_08_resize_volume | `Error` | 5.08 | test_volumes.py
test_07_resize_fail | `Error` | 10.18 | test_volumes.py
test_06_download_detached_volume | `Error` | 5.12 | test_volumes.py
test_05_detach_volume | `Error` | 5.12 | test_volumes.py
test_04_delete_attached_volume | `Error` | 5.07 | test_volumes.py
test_03_download_attached_volume | `Error` | 5.07 | test_volumes.py
test_01_create_volume | `Error` | 292.25 | test_volumes.py
ContextSuite context=TestRouterDHCPHosts>:teardown | `Error` | 160.34 | test_router_dhcphosts.py
ContextSuite context=TestListIdsParams>:setup | `Error` | 0.00 | test_list_ids_parameter.py
test_01_vpc_site2site_vpn | Success | 190.07 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 101.15 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 354.85 | test_vpc_vpn.py
test_01_VPC_nics_after_destroy | Success | 838.17 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 527.02 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1396.02 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 635.95 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1322.93 | test_vpc_redundant.py
test_02_attach_volume | Success | 79.64 | test_volumes.py
test_deploy_vm_multiple | Success | 368.06 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.67 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.22 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.67 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.09 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.71 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.70 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.13 | test_vm_life_cycle.py
test_01_stop_vm | Success | 125.67 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 206.40 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.05 | test_templates.py
test_04_extract_template | Success | 5.49 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.09 | test_templates.py
test_01_create_template | Success | 85.69 | test_templates.py
test_10_destroy_cpvm | Success | 161.52 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.34 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.56 | test_ssvm.py
test_07_reboot_ssvm | Success | 174.64 | test_ssvm.py
test_06_stop_cpvm | Success | 101.48 | test_ssvm.py
test_05_stop_ssvm | Success | 175.58 | test_ssvm.py
test_04_cpvm_internals | Success | 1.04 | test_ssvm.py
test_03_ssvm_internals | Success | 5.09 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 21.38 | test_snapshots.py
test_04_change_offering_small | Success | 239.84 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.09 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.10 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.14 | test_secondary_storage.py
test_09_reboot_router | Success | 40.28 | test_routers.py
test_08_start_router | Success | 40.29 | test_routers.py
test_07_stop_router | Success | 10.12 | test_routers.py
test_06_router_advanced | Success | 

[GitHub] cloudstack issue #1753: [4.9] Latest health test run

2016-12-15 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1753
  
Trillian test result (tid-679)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 6
Total time taken: 53078 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1753-t679-xenserver-65sp1.zip
Test completed. 42 look ok, 7 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_VPC_default_routes | `Failure` | 343.96 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | `Failure` | 713.45 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1422.00 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | `Failure` | 548.10 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 610.00 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 908.17 | test_privategw_acl.py
test_isolate_network_password_server | `Failure` | 34.42 | test_password_server.py
test_05_rvpc_multi_tiers | `Error` | 1027.10 | test_vpc_redundant.py
ContextSuite context=TestVPCRedundancy>:teardown | `Error` | 1032.31 | test_vpc_redundant.py
test_router_dhcphosts | `Error` | 15.39 | test_router_dhcphosts.py
ContextSuite context=TestRouterDHCPHosts>:teardown | `Error` | 35.83 | test_router_dhcphosts.py
ContextSuite context=TestListIdsParams>:setup | `Error` | 0.00 | test_list_ids_parameter.py
test_01_vpc_site2site_vpn | Success | 373.12 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 187.90 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 599.60 | test_vpc_vpn.py
test_01_VPC_nics_after_destroy | Success | 741.35 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 967.87 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 21.34 | test_volumes.py
test_08_resize_volume | Success | 121.47 | test_volumes.py
test_07_resize_fail | Success | 116.33 | test_volumes.py
test_06_download_detached_volume | Success | 30.51 | test_volumes.py
test_05_detach_volume | Success | 100.38 | test_volumes.py
test_04_delete_attached_volume | Success | 10.27 | test_volumes.py
test_03_download_attached_volume | Success | 15.38 | test_volumes.py
test_02_attach_volume | Success | 10.84 | test_volumes.py
test_01_create_volume | Success | 405.69 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.25 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 231.76 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 101.58 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 294.48 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 32.15 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.28 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 76.40 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.15 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.19 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 10.24 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.32 | test_vm_life_cycle.py
test_01_stop_vm | Success | 30.30 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 257.42 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.07 | test_templates.py
test_04_extract_template | Success | 5.33 | test_templates.py
test_03_delete_template | Success | 5.15 | test_templates.py
test_02_edit_template | Success | 90.15 | test_templates.py
test_01_create_template | Success | 136.23 | test_templates.py
test_10_destroy_cpvm | Success | 227.13 | test_ssvm.py
test_09_destroy_ssvm | Success | 229.37 | test_ssvm.py
test_08_reboot_cpvm | Success | 146.79 | test_ssvm.py
test_07_reboot_ssvm | Success | 174.67 | test_ssvm.py
test_06_stop_cpvm | Success | 167.46 | test_ssvm.py
test_05_stop_ssvm | Success | 212.04 | test_ssvm.py
test_04_cpvm_internals | Success | 1.15 | test_ssvm.py
test_03_ssvm_internals | Success | 15.06 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.36 | test_ssvm.py
test_01_snapshot_root_disk | Success | 31.90 | test_snapshots.py
test_04_change_offering_small | Success | 121.56 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.05 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.10 | test_service_offerings.py
test_01_create_service_offering | 

[GitHub] cloudstack issue #1829: CLOUDSTACK-9363: Fix HVM VM restart bug in XenServer

2016-12-15 Thread syed
Github user syed commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
@koushik-das I like that fix. I've modified my fix to do a better check. 
@rhtyd I've rebased to 4.9 as well. 
Thank you guys for the prompt replies :)





[GitHub] cloudstack pull request #1832: CLOUDSTACK-9652 Job framework - Cancelling as...

2016-12-15 Thread marcaurele
Github user marcaurele commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1832#discussion_r92676861
  
--- Diff: engine/orchestration/src/com/cloud/agent/manager/AgentAttache.java ---
@@ -399,10 +414,22 @@ public void send(final Request req, final Listener listener) throws AgentUnavailableException {
         try {
             for (int i = 0; i < 2; i++) {
                 Answer[] answers = null;
+                job = _agentMgr._asyncJobDao.findById(jobId);
+                if (job != null && job.getStatus() == JobInfo.Status.CANCELLED) {
+                    throw new OperationCancelledException(req.getCommands(), _id, seq, wait, false);
+                }
                 try {
                     answers = sl.waitFor(wait);
+                    job = _agentMgr._asyncJobDao.findById(jobId);
+                    if (job != null && job.getStatus() == JobInfo.Status.CANCELLED) {
+                        throw new OperationCancelledException(req.getCommands(), _id, seq, wait, false);
--- End diff --

Why do you want to throw an `OperationCancelledException` if we already have 
the job answer? It's better to let the normal response come back to the user.




[GitHub] cloudstack issue #1832: CLOUDSTACK-9652 Job framework - Cancelling async job...

2016-12-15 Thread marcaurele
Github user marcaurele commented on the issue:

https://github.com/apache/cloudstack/pull/1832
  
@karuturi Ok, thanks for the clarifications; that's the scenario I had in mind 
too. That said, I'm currently thinking of a new approach for the command 
sequencer, because having implemented live migration, the non-parallel command 
handling is far from optimal when long-running sequential commands land on a 
hypervisor. And I tend to think that's the reason behind your PR, isn't it? The 
way it's currently done is too simple: if a job cannot run in parallel on the 
HV, any other incoming job that needs to run on that same HV is put in a queue 
behind it. IMO the sequencing should take into account what kind of job is 
coming and for which type of resource. For example, a security group update, a 
VM start and a migration for different VMs should be able to run in parallel 
because they are unrelated. With today's design, that isn't possible.
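
To make the idea concrete, here is a rough sketch (Python, purely illustrative,
not CloudStack code) of sequencing commands per (resource type, resource id)
instead of per host, so that unrelated jobs never queue behind each other:

```python
# Illustration only: per-resource command sequencing instead of one queue per host.
# Commands targeting the same (resource type, resource id) run in order; commands
# for unrelated resources can be dispatched in parallel.
from collections import defaultdict, deque
from threading import Lock


class PerResourceSequencer(object):
    def __init__(self):
        self._lock = Lock()
        self._queues = defaultdict(deque)  # (resource_type, resource_id) -> pending commands

    def submit(self, resource_type, resource_id, command):
        """Queue a command; returns True if it can be dispatched immediately."""
        key = (resource_type, resource_id)
        with self._lock:
            self._queues[key].append(command)
            return len(self._queues[key]) == 1  # nothing ahead of it for this resource

    def complete(self, resource_type, resource_id):
        """Mark the head command as done; returns the next runnable command, if any."""
        key = (resource_type, resource_id)
        with self._lock:
            queue = self._queues[key]
            queue.popleft()
            if not queue:
                del self._queues[key]
                return None
            return queue[0]


# A security group update, a VM start and a migration for different (hypothetical)
# VMs map to different keys, so none of them waits on the others.
seq = PerResourceSequencer()
print(seq.submit("vm", "i-2-10-VM", "StartCommand"))    # True  -> dispatch now
print(seq.submit("vm", "i-2-11-VM", "MigrateCommand"))  # True  -> dispatch now
print(seq.submit("vm", "i-2-10-VM", "StopCommand"))     # False -> queued behind StartCommand
```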

So don't you think we're better off rewriting the sequencer to let more 
commands execute in parallel and avoid this bottleneck on the AgentAttache? 
That would largely remove the need for cancellation as you implemented it, 
since fewer jobs would sit in the queue.

If we do want to be able to cancel a job, IMHO it should cancel the job down on 
the hypervisor too, cleaning up the resources involved as if the execution had 
failed.

Otherwise, with the way you implemented it, I would not let a job be cancelled 
once it has been sent to the hypervisor, so the user clearly sees that it was 
no longer cancellable (you're too late! -> the seq number isn't in `_requests` 
anymore, so it has been sent to the HV). I'm putting more comments in the code.

What do you think?




[GitHub] cloudstack issue #1832: CLOUDSTACK-9652 Job framework - Cancelling async job...

2016-12-15 Thread karuturi
Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1832
  
@marcaurele The thread running the job gets killed via the 
OperationCancelledException. But if a command has already been sent to the 
hypervisor and is executing there, it won't be cancelled; see the changes in 
AgentAttache where the new exception is thrown.
For example, if a deployVM job is cancelled after the command has already been 
sent to the hypervisor, the VM will continue to launch there, but on the 
CloudStack side the threads and jobs will be cleaned up and cancelled. 
Eventually, VM sync will reconcile the states, so there can be cases where the 
job cancellation succeeded yet the VM is in the Running state some time later.
Cancellation should be used with caution, and only by an admin, keeping in mind 
that some resource cleanup on the hypervisor might be required.
It will only unblock the jobs in CloudStack that have been waiting a long time 
for a certain job to complete.
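
Put differently, here is a toy sketch (Python, illustration only, hypothetical
names, not the actual code path) of the behaviour described above: cancelling
only unblocks the management-server side, while work already handed to the
hypervisor runs to completion and state is reconciled later by VM sync.

```python
# Toy model of the cancellation semantics (illustration only, not CloudStack code).
import threading
import time


class OperationCancelled(Exception):
    pass


job_status = {"state": "RUNNING"}    # stands in for the async job record
hypervisor_done = threading.Event()  # stands in for the command running on the HV


def hypervisor_side():
    time.sleep(2)                    # the VM keeps deploying regardless of the cancel
    hypervisor_done.set()


def management_server_side():
    while not hypervisor_done.is_set():
        if job_status["state"] == "CANCELLED":
            raise OperationCancelled("job cancelled; HV command is not interrupted")
        time.sleep(0.1)


threading.Thread(target=hypervisor_side).start()
job_status["state"] = "CANCELLED"    # admin cancels the job
try:
    management_server_side()
except OperationCancelled as e:
    print(e)                         # MS-side thread is unblocked and cleaned up
hypervisor_done.wait()               # ...but the hypervisor work still finishes
print("HV command completed; the VM may show Running until VM sync reconciles it")
```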




[GitHub] cloudstack issue #1832: CLOUDSTACK-9652 Job framework - Cancelling async job...

2016-12-15 Thread marcaurele
Github user marcaurele commented on the issue:

https://github.com/apache/cloudstack/pull/1832
  
@karuturi It's a good feature, but I don't see where the job actually gets 
cancelled. Reading the changes, I can only see that the response of the job 
becomes a cancelled operation; the actual job does not get killed/cancelled on 
the hypervisor, for example. Did I miss something, or is it not fully 
implemented yet?




[GitHub] cloudstack issue #1812: CLOUDSTACK-9650: Allow starting VMs regardless of cp...

2016-12-15 Thread wido
Github user wido commented on the issue:

https://github.com/apache/cloudstack/pull/1812
  
Ok, understood. Got it.

Based on the code changes: LGTM




[GitHub] cloudstack issue #873: CLOUDSTACK-8896: allocated percentage of storage pool...

2016-12-15 Thread karuturi
Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/873
  
@rhtyd will do in the coming days (though I don't see a reason for the stress 
on the log message, leaving the logic aside).




[GitHub] cloudstack pull request #1725: CLOUDSTACK-9559 Why allow deleting zone witho...

2016-12-15 Thread yvsubhash
GitHub user yvsubhash reopened a pull request:

https://github.com/apache/cloudstack/pull/1725

CLOUDSTACK-9559  Why allow deleting zone without deleting the seconda…

CLOUDSTACK-9559  allow deleting zone without deleting the secondary storage 
under the zone


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yvsubhash/cloudstack CLOUDSTACK-9559

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1725.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1725


commit 1c7722d7cc9c60ff1edcbecb4f8046a6cfb4c9c8
Author: subhash_y 
Date:   2016-10-04T11:57:43Z

CLOUDSTACK-9559  Why allow deleting zone without deleting the secondary 
storage under the zone






[GitHub] cloudstack pull request #1725: CLOUDSTACK-9559 Why allow deleting zone witho...

2016-12-15 Thread yvsubhash
Github user yvsubhash closed the pull request at:

https://github.com/apache/cloudstack/pull/1725




Re: [VOTE] Apache Cloudstack 4.9.1.0 (RC1)

2016-12-15 Thread Wido den Hollander
+1 (binding)

Upgraded from 4.9.0 to 4.9.1.0 on a running system with:

- KVM
- Basic Networking
- Ubuntu Management Server

Built .deb packages from source and installed them. Worked as planned.

Wido

> On 15 December 2016 at 10:35, Rohit Yadav wrote:
> 
> 
> +1 (binding)
> 
> 
> - Based on upgrade tests from 4.5.2.2 to 4.9.1.0
> 
> - Upgrading a local 4.9.0 based Ubuntu/kvm setup
> 
> - Manual packaging and repository building
> 
> - Travis and Trillian test results from PR #1753 (vpc/rvr failures are known 
> to be intermittent, the feature implementation/usage is flaky)
> 
> 
> Regards.
> 
> 
> 
> From: Boris Stoyanov 
> Sent: 15 December 2016 14:46:02
> To: dev@cloudstack.apache.org
> Subject: Re: [VOTE] Apache Cloudstack 4.9.1.0 (RC1)
> 
> Hello,
> 
> It turned out it’s an issue with the way the upgrade was performed, I’ve run 
> the cloudstack-setup-databases on top of 4.9 code base and it basically 
> created the 4.9 schema, the when I’ve imported the backups on DB level it 
> didn’t overwrite the 4.9 changes and they were still there with DB version of 
> 4.5.2.2. So that’s how the upgrade got messed, I’ve just verified it’s not 
> happening if I cleanup the DBs before importing cloud db.
> 
> I’ll proceed with rest of the upgrade paths now and will keep you posted.
> 
> Thanks,
> Boris Stoyanov
> 
> 
> boris.stoya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
> 
> 
> 
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>   
>  
> 
> > On Dec 15, 2016, at 10:54 AM, Milamber  wrote:
> >
> > Hi Rohit,
> >
> > Thanks for the good way to fix this issue.
> >
> > I've tested too the upgrade from 4.9.1.0 RC1 to 4.10.0.0 snapshot with 
> > success now.
> >
> > Milamber
> >
> > On 15/12/2016 06:05, Rohit Yadav wrote:
> >> Hi Bruno,
> >>
> >>
> >> I checked the issue, the PR #1615 was merged only on master. Since, we've 
> >> not started release work for 4.10/master yet, I'll move the changes to 
> >> appropriate sql schema. This is not a blocker for either 4.8.2.0 or 
> >> 4.9.1.0 as the changes/feature don't exist on them yet.
> >>
> >> Thanks again for sharing this, I'll fix them today.
> >>
> >>
> >> All, since there is no blocker please continue with your tests and voting.
> >>
> >>
> >> Regards.
> >>
> >> 
> >> From: Milamber 
> >> Sent: 14 December 2016 00:32:19
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: [VOTE] Apache Cloudstack 4.9.1.0 (RC1)
> >>
> >>
> >> Hello,
> >>
> >> I'm not sure, but perhaps we have an 'indirect' blocker for 4.9.1.0 RC1.
> >>
> >> Currently the upgrade from 4.9.1.0 RC1 to 4.10.0.0 SNAPSHOT don't works
> >> because the schema-481to490.sql in the 4.9.1.0 / 4.9 doesn't contains
> >> the commit of CLOUDSTACK-9438 (2e77496601ab5420723ce8b955b3960faaba7d5c).
> >> (this commit is currently in master)
> >>
> >> When you try to make the upgrade you have this error: "Unknown column
> >> 'image_store_details.display' in 'field list'"
> >> see:  https://issues.apache.org/jira/browse/CLOUDSTACK-9671
> >>
> >> Currently there have 2 diff between the schema-481to490.sql in 4.9.1.0
> >> RC1 (and 4.9 branch) and master branch.
> >>
> >> What is your opinion: blocker or not?
> >>
> >> Just copy the 2 sql request inside the schema-4910to41000.sql file (but
> >> the commit 2e77496601ab5420723ce8b955b3960faaba7d5c contains other
> >> modified files)?
> >>
> >> Milamber
> >>
> >>
> >>
> >> $ diff ./setup/db/db/schema-481to490.sql /tmp/MASTER-schema-481to490.sql
> >> 547a548,552
> >>  >
> >>  > ALTER TABLE `cloud`.`image_store_details` CHANGE COLUMN `value`
> >> `value` VARCHAR(255) NULL DEFAULT NULL COMMENT 'value of the detail',
> >> ADD COLUMN `display` tinyint(1) NOT
> >>  > NULL DEFAULT '1' COMMENT 'True if the detail can be displayed to the
> >> end user' AFTER `value`;
> >>  >
> >>  > ALTER TABLE `snapshots` ADD COLUMN `location_type` VARCHAR(32)
> >> COMMENT 'Location of snapshot (ex. Primary)';
> >>
> >>
> >>
> >>
> >>
> >> On 12/12/2016 21:36, Milamber wrote:
> >>> Hello,
> >>>
> >>> My vote +1 (binding)
> >>>
> >>> Tests are passed on a virtual topology of servers  (CS over CS)
> >>> (1mgr+2nodes+1nfs) :
> >>>
> >>> 1/ Fresh install of 4.9.1.0 RC1 (adv net) on Ubuntu 14.04.5 + KVM +
> >>> NFS : OK
> >>> Some standard tests with success (create vm, migration, HA, create
> >>> networks, create user, create ssh key, destroy vm, register template,
> >>> create snapshot, restore snapshot, create template, ip association, ip
> >>> release, static nat, firewall rule)
> >>> Some tests with cloudstack ansible module with success too (create
> >>> network, register templates, create vm, ip, firewall rule)
> >>>
> >>> 2/ Test upgrade from 4.8.1 to 4.9.1.0 RC1 : OK
> >>>
> >>> 

[GitHub] cloudstack pull request #1832: cloudstack-9652 Job framework - Cancelling as...

2016-12-15 Thread karuturi
GitHub user karuturi opened a pull request:

https://github.com/apache/cloudstack/pull/1832

cloudstack-9652 Job framework - Cancelling async jobs

Enables cancellation of long-running or subsequently queued-up async jobs.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Accelerite/cloudstack CLOUDSTACK-9652

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1832.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1832


commit 1099b3eb1adbb4dd028a86877986e3ac7771
Author: Rajani Karuturi 
Date:   2016-12-13T11:55:44Z

CLOUDSTACK-9652: added new cancel async job api

commit d9f4f6e898f063096d4bace46753b179c0bb68dc
Author: Rajani Karuturi 
Date:   2016-07-12T05:45:32Z

CLOUDSTACK-9652 unittests for cancelAsyncJob cmd

commit 5093e592b7e71978f50ae8007c1da985758d6efd
Author: Rajani Karuturi 
Date:   2016-12-14T09:35:37Z

CLOUDSTACK-9652 API to find long running jobs

added listlongrunnningjobs api

added listqueuedupasyncjobs api

commit 649f5b9237b85c1c2aba67065b13261ec4930487
Author: Rajani Karuturi 
Date:   2016-12-14T10:14:24Z

CLOUDSTACK-9652 Cleanup at Agent Layer

Throwing an exception in agentattache incase the job is cancelled. The
top layers should handle the exception and take necessary action for
cleaning of resources.

commit cebbd36f94914f144f5cd9b016467d18f02c222c
Author: Rajani Karuturi 
Date:   2016-12-14T10:34:12Z

CLOUDSTACK-9652 Annotating async APIs as cancellable or not

commit e2df57ca6ba642cf120295cf805d04881e7c14cf
Author: Rajani Karuturi 
Date:   2016-12-14T11:43:53Z

CLOUDSTACK-9652: added OperationCancelledException for a cancelled job

Added exception handling at various agent commands for this new
checked exception.

commit ebfc75879deb9b7de775fe9d92f361d432852b91
Author: Rajani Karuturi 
Date:   2016-07-22T09:38:11Z

CLOUDSTACK-9652 cancelling a job in queue should not throw exception

check the state of the parent job before submitting the worker thread.
starting work thread only if parent job is not done.

commit adb19cb2788706b66e7a1454327d541b2ba0eeae
Author: Rajani Karuturi 
Date:   2016-12-15T09:53:43Z

CLOUDSTACK-9652 cleaning up async jobs on graceful MS shutdown






Re: [VOTE] Apache Cloudstack 4.9.1.0 (RC1)

2016-12-15 Thread Rohit Yadav
+1 (binding)


- Based on upgrade tests from 4.5.2.2 to 4.9.1.0

- Upgrading a local 4.9.0 based Ubuntu/kvm setup

- Manual packaging and repository building

- Travis and Trillian test results from PR #1753 (vpc/rvr failures are known to 
be intermittent, the feature implementation/usage is flaky)


Regards.



From: Boris Stoyanov 
Sent: 15 December 2016 14:46:02
To: dev@cloudstack.apache.org
Subject: Re: [VOTE] Apache Cloudstack 4.9.1.0 (RC1)

Hello,

It turned out it’s an issue with the way the upgrade was performed, I’ve run 
the cloudstack-setup-databases on top of 4.9 code base and it basically created 
the 4.9 schema, the when I’ve imported the backups on DB level it didn’t 
overwrite the 4.9 changes and they were still there with DB version of 4.5.2.2. 
So that’s how the upgrade got messed, I’ve just verified it’s not happening if 
I cleanup the DBs before importing cloud db.

I’ll proceed with rest of the upgrade paths now and will keep you posted.

Thanks,
Boris Stoyanov


boris.stoya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue




rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 

> On Dec 15, 2016, at 10:54 AM, Milamber  wrote:
>
> Hi Rohit,
>
> Thanks for the good way to fix this issue.
>
> I've tested too the upgrade from 4.9.1.0 RC1 to 4.10.0.0 snapshot with 
> success now.
>
> Milamber
>
> On 15/12/2016 06:05, Rohit Yadav wrote:
>> Hi Bruno,
>>
>>
>> I checked the issue, the PR #1615 was merged only on master. Since, we've 
>> not started release work for 4.10/master yet, I'll move the changes to 
>> appropriate sql schema. This is not a blocker for either 4.8.2.0 or 4.9.1.0 
>> as the changes/feature don't exist on them yet.
>>
>> Thanks again for sharing this, I'll fix them today.
>>
>>
>> All, since there is no blocker please continue with your tests and voting.
>>
>>
>> Regards.
>>
>> 
>> From: Milamber 
>> Sent: 14 December 2016 00:32:19
>> To: dev@cloudstack.apache.org
>> Subject: Re: [VOTE] Apache Cloudstack 4.9.1.0 (RC1)
>>
>>
>> Hello,
>>
>> I'm not sure, but perhaps we have an 'indirect' blocker for 4.9.1.0 RC1.
>>
>> Currently the upgrade from 4.9.1.0 RC1 to 4.10.0.0 SNAPSHOT don't works
>> because the schema-481to490.sql in the 4.9.1.0 / 4.9 doesn't contains
>> the commit of CLOUDSTACK-9438 (2e77496601ab5420723ce8b955b3960faaba7d5c).
>> (this commit is currently in master)
>>
>> When you try to make the upgrade you have this error: "Unknown column
>> 'image_store_details.display' in 'field list'"
>> see:  https://issues.apache.org/jira/browse/CLOUDSTACK-9671
>>
>> Currently there have 2 diff between the schema-481to490.sql in 4.9.1.0
>> RC1 (and 4.9 branch) and master branch.
>>
>> What is your opinion: blocker or not?
>>
>> Just copy the 2 sql request inside the schema-4910to41000.sql file (but
>> the commit 2e77496601ab5420723ce8b955b3960faaba7d5c contains other
>> modified files)?
>>
>> Milamber
>>
>>
>>
>> $ diff ./setup/db/db/schema-481to490.sql /tmp/MASTER-schema-481to490.sql
>> 547a548,552
>>  >
>>  > ALTER TABLE `cloud`.`image_store_details` CHANGE COLUMN `value`
>> `value` VARCHAR(255) NULL DEFAULT NULL COMMENT 'value of the detail',
>> ADD COLUMN `display` tinyint(1) NOT
>>  > NULL DEFAULT '1' COMMENT 'True if the detail can be displayed to the
>> end user' AFTER `value`;
>>  >
>>  > ALTER TABLE `snapshots` ADD COLUMN `location_type` VARCHAR(32)
>> COMMENT 'Location of snapshot (ex. Primary)';
>>
>>
>>
>>
>>
>> On 12/12/2016 21:36, Milamber wrote:
>>> Hello,
>>>
>>> My vote +1 (binding)
>>>
>>> Tests are passed on a virtual topology of servers  (CS over CS)
>>> (1mgr+2nodes+1nfs) :
>>>
>>> 1/ Fresh install of 4.9.1.0 RC1 (adv net) on Ubuntu 14.04.5 + KVM +
>>> NFS : OK
>>> Some standard tests with success (create vm, migration, HA, create
>>> networks, create user, create ssh key, destroy vm, register template,
>>> create snapshot, restore snapshot, create template, ip association, ip
>>> release, static nat, firewall rule)
>>> Some tests with cloudstack ansible module with success too (create
>>> network, register templates, create vm, ip, firewall rule)
>>>
>>> 2/ Test upgrade from 4.8.1 to 4.9.1.0 RC1 : OK
>>>
>>> 3/ Test upgrade from 4.8.2 RC1 to 4.9.1.0 RC1 : don't works (expected)
>>>
>>> 4/ Tests of all localization Web UI for 4.9.1.0 RC1:
>>> Localization works well except Spanish (not a blocker to release): the
>>> Web UI display partially the localization strings due of one bad
>>> carriage return in the label
>>> message.installWizard.copy.whatIsCloudStack string (from Transifex).
>>> This is the same issue that the 4.8.2 RC1.
>>>
>>>
>>> Perhaps add in the Release notes this Spanish l10n issue.
>>>
>>> Thanks to the RM.
>>>
>>> Milamber
>>>
>>>
>>>
>>> On 10/12/2016 03:11, Rohit 

[GitHub] cloudstack issue #1726: CLOUDSTACK-9560 Root volume of deleted VM left unrem...

2016-12-15 Thread yvsubhash
Github user yvsubhash commented on the issue:

https://github.com/apache/cloudstack/pull/1726
  
@ustcweizhou Volume snapshots would be left over even in the case of a normal 
VM destroy, and that is expected. They can be used if there is a need to 
restore the volume at a later point in time. The snapshots are visible under 
Storage -> Snapshots, so I don't see a need to clean them up.




[GitHub] cloudstack issue #1829: CLOUDSTACK-9363: Fix HVM VM restart bug in XenServer

2016-12-15 Thread koushik-das
Github user koushik-das commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
@syed Please check #672. There was a discussion some time back on dev@ where 
this PR was mentioned. I feel that fix is slightly better than removing the 
check altogether.




Re: [VOTE] Apache Cloudstack 4.9.1.0 (RC1)

2016-12-15 Thread Boris Stoyanov
Hello, 

It turned out it’s an issue with the way the upgrade was performed: I had run 
cloudstack-setup-databases on top of the 4.9 code base, which created the 4.9 
schema, and then when I imported the backups at the DB level they didn’t 
overwrite the 4.9 changes, which were still there with a DB version of 4.5.2.2. 
That’s how the upgrade got messed up. I’ve just verified it doesn’t happen if I 
clean up the DBs before importing the cloud db.

I’ll proceed with rest of the upgrade paths now and will keep you posted.

Thanks,
Boris Stoyanov


boris.stoya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 

> On Dec 15, 2016, at 10:54 AM, Milamber  wrote:
> 
> Hi Rohit,
> 
> Thanks for the good way to fix this issue.
> 
> I've tested too the upgrade from 4.9.1.0 RC1 to 4.10.0.0 snapshot with 
> success now.
> 
> Milamber
> 
> On 15/12/2016 06:05, Rohit Yadav wrote:
>> Hi Bruno,
>> 
>> 
>> I checked the issue, the PR #1615 was merged only on master. Since, we've 
>> not started release work for 4.10/master yet, I'll move the changes to 
>> appropriate sql schema. This is not a blocker for either 4.8.2.0 or 4.9.1.0 
>> as the changes/feature don't exist on them yet.
>> 
>> Thanks again for sharing this, I'll fix them today.
>> 
>> 
>> All, since there is no blocker please continue with your tests and voting.
>> 
>> 
>> Regards.
>> 
>> 
>> From: Milamber 
>> Sent: 14 December 2016 00:32:19
>> To: dev@cloudstack.apache.org
>> Subject: Re: [VOTE] Apache Cloudstack 4.9.1.0 (RC1)
>> 
>> 
>> Hello,
>> 
>> I'm not sure, but perhaps we have an 'indirect' blocker for 4.9.1.0 RC1.
>> 
>> Currently the upgrade from 4.9.1.0 RC1 to 4.10.0.0 SNAPSHOT don't works
>> because the schema-481to490.sql in the 4.9.1.0 / 4.9 doesn't contains
>> the commit of CLOUDSTACK-9438 (2e77496601ab5420723ce8b955b3960faaba7d5c).
>> (this commit is currently in master)
>> 
>> When you try to make the upgrade you have this error: "Unknown column
>> 'image_store_details.display' in 'field list'"
>> see:  https://issues.apache.org/jira/browse/CLOUDSTACK-9671
>> 
>> Currently there have 2 diff between the schema-481to490.sql in 4.9.1.0
>> RC1 (and 4.9 branch) and master branch.
>> 
>> What is your opinion: blocker or not?
>> 
>> Just copy the 2 sql request inside the schema-4910to41000.sql file (but
>> the commit 2e77496601ab5420723ce8b955b3960faaba7d5c contains other
>> modified files)?
>> 
>> Milamber
>> 
>> 
>> 
>> $ diff ./setup/db/db/schema-481to490.sql /tmp/MASTER-schema-481to490.sql
>> 547a548,552
>>  >
>>  > ALTER TABLE `cloud`.`image_store_details` CHANGE COLUMN `value`
>> `value` VARCHAR(255) NULL DEFAULT NULL COMMENT 'value of the detail',
>> ADD COLUMN `display` tinyint(1) NOT
>>  > NULL DEFAULT '1' COMMENT 'True if the detail can be displayed to the
>> end user' AFTER `value`;
>>  >
>>  > ALTER TABLE `snapshots` ADD COLUMN `location_type` VARCHAR(32)
>> COMMENT 'Location of snapshot (ex. Primary)';
>> 
>> 
>> 
>> 
>> 
>> On 12/12/2016 21:36, Milamber wrote:
>>> Hello,
>>> 
>>> My vote +1 (binding)
>>> 
>>> Tests are passed on a virtual topology of servers  (CS over CS)
>>> (1mgr+2nodes+1nfs) :
>>> 
>>> 1/ Fresh install of 4.9.1.0 RC1 (adv net) on Ubuntu 14.04.5 + KVM +
>>> NFS : OK
>>> Some standard tests with success (create vm, migration, HA, create
>>> networks, create user, create ssh key, destroy vm, register template,
>>> create snapshot, restore snapshot, create template, ip association, ip
>>> release, static nat, firewall rule)
>>> Some tests with cloudstack ansible module with success too (create
>>> network, register templates, create vm, ip, firewall rule)
>>> 
>>> 2/ Test upgrade from 4.8.1 to 4.9.1.0 RC1 : OK
>>> 
>>> 3/ Test upgrade from 4.8.2 RC1 to 4.9.1.0 RC1 : don't works (expected)
>>> 
>>> 4/ Tests of all localization Web UI for 4.9.1.0 RC1:
>>> Localization works well except Spanish (not a blocker to release): the
>>> Web UI display partially the localization strings due of one bad
>>> carriage return in the label
>>> message.installWizard.copy.whatIsCloudStack string (from Transifex).
>>> This is the same issue that the 4.8.2 RC1.
>>> 
>>> 
>>> Perhaps add in the Release notes this Spanish l10n issue.
>>> 
>>> Thanks to the RM.
>>> 
>>> Milamber
>>> 
>>> 
>>> 
>>> On 10/12/2016 03:11, Rohit Yadav wrote:
 Hi All,
 
 I've created a 4.9.1.0 release, with the following artifacts up for a
 vote:
 
 Git Branch and Commit SH:
 https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.9.1.0-RC20161210T0838
 
 Commit: af2679959b634d095b93b8265c6da294d360065d
 
 List of changes:
 https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES;hb=4.9.1.0-RC20161210T0838
 
 
 Source release (checksums and signatures are available at the same
 location):
 

Re: [VOTE] Apache Cloudstack 4.9.1.0 (RC1)

2016-12-15 Thread Milamber

Hi Rohit,

Thanks for the good way to fix this issue.

I've now also tested the upgrade from 4.9.1.0 RC1 to the 4.10.0.0 snapshot, 
with success.


Milamber

On 15/12/2016 06:05, Rohit Yadav wrote:

Hi Bruno,


I checked the issue, the PR #1615 was merged only on master. Since, we've not 
started release work for 4.10/master yet, I'll move the changes to appropriate 
sql schema. This is not a blocker for either 4.8.2.0 or 4.9.1.0 as the 
changes/feature don't exist on them yet.

Thanks again for sharing this, I'll fix them today.


All, since there is no blocker please continue with your tests and voting.


Regards.


From: Milamber 
Sent: 14 December 2016 00:32:19
To: dev@cloudstack.apache.org
Subject: Re: [VOTE] Apache Cloudstack 4.9.1.0 (RC1)


Hello,

I'm not sure, but perhaps we have an 'indirect' blocker for 4.9.1.0 RC1.

Currently the upgrade from 4.9.1.0 RC1 to 4.10.0.0 SNAPSHOT don't works
because the schema-481to490.sql in the 4.9.1.0 / 4.9 doesn't contains
the commit of CLOUDSTACK-9438 (2e77496601ab5420723ce8b955b3960faaba7d5c).
(this commit is currently in master)

When you try to make the upgrade you have this error: "Unknown column
'image_store_details.display' in 'field list'"
see:  https://issues.apache.org/jira/browse/CLOUDSTACK-9671

Currently there have 2 diff between the schema-481to490.sql in 4.9.1.0
RC1 (and 4.9 branch) and master branch.

What is your opinion: blocker or not?

Just copy the 2 sql request inside the schema-4910to41000.sql file (but
the commit 2e77496601ab5420723ce8b955b3960faaba7d5c contains other
modified files)?

Milamber



$ diff ./setup/db/db/schema-481to490.sql /tmp/MASTER-schema-481to490.sql
547a548,552
  >
  > ALTER TABLE `cloud`.`image_store_details` CHANGE COLUMN `value`
`value` VARCHAR(255) NULL DEFAULT NULL COMMENT 'value of the detail',
ADD COLUMN `display` tinyint(1) NOT
  > NULL DEFAULT '1' COMMENT 'True if the detail can be displayed to the
end user' AFTER `value`;
  >
  > ALTER TABLE `snapshots` ADD COLUMN `location_type` VARCHAR(32)
COMMENT 'Location of snapshot (ex. Primary)';





On 12/12/2016 21:36, Milamber wrote:

Hello,

My vote +1 (binding)

Tests are passed on a virtual topology of servers  (CS over CS)
(1mgr+2nodes+1nfs) :

1/ Fresh install of 4.9.1.0 RC1 (adv net) on Ubuntu 14.04.5 + KVM +
NFS : OK
Some standard tests with success (create vm, migration, HA, create
networks, create user, create ssh key, destroy vm, register template,
create snapshot, restore snapshot, create template, ip association, ip
release, static nat, firewall rule)
Some tests with cloudstack ansible module with success too (create
network, register templates, create vm, ip, firewall rule)

2/ Test upgrade from 4.8.1 to 4.9.1.0 RC1 : OK

3/ Test upgrade from 4.8.2 RC1 to 4.9.1.0 RC1 : don't works (expected)

4/ Tests of all localization Web UI for 4.9.1.0 RC1:
Localization works well except Spanish (not a blocker to release): the
Web UI display partially the localization strings due of one bad
carriage return in the label
message.installWizard.copy.whatIsCloudStack string (from Transifex).
This is the same issue that the 4.8.2 RC1.


Perhaps add in the Release notes this Spanish l10n issue.

Thanks to the RM.

Milamber



On 10/12/2016 03:11, Rohit Yadav wrote:

Hi All,

I've created a 4.9.1.0 release, with the following artifacts up for a
vote:

Git Branch and Commit SH:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.9.1.0-RC20161210T0838

Commit: af2679959b634d095b93b8265c6da294d360065d

List of changes:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES;hb=4.9.1.0-RC20161210T0838


Source release (checksums and signatures are available at the same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.9.1.0/

PGP release keys (signed using 0EE3D884):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

Vote will be open for 120 hours, considering the process started
during the
weekends, and will end on 14 Dec 2016 end of the day.

For sanity in tallying the vote, can PMC members please be sure to
indicate
"(binding)" with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

Regards.





rohit.ya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue