[jira] [Commented] (CLOUDSTACK-9748) VPN Users search functionality broken

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15877660#comment-15877660
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9748:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1957
  
Tested. LGTM.

BTW, you can use "git push --force" to overwrite the code.


> VPN Users search functionality broken
> -
>
> Key: CLOUDSTACK-9748
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9748
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Ashadeepa Debnath
>
> VPN Users search functionality broken
> If you try to search VPN users by user name, the search returns no 
> results.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9786) API reference guide entry for associateIpAddress needs a fix

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15877624#comment-15877624
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9786:


GitHub user Ashadeepa opened a pull request:

https://github.com/apache/cloudstack/pull/1959

CLOUDSTACK-9786: API reference guide entry for associateIpAddress needs 
additional information

Going through the code and implementation, it appears that no single 
parameter is strictly required when calling the associateIpAddress API.
There are 3 cases for which this API works: 1) networkId, 2) vpcId, 3) 
zoneId. Any one of these can be provided to achieve the same functionality. If 
none of them is provided, an error message is shown.
E.g.
[root@CCP ~]# curl -s 
'http://10.66.43.37:8096/client/api?command=associateIpAddress&listall=true' | 
xmllint --format - -o
<associateipaddressresponse>
  <errorcode>431</errorcode>
  <cserrorcode>4350</cserrorcode>
  <errortext>Unable to figure out zone to assign ip to. Please specify either 
zoneId, or networkId, or vpcId in the call</errortext>
</associateipaddressresponse>
Modify the API reference guide entry to include this detail in the "description".

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Accelerite/cloudstack CLOUDSTACK-9786

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1959.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1959


commit 030d34dca89621965afa2043a78a165a21adc26e
Author: Ashadeepa Debnath 
Date:   2017-02-21T11:29:02Z

CLOUDSTACK-9786:API reference guide entry for associateIpAddress needs a fix




> API reference guide entry for associateIpAddress needs a fix
> 
>
> Key: CLOUDSTACK-9786
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9786
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Ashadeepa Debnath
>
> Going through the code and implementation, it appears that no single 
> parameter is strictly required when calling the associateIpAddress API.
> There are 3 cases for which this API works: 1) networkId, 2) vpcId, 3) zoneId. 
> Any one of these can be provided to achieve the same functionality. If none 
> of them is provided, an error message is shown.
> E.g.
> [root@CCP ~]# curl -s 
> 'http://10.66.43.37:8096/client/api?command=associateIpAddress&listall=true' 
> | xmllint --format - -o
> <associateipaddressresponse>
>   <errorcode>431</errorcode>
>   <cserrorcode>4350</cserrorcode>
>   <errortext>Unable to figure out zone to assign ip to. Please specify either 
> zoneId, or networkId, or vpcId in the call</errortext>
> </associateipaddressresponse>
> Modify the API reference guide entry to include this detail in the "description".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-5806) Storage types other than NFS/VMFS can't overprovision

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15877600#comment-15877600
 ] 

ASF GitHub Bot commented on CLOUDSTACK-5806:


GitHub user abhinandanprateek opened a pull request:

https://github.com/apache/cloudstack/pull/1958

CLOUDSTACK-5806: add presetup to storage types that support overprovisioning

Ideally this should be configurable via global settings

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack CLOUDSTACK-5806

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1958.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1958


commit 6ad3429085abf2943ff3183288b7f2e7e0165963
Author: Abhinandan Prateek 
Date:   2017-02-22T06:48:35Z

CLOUDSTACK-5806: add presetup to storage types that support over 
provisioning
Ideally this should be configurable via global settings




> Storage types other than NFS/VMFS can't overprovision
> -
>
> Key: CLOUDSTACK-5806
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5806
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0, 4.3.0, Future
>Reporter: Marcus Sorensen
>Assignee: edison su
>Priority: Critical
> Fix For: 4.4.0
>
>
> Edison, Mike, or myself can probably fix this. The mgmt server hardcodes the 
> storage types that can overprovision. Need to fix this.
> Edison suggests:
> We can move it to the storage driver's capabilities method.
> Each storage driver can report its capabilities in 
> DataStoreDriver.getCapabilities(), which returns a Map<String, String>; we 
> can change the signature to Map<String, Object>.
> In CloudStackPrimaryDataStoreDriverImpl (the default storage driver), 
> getCapabilities() could return something like:
> {noformat}
> StorageOverProvision comparator = new StorageOverProvision() {
>     public boolean isOverProvisionSupported(DataStore store) {
>         PrimaryDataStoreInfo storagePool = (PrimaryDataStoreInfo) store;
>         return storagePool.getPoolType() == NFS || storagePool.getPoolType() == VMFS;
>     }
> };
> Map<String, Object> caps = new HashMap<String, Object>();
> caps.put("storageOverProvision", comparator);
> return caps;
> {noformat}
> Whenever other places in the mgmt server want to check the overprovision 
> capability, we can do the following:
> {noformat}
> DataStore primaryStore = dataStoreManager.getPrimaryDataStore(primaryStoreId);
> Map<String, Object> caps = primaryStore.getDriver().getCapabilities();
> StorageOverProvision overprovision = (StorageOverProvision) caps.get("storageOverProvision");
> boolean result = overprovision.isOverProvisionSupported(primaryStore);
> {noformat}
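
A minimal, self-contained sketch of the capability-map pattern described above 
(the type names here are illustrative stand-ins, not the actual CloudStack API):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-ins for the CloudStack types mentioned in the suggestion.
enum StoragePoolType { NetworkFilesystem, VMFS, RBD }

interface DataStore {
    StoragePoolType getPoolType();
}

// Hypothetical capability interface: a driver advertises whether its pools
// support overprovisioning instead of the mgmt server hardcoding pool types.
interface StorageOverProvision {
    boolean isOverProvisionSupported(DataStore store);
}

class DefaultStorageDriver {
    // The driver reports its capabilities as a name -> Object map.
    public Map<String, Object> getCapabilities() {
        StorageOverProvision cap = store ->
                store.getPoolType() == StoragePoolType.NetworkFilesystem
                        || store.getPoolType() == StoragePoolType.VMFS;
        Map<String, Object> caps = new HashMap<>();
        caps.put("storageOverProvision", cap);
        return caps;
    }
}
```

Callers then look up the capability (`caps.get("storageOverProvision")`) and 
cast it, rather than switching on the pool type themselves.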



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15877585#comment-15877585
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
Trillian test result (tid-876)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 35807 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1935-t876-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_network.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 864.13 | 
test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 320.45 | 
test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.04 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 160.52 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 61.11 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 250.72 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 287.25 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 545.04 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 512.25 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1414.74 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 548.99 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1297.58 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.41 | test_volumes.py
test_08_resize_volume | Success | 156.44 | test_volumes.py
test_07_resize_fail | Success | 156.52 | test_volumes.py
test_06_download_detached_volume | Success | 156.34 | test_volumes.py
test_05_detach_volume | Success | 155.91 | test_volumes.py
test_04_delete_attached_volume | Success | 151.44 | test_volumes.py
test_03_download_attached_volume | Success | 151.32 | test_volumes.py
test_02_attach_volume | Success | 95.17 | test_volumes.py
test_01_create_volume | Success | 711.28 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.17 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.78 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 163.76 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 247.75 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.04 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.64 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.25 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.94 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.84 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.87 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.33 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 40.46 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.16 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 40.43 | test_templates.py
test_10_destroy_cpvm | Success | 166.69 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.56 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.57 | test_ssvm.py
test_07_reboot_ssvm | Success | 163.59 | test_ssvm.py
test_06_stop_cpvm | Success | 132.19 | test_ssvm.py
test_05_stop_ssvm | Success | 164.02 | test_ssvm.py
test_04_cpvm_internals | Success | 1.22 | test_ssvm.py
test_03_ssvm_internals | Success | 3.42 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.11 | test_snapshots.py
test_04_change_offering_small | Success | 210.27 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test

[jira] [Commented] (CLOUDSTACK-9748) VPN Users search functionality broken

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15877515#comment-15877515
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9748:


Github user Ashadeepa closed the pull request at:

https://github.com/apache/cloudstack/pull/1910


> VPN Users search functionality broken
> -
>
> Key: CLOUDSTACK-9748
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9748
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Ashadeepa Debnath
>
> VPN Users search functionality broken
> If you try to search VPN users by user name, the search returns no 
> results.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9748) VPN Users search functionality broken

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15877517#comment-15877517
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9748:


Github user Ashadeepa commented on the issue:

https://github.com/apache/cloudstack/pull/1957
  
@rafaelweingartner : This is due to the change in my remote URLs. Closing 
the old PR https://github.com/apache/cloudstack/issues/1910.


> VPN Users search functionality broken
> -
>
> Key: CLOUDSTACK-9748
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9748
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Ashadeepa Debnath
>
> VPN Users search functionality broken
> If you try to search VPN users by user name, the search returns no 
> results.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15877481#comment-15877481
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@rafaelweingartner I think I got your point. I tried to keep the code as 
similar as it was before by declaring `rollBackState` as a static class variable 
on line 114. This way the inner `finally` block works the same as before when 
one of the new methods sets `rollBackState = true`. In the outer `finally` block, 
`rollBackState` is set to false (line 345), so each time `deleteDomain` 
is invoked it starts as false (maybe it would be easier to move that to the 
beginning of `deleteDomain`). Do you agree with this approach?
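
A simplified sketch of the control flow being discussed, assuming a static 
flag as in the PR; all names are illustrative, not the actual 
`DomainManagerImpl` code:

```java
// Sketch only: a static rollback flag shared by the extracted methods,
// reset in the outer finally so the next deleteDomain call starts clean.
class DomainCleanupSketch {
    private static boolean rollBackState = false;

    public boolean deleteDomain(long domainId) {
        boolean success = false;
        try {
            try {
                markDomainInactive(domainId); // may set rollBackState = true
                cleanupAccounts(domainId);    // may set rollBackState = true
                success = true;
            } finally {
                if (rollBackState) {
                    rollBack(domainId);       // undo the Inactive mark
                }
            }
        } finally {
            rollBackState = false;            // reset for the next invocation
        }
        return success;
    }

    private void markDomainInactive(long id) { /* ... */ }
    private void cleanupAccounts(long id) { /* ... */ }
    private void rollBack(long id) { /* ... */ }
}
```

Note that a static flag is shared across threads, so concurrent `deleteDomain` 
calls would see each other's state.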


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that the {{deleteDomain}} task failed 
> for domains with multiple accounts and resources. Examining the logs, it was 
> found that a failure occurs if the Account Cleanup Task executes after the 
> domain (and all of its children) is marked as Inactive but before the delete 
> domain task finishes.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks the domain to delete (and its children) as Inactive 
> before deleting them, {{AccountCleanupTask}} removes the marked domains when 
> it executes. When there are still resources to clean up on the domain's 
> accounts, the domain is not found, throwing the exception: 
> {{com.cloud.exception.InvalidParameterValueException: Please specify a valid 
> domain ID}}
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain&id=1910a3dc-6fa6-457b-ab3a-602b0cfb6686&cleanup=true&response=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke

[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15877460#comment-15877460
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
Trillian test result (tid-877)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 26001 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1953-t877-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 303.85 | 
test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.03 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 134.10 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 55.78 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 210.11 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 242.58 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 466.50 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 479.09 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1366.03 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 521.09 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 728.22 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1246.26 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 156.10 | test_volumes.py
test_08_resize_volume | Success | 156.06 | test_volumes.py
test_07_resize_fail | Success | 156.10 | test_volumes.py
test_06_download_detached_volume | Success | 155.99 | test_volumes.py
test_05_detach_volume | Success | 150.56 | test_volumes.py
test_04_delete_attached_volume | Success | 145.93 | test_volumes.py
test_03_download_attached_volume | Success | 155.98 | test_volumes.py
test_02_attach_volume | Success | 83.88 | test_volumes.py
test_01_create_volume | Success | 620.53 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.19 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.62 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 128.64 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 252.01 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.37 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.17 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 35.63 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.08 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 130.68 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.64 | test_vm_life_cycle.py
test_02_start_vm | Success | 5.10 | test_vm_life_cycle.py
test_01_stop_vm | Success | 35.23 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 40.36 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.04 | test_templates.py
test_04_extract_template | Success | 5.10 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.07 | test_templates.py
test_01_create_template | Success | 25.25 | test_templates.py
test_10_destroy_cpvm | Success | 161.28 | test_ssvm.py
test_09_destroy_ssvm | Success | 133.21 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.19 | test_ssvm.py
test_07_reboot_ssvm | Success | 102.98 | test_ssvm.py
test_06_stop_cpvm | Success | 131.35 | test_ssvm.py
test_05_stop_ssvm | Success | 133.05 | test_ssvm.py
test_04_cpvm_internals | Success | 0.98 | test_ssvm.py
test_03_ssvm_internals | Success | 2.85 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.10 | test_ssvm.py
test_01_snapshot_root_disk | Success | 10.83 | test_snapshots.py
test_04_change_offering_small | Success | 204.33 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.04 | test_service_offerings.py
test_01_create_service_offering | Success | 0.08 | test_service_offerings.

[jira] [Commented] (CLOUDSTACK-9709) DHCP/DNS offload: Use correct thread pool for IP fetch task

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15877445#comment-15877445
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9709:


Github user jayapalu commented on the issue:

https://github.com/apache/cloudstack/pull/1873
  
There are no failed test cases in the CI run.


> DHCP/DNS offload: Use correct thread pool for IP fetch task
> ---
>
> Key: CLOUDSTACK-9709
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9709
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
> Fix For: 4.10.0.0
>
>
> Currently the IP fetch task uses the same thread pool as VM expunge, which can 
> lead to confusion.
> The IP fetch task also references another thread pool (the variable 
> {{_vmIpFetchThreadExecutor}}) which is never initialized. 
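
A hedged sketch of the fix implied here (initialize a dedicated executor for 
IP fetch work instead of reusing the VM-expunge pool); the names and pool size 
are illustrative:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch only: give the IP fetch task its own named thread pool.
class VmIpFetchSketch {
    private ScheduledExecutorService vmIpFetchExecutor;

    void configure() {
        // Without this initialization, scheduling tasks would fail with an NPE.
        vmIpFetchExecutor = Executors.newScheduledThreadPool(5);
    }

    void scheduleIpFetch(Runnable fetchTask) {
        vmIpFetchExecutor.schedule(fetchTask, 30, TimeUnit.SECONDS);
    }
}
```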



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9698) Make the wait timeout for NIC adapter hotplug as configurable

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15877011#comment-15877011
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9698:


Github user sateesh-chodapuneedi commented on the issue:

https://github.com/apache/cloudstack/pull/1861
  
ping @karuturi @koushik-das 


> Make the wait timeout for NIC adapter hotplug as configurable
> -
>
> Key: CLOUDSTACK-9698
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9698
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.9.0.1
> Environment: ACS 4.9 branch commit 
> a0e36b73aebe43bfe6bec3ef8f53e8cb99ecbc32
> vSphere 5.5
>Reporter: Sateesh Chodapuneedi
>Assignee: Sateesh Chodapuneedi
> Fix For: 4.9.1.0
>
>
> Currently ACS waits 15 seconds (*hard coded*) for a hot-plugged NIC in the VR 
> to be detected by the guest OS. The time taken to detect a hot-plugged NIC 
> depends on the type of NIC adapter (E1000, VMXNET3, E1000e, etc.) 
> and the guest OS itself. In uncommon scenarios NIC detection may take longer 
> than 15 seconds; in such cases the NIC hotplug is treated as a failure, 
> which results in VPC tier configuration failure. Making the wait timeout for 
> NIC adapter hotplug configurable will be helpful for admins in such 
> scenarios. 
> Also, if VMware introduces new NIC adapter types in the future that take 
> longer to be detected by the guest OS, it is good to have the flexibility of 
> configuring the wait timeout.
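
A minimal sketch of what a configurable wait could look like, assuming the 
timeout is read from a global setting (the setting source and polling loop are 
hypothetical, not the actual patch):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Sketch only: the timeout comes from configuration instead of a literal 15.
class NicHotplugWaiter {
    private final long timeoutSeconds; // e.g. read from a global setting

    NicHotplugWaiter(long timeoutSeconds) {
        this.timeoutSeconds = timeoutSeconds; // previously fixed at 15
    }

    boolean waitForNic(BooleanSupplier nicDetected) throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(timeoutSeconds);
        while (System.nanoTime() - deadline < 0) {
            if (nicDetected.getAsBoolean()) {
                return true; // guest OS has detected the hot-plugged NIC
            }
            TimeUnit.SECONDS.sleep(1); // poll until detected or timed out
        }
        return false; // caller treats this as a hotplug failure
    }
}
```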



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876950#comment-15876950
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102342529
  
--- Diff: 
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java 
---
@@ -584,18 +584,36 @@ public void defFileBasedDisk(String filePath, String 
diskLabel, DiskBus bus, Dis
 
 /* skip iso label */
 private String getDevLabel(int devId, DiskBus bus) {
--- End diff --

It would be great to have unit tests for either `getDevLabel(int devId, 
DiskBus bus)` or `getDevLabelSuffix(int deviceIndex)`, especially to test the 
expected results when `devId` (or `deviceIndex`) is high enough to produce 
a double-letter device label suffix.
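
For illustration, a self-contained sketch of such a test, assuming the suffix 
follows bijective base-26 naming (0 -> "a", 25 -> "z", 26 -> "aa"); this is an 
assumption about the intended behavior, not the PR's actual code:

```java
// Hypothetical suffix computation plus the boundary checks being requested.
class DevLabelSuffixTest {
    static String getDevLabelSuffix(int deviceIndex) {
        StringBuilder sb = new StringBuilder();
        do {
            sb.insert(0, (char) ('a' + deviceIndex % 26));
            deviceIndex = deviceIndex / 26 - 1;
        } while (deviceIndex >= 0);
        return sb.toString();
    }

    static void check(int index, String expected) {
        String actual = getDevLabelSuffix(index);
        if (!actual.equals(expected)) {
            throw new AssertionError(index + ": " + actual + " != " + expected);
        }
    }

    public static void main(String[] args) {
        check(0, "a");
        check(25, "z");
        check(26, "aa"); // the double-letter boundary mentioned above
        check(27, "ab");
        System.out.println("all checks passed");
    }
}
```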


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted. This failed with the 
> below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876885#comment-15876885
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102336557
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+List vols = _volsDao.findByInstance(vm.getId());
 if (deviceId != null) {
-if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-throw new RuntimeException("deviceId should be 1,2,4-15");
+if (deviceId.longValue() > maxDataVolumesSupported || 
deviceId.longValue() == 3) {
+throw new RuntimeException("deviceId should be 1,2,4-" + 
maxDataVolumesSupported);
 }
 for (VolumeVO vol : vols) {
 if (vol.getDeviceId().equals(deviceId)) {
-throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vmId);
+throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vm.getId());
 }
 }
 } else {
 // allocate deviceId here
 List devIds = new ArrayList();
-for (int i = 1; i < 15; i++) {
+for (int i = 1; i < maxDataVolumesSupported; i++) {
--- End diff --

@sureshanaparti If the condition really should be `i < 
maxDataVolumesSupported` (which would make the maximum device id returned by 
the method be `maxDataVolumesSupported - 1`), then the check and error message 
above
```
if (deviceId.longValue() <= 0 || deviceId.longValue() > 
maxDataVolumesSupported || deviceId.longValue() == 3) {
throw new RuntimeException("deviceId should be 1,2,4-" + 
maxDataVolumesSupported);
```
need to be changed so as not to include the value of 
`maxDataVolumesSupported` itself.
Otherwise, when `maxDataVolumesSupported` has the value `6` (for example), the 
method would never return `6` when parameter `deviceId` is `null`, but would 
return `6` when `deviceId` is specified as `6` (assuming device id `6` is not 
already in use). Also, the error message would state "deviceId should be 
1,2,4-6" whenever `deviceId` is specified with an invalid value, which would 
not be correct (as `5` should be the highest valid device id).
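
A sketch of the consistent bounds check this comment implies, keeping the loop 
condition `i < maxDataVolumesSupported` (names mirror the diff, but this is an 
illustration, not the actual patch):

```java
// Sketch only: the auto-allocation loop hands out ids 1..maxDataVolumesSupported-1,
// so the explicit-id path must reject maxDataVolumesSupported itself (>=, not >)
// and report the true upper bound in the error message.
class DeviceIdCheckSketch {
    static void validateDeviceId(long deviceId, int maxDataVolumesSupported) {
        if (deviceId <= 0 || deviceId >= maxDataVolumesSupported || deviceId == 3) {
            throw new RuntimeException("deviceId should be 1,2,4-"
                    + (maxDataVolumesSupported - 1));
        }
    }
}
```

With `maxDataVolumesSupported = 6`, both paths then agree that `5` is the 
highest valid device id.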


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted. This failed with the 
> below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodIn

[jira] [Assigned] (CLOUDSTACK-8608) Fix unpleasant admin experience with VMware fresh installs/upgrades - System VM's failed to start due to permissions issue

2017-02-21 Thread Suresh Kumar Anaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Kumar Anaparti reassigned CLOUDSTACK-8608:
-

Assignee: Suresh Kumar Anaparti  (was: Likitha Shetty)

> Fix unpleasant admin experience with VMware fresh installs/upgrades - System 
> VM's failed to start due to permissions issue
> --
>
> Key: CLOUDSTACK-8608
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8608
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Likitha Shetty
>Assignee: Suresh Kumar Anaparti
> Fix For: Future
>
>
> VMware uses a folder on the machine where the management server is running to 
> mount secondary storage. This is a bootstrap phase to start the system VMs 
> because, unlike KVM and XenServer, the management server cannot directly 
> access a VMware ESXi host to download the systemvm template from secondary 
> storage to primary storage. 
> Secondary storage is usually managed by the SSVM, which uses the root account 
> to download templates. However, the management server uses the account 
> 'cloud' to manipulate templates after secondary storage is mounted. After the 
> admin registers a new systemvm template in CS as a normal upgrade procedure, 
> the old SSVM downloads the template using the root account, but the 
> management server will create the new SSVM from the new template using the 
> account 'cloud'. A permission denied error is then raised.
> Prior to 4.4, CS handled this by running 'chmod -R' on the folder to which 
> secondary storage is mounted every time the management server mounted 
> secondary storage. Unfortunately, this method is slow because it gives 
> permissions to the entire folder. So in 4.4 we stopped automatically 
> providing the permissions and asked the admin to manually run 'chmod -R' on 
> the 'templates' folder on secondary storage after registering a new systemvm 
> template.
> We can avoid this manual admin step by only providing permissions for the 
> /templates folder instead of the entire mount. This way we avoid the 
> snapshots folder, which can be very large in upgraded setups.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8608) Fix unpleasant admin experience with VMware fresh installs/upgrades - System VM's failed to start due to permissions issue

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876865#comment-15876865
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8608:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1875
  
@sateesh-chodapuneedi @rhtyd  Please review the changes.


> Fix unpleasant admin experience with VMware fresh installs/upgrades - System 
> VM's failed to start due to permissions issue
> --
>
> Key: CLOUDSTACK-8608
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8608
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Likitha Shetty
>Assignee: Likitha Shetty
> Fix For: Future
>
>
> VMware uses a folder on the machine where the management server is running to 
> mount secondary storage. This is a bootstrap phase to start the system VMs 
> because, unlike KVM and XenServer, the management server cannot directly 
> access a VMware ESXi host to download the systemvm template from secondary 
> storage to primary storage. 
> Secondary storage is usually managed by the SSVM, which uses the root account 
> to download templates. However, the management server uses the account 
> 'cloud' to manipulate templates after secondary storage is mounted. After the 
> admin registers a new systemvm template in CS as a normal upgrade procedure, 
> the old SSVM downloads the template using the root account, but the 
> management server will create the new SSVM from the new template using the 
> account 'cloud'. A permission denied error is then raised.
> Prior to 4.4, CS handled this by running 'chmod -R' on the folder to which 
> secondary storage is mounted every time the management server mounted 
> secondary storage. Unfortunately, this method is slow because it gives 
> permissions to the entire folder. So in 4.4 we stopped automatically 
> providing the permissions and asked the admin to manually run 'chmod -R' on 
> the 'templates' folder on secondary storage after registering a new systemvm 
> template.
> We can avoid this manual admin step by only providing permissions for the 
> /templates folder instead of the entire mount. This way we avoid the 
> snapshots folder, which can be very large in upgraded setups.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8608) Fix unpleasant admin experience with VMware fresh installs/upgrades - System VM's failed to start due to permissions issue

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876864#comment-15876864
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8608:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1875
  
@rhtyd Thanks for running the tests. The test failures/errors above occur 
in other PRs as well and are not related to the changes in this PR.


> Fix unpleasant admin experience with VMware fresh installs/upgrades - System 
> VM's failed to start due to permissions issue
> --
>
> Key: CLOUDSTACK-8608
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8608
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Likitha Shetty
>Assignee: Likitha Shetty
> Fix For: Future
>
>
> VMware uses a folder on the machine where the management server is running to 
> mount secondary storage. This is a bootstrap phase to start the system VMs 
> because, unlike KVM and XenServer, the management server cannot directly 
> access a VMware ESXi host to download the systemvm template from secondary 
> storage to primary storage. 
> Secondary storage is usually managed by the SSVM, which uses the root account 
> to download templates. However, the management server uses the account 
> 'cloud' to manipulate templates after secondary storage is mounted. After the 
> admin registers a new systemvm template in CS as a normal upgrade procedure, 
> the old SSVM downloads the template using the root account, but the 
> management server will create the new SSVM from the new template using the 
> account 'cloud'. A permission denied error is then raised.
> Prior to 4.4, CS handled this by running 'chmod -R' on the folder to which 
> secondary storage is mounted every time the management server mounted 
> secondary storage. Unfortunately, this method is slow because it gives 
> permissions to the entire folder. So in 4.4 we stopped automatically 
> providing the permissions and asked the admin to manually run 'chmod -R' on 
> the 'templates' folder on secondary storage after registering a new systemvm 
> template.
> We can avoid this manual admin step by only providing permissions for the 
> /templates folder instead of the entire mount. This way we avoid the 
> snapshots folder, which can be very large in upgraded setups.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9795) VRs used as VPC Routers have logrotate in cron.daily instead of cron.hourly

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876860#comment-15876860
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9795:


Github user dmabry commented on the issue:

https://github.com/apache/cloudstack/pull/1954
  
tag:mergeready



> VRs used as VPC Routers have logrotate in cron.daily instead of cron.hourly
> ---
>
> Key: CLOUDSTACK-9795
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9795
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VPC
>Affects Versions: 4.8.0, 4.9.0, 4.8.1.1, 4.9.0.1
> Environment: The VR, when deployed as a standalone router, has logrotate 
> moved from cron.daily to cron.hourly, but when used as a VPC router 
> (vpcrouter) the logrotate file is left in cron.daily.
>Reporter: David Mabry
>Priority: Minor
>  Labels: easyfix
>
> The VR, when deployed as a standalone router, has logrotate moved from 
> cron.daily to cron.hourly, but when used as a VPC router (vpcrouter) the 
> logrotate file is left in cron.daily.
> This causes anyone using the VR as a VPC router to have issues with /var/log 
> filling up, causing the VR to fail ungracefully. This prevents VMs from 
> spinning up in the VPC, new ACLs from being added, and new LB rules from 
> being created.
> The fix is to move logrotate from cron.daily to cron.hourly and configure 
> cloud.log to rotate and compress at 20MB. I will submit a PR against master 
> that moves logrotate; it is a minor change to cloud-early on the VRs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9717) [VMware] RVRs have mismatching MAC addresses for extra public NICs

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876852#comment-15876852
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9717:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1878
  
@remibergsma The same MAC for RVRs was re-introduced as part of 
[CLOUDSTACK-985](https://issues.apache.org/jira/browse/CLOUDSTACK-985). It 
confirms that peer NICs of RVRs should have the same MAC addresses. Only the 
default public NIC was configured with the same MAC; for VMware, there are 
additional public NICs that were not configured with the same MAC addresses.


> [VMware] RVRs have mismatching MAC addresses for extra public NICs
> --
>
> Key: CLOUDSTACK-9717
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9717
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> [CLOUDSTACK-985|https://issues.apache.org/jira/browse/CLOUDSTACK-985] doesn't 
> seem to be completely fixed.
> ISSUE
> ==
> If there are two public networks on two VLANs, and a pair of redundant VRs 
> acquires IPs from both, the associated NICs on the redundant VRs will have 
> mismatching MAC addresses.  
> The example below shows the eth2 NICs for the first public network 
> (210.140.168.0/21) have matching MAC addresses (06:c4:b6:00:03:df) as 
> expected, but the eth3 NICs for the second one (210.140.160.0/21) have 
> mismatching MACs (02:00:50:e1:6c:cd versus 02:00:5a:e6:6c:d5).
> *r-43584-VM (Master)*
> 6: eth2:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 02:00:50:e1:6c:cd brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> *r-43585-VM (Backup)*
> 6: eth2:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 02:00:5a:e6:6c:d5 brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> CloudStack should ensure that the NICs for all public networks have matching 
> MACs.
> REPRO STEPS
> ==
> 1) Set up redundant VR.
> 2) Set up multiple public networks on different VLANs.
> 3) Acquire IPs in the RVR network until the VRs get IPs in the different 
> public networks.
> 4) Confirm the mismatching MAC addresses.
> EXPECTED BEHAVIOR
> ==
> Redundant VRs have matching MACs for all public networks.
> ACTUAL BEHAVIOR
> ==
> Redundant VRs have matching MACs only for the first public network.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9746) system-vm: logrotate config causes critical failures

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876841#comment-15876841
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9746:


Github user dmabry commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
@serbaut Can you do a force push to kick off Jenkins again? I'm guessing 
Jenkins just had an issue, not the PR.


> system-vm: logrotate config causes critical failures
> 
>
> Key: CLOUDSTACK-9746
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9746
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: SystemVM
>Affects Versions: 4.8.0, 4.9.0
>Reporter: Joakim Sernbrant
>Priority: Critical
>
> CLOUDSTACK-6885 changed logrotate from time-based to size-based rotation. This 
> means each log can grow to twice its configured size (due to delaycompress).
> For example:
> 50M auth.log
> 50M auth.log.1
> 10M cloud.log
> 10M cloud.log.1
> 50M cron.log
> 50M cron.log.1
> 50M messages
> 50M messages.1
> ...
> Some files grow slowly, but eventually they reach their maximum size. 
> The total allowed log size with the current config is well beyond the size of 
> the log partition.
> Having a full log partition puts the VR in a state where operations on it 
> critically fail.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876803#comment-15876803
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@nvazquez great work.
However, there is a catch that I think you might have overlooked. 
This problem is caused by the method extraction I suggested.

If you take a look at the code before the extraction, every time an 
exception was thrown, the code set the variable `rollBackState = true`. 
This happens at lines 287, 305, and 313. Now that the code has been extracted, 
setting those variables to `true` no longer works, because the context in 
which those variables are declared has changed.

In my opinion, this code was kind of weird before. It was throwing an 
exception that is caught right away, setting a control variable to be 
acted on in the `finally` block. The only reason I see for this is to avoid 
executing the rollback for exceptions other than the ones generated at lines 
292, 310, and 325. However, this seems error prone, leading to database 
inconsistencies.

I would move the "rollback" code (lines 342-345) into the catch block.

I do not know if I have been clear; we can discuss this further. I may have 
overlooked some bits of it as well (it is a quite complicated piece of code).
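
A sketch of the alternative suggested in this comment (rollback in the catch 
block rather than via a flag checked in `finally`); all names are illustrative 
stand-ins, not the actual `DomainManagerImpl` code:

```java
// Sketch only: the undo logic sits next to the failure path it compensates for.
class DomainCleanupAlternativeSketch {
    public boolean deleteDomain(long domainId) {
        try {
            markDomainInactive(domainId);
            cleanupAccounts(domainId);
            return true;
        } catch (RuntimeException e) {
            rollBack(domainId); // runs only when a step above actually failed
            throw e;
        }
    }

    private void markDomainInactive(long id) { /* ... */ }
    private void cleanupAccounts(long id) { /* ... */ }
    private void rollBack(long id) { /* ... */ }
}
```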



> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that the {{deleteDomain}} task failed 
> for domains with multiple accounts and resources. Examining the logs, it was 
> found that a failure occurs if the Account Cleanup Task executes after the 
> domain (and all of its children) is marked as Inactive but before the delete 
> domain task finishes.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks the domain to delete (and its children) as Inactive 
> before deleting them, {{AccountCleanupTask}} removes the marked domains when 
> it executes. When there are still resources to clean up on the domain's 
> accounts, the domain is not found, throwing the exception: 
> {{com.cloud.exception.InvalidParameterValueException: Please specify a valid 
> domain ID}}
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain&id=1910a3dc-6fa6-457b-ab3a-602b0cfb6686&cleanup=true&response=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proce

[jira] [Commented] (CLOUDSTACK-9719) [VMware] VR loses DHCP settings and VMs cannot obtain IP after HA recovery

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876776#comment-15876776
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9719:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1879
  
@sateesh-chodapuneedi @rhtyd  Please review the code changes.


> [VMware] VR loses DHCP settings and VMs cannot obtain IP after HA recovery
> --
>
> Key: CLOUDSTACK-9719
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9719
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> After HA is triggered on VMware, some VMs fail to acquire a DHCP address 
> from the VR. These VMs are live migrated as part of vCenter HA to another 
> available host before the VR is, and cannot acquire a DHCP address because 
> the VR has not been migrated yet, so these VMs' requests fail to reach the VR.
> Resolving this requires manual intervention by the CloudStack administrator; 
> the router must be rebooted or the network restarted. This behavior is not 
> ideal and will prolong downtime caused by an HA event and there is no point 
> for the non-functional virtual router to even be running. CloudStack should 
> handle this situation by setting VR restart priority to high in the vCenter 
> when HA is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9719) [VMware] VR loses DHCP settings and VMs cannot obtain IP after HA recovery

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876774#comment-15876774
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9719:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1879
  
@rhtyd Thanks for running these tests. The failures/errors are not related 
to this PR's changes.


> [VMware] VR loses DHCP settings and VMs cannot obtain IP after HA recovery
> --
>
> Key: CLOUDSTACK-9719
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9719
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> After HA is triggered on VMware, some VMs fail to acquire a DHCP address 
> from the VR. These VMs are live migrated as part of vCenter HA to another 
> available host before the VR is, and cannot acquire a DHCP address because 
> the VR has not been migrated yet, so these VMs' requests fail to reach the VR.
> Resolving this requires manual intervention by the CloudStack administrator; 
> the router must be rebooted or the network restarted. This behavior is not 
> ideal and will prolong downtime caused by an HA event and there is no point 
> for the non-functional virtual router to even be running. CloudStack should 
> handle this situation by setting VR restart priority to high in the vCenter 
> when HA is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9733) Concurrent volume snapshots of a VM are not allowed and are not limited per host as per the global configuration parameter "concurrent.snapshots.threshold.perhost"

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876749#comment-15876749
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9733:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1897
  
@koushik-das @kishankavala  Please review the changes.


> Concurrent volume snapshots of a VM are not allowed and are not limited per 
> host as per the global configuration parameter 
> "concurrent.snapshots.threshold.perhost".
> 
>
> Key: CLOUDSTACK-9733
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9733
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Snapshot, Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> Pre-CloudStack 4.4.0, before the VM job framework changes (CLOUDSTACK-669), 
> concurrent volume (both root and data) snapshots were allowed per host based 
> on the value of the global config "concurrent.snapshots.threshold.perhost". 
> The volumes could belong to the same VM or be spread across multiple VMs on 
> a given host. The synchronisation was done based on the host id.
> As part of the VM job framework changes (CLOUDSTACK-669) in CloudStack 4.4.0, 
> a separate job queue was introduced for individual VMs with a concurrency 
> level of 1 (i.e. all operations on a given VM are serialized). Volume 
> snapshots were also treated as VM operations as part of these changes and 
> go through the VM job queue. These changes made the config 
> "concurrent.snapshots.threshold.perhost" obsolete (it was also no longer 
> being honoured, since there is no single point of enforcement).
> Only one volume snapshot of a VM is allowed at any given point in time, as 
> the sync object is the VM id. So concurrent volume snapshots of a VM are not 
> allowed and are not limited per host as per the global configuration 
> parameter "concurrent.snapshots.threshold.perhost".
> This functionality needs to be re-introduced to execute more than one 
> snapshot of a VM at a time (when the underlying hypervisor supports it), and 
> snapshots should be limited per host based on the value of 
> "concurrent.snapshots.threshold.perhost" at the cluster level (for more 
> flexibility).
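The kind of per-host limit the config describes can be pictured as a semaphore 
keyed by host id. Below is a minimal, self-contained sketch (not CloudStack's 
actual implementation; the class and method names are made up for this 
example) of capping concurrent snapshot jobs per host at the configured 
threshold:

{noformat}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Sketch: allow at most "threshold" concurrent snapshot jobs per host id,
// regardless of which VM owns the volume being snapshotted.
public class PerHostSnapshotThrottle {
    private final int threshold; // value of concurrent.snapshots.threshold.perhost
    private final Map<Long, Semaphore> perHost = new ConcurrentHashMap<>();

    public PerHostSnapshotThrottle(int threshold) {
        this.threshold = threshold;
    }

    public void runSnapshot(long hostId, Runnable snapshotJob) throws InterruptedException {
        Semaphore slots = perHost.computeIfAbsent(hostId, id -> new Semaphore(threshold));
        slots.acquire();       // blocks while the host is at its limit
        try {
            snapshotJob.run(); // the actual snapshot work
        } finally {
            slots.release();
        }
    }
}
{noformat}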



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9607) Preventing template deletion when template is in use.

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876737#comment-15876737
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9607:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1773
  
@jburwell Allowing deletion of the template even if active VMs exist has been 
the default behavior for a few years. Deletion of the template on secondary 
storage doesn't remove the template copy on primary storage, so all existing 
VM functions work just fine. 
From my perspective, if we allow forced deletion from the UI, I am fine with 
switching the default to forced=no and documenting it in the Release Notes. 


> Preventing template deletion when template is in use.
> -
>
> Key: CLOUDSTACK-9607
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9607
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
>
> Consider this scenario:
> 1. User launches a VM from a template and keeps it running
> 2. Admin logs in and deletes that template [CloudPlatform does not check for 
> existing / running VMs etc. while the deletion is done]
> 3. User resets the VM
> 4. CloudPlatform fails to start the VM as it cannot find the corresponding 
> template.
> It throws an error such as: 
> java.lang.RuntimeException: Job failed due to exception Resource [Host:11] is 
> unreachable: Host 11: Unable to start instance due to can't find ready 
> template: 209 for data center 1
> at 
> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:113)
> at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:495)
> The client is requesting better handling of this scenario. We need to check 
> for existing / running VMs when the template is deleted and warn the admin 
> about the possible issues that may occur.
> REPRO STEPS
> ==
> 1. Launch a VM from a template and keep it running
> 2. Now delete that template 
> 3. Reset the VM
> 4. CloudPlatform fails to start the VM as it cannot find the corresponding 
> template.
> EXPECTED BEHAVIOR
> ==
> The cloud platform should throw a warning message when a template that is 
> still being used by existing / running VMs is deleted
> ACTUAL BEHAVIOR
> ==
> The cloud platform does not throw any warning.
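A guard along the requested lines could be sketched as below; the DAO call and 
the forced flag are illustrative names for this example, not CloudStack's 
actual API:

{noformat}
// Hypothetical pre-delete check (illustrative names): refuse deletion while
// VMs still reference the template, unless the caller explicitly forces it.
private void checkTemplateInUse(long templateId, boolean forced) {
    List<UserVmVO> vmsUsingTemplate = _vmDao.listByTemplateId(templateId); // assumed helper
    if (!vmsUsingTemplate.isEmpty() && !forced) {
        throw new InvalidParameterValueException("Unable to delete template " + templateId
                + ": " + vmsUsingTemplate.size() + " VM(s) still use it."
                + " Pass forced=true to delete anyway.");
    }
}
{noformat}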



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876750#comment-15876750
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@rafaelweingartner thanks for reviewing! I extracted the code into new methods 
and also added unit tests for them.


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that the {{deleteDomain}} task 
> failed for domains with multiple accounts and resources. Examining the logs, 
> it was found that if the Account Cleanup Task gets executed after the domain 
> (and all of its children) is marked as Inactive, and before the delete 
> domain task finishes, it produces a failure.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks the domain to delete (and its children) as 
> Inactive before deleting them, when {{AccountCleanupTask}} is executed, it 
> removes the marked domains. When there are still resources to clean up on 
> the domain's accounts, the domain is no longer found, throwing the 
> exception: 
> {{com.cloud.exception.InvalidParameterValueException: Please specify a valid 
> domain ID}}
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain&id=1910a3dc-6fa6-457b-ab3a-602b0cfb6686&cleanup=true&response=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,307 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=29
> 2017-01-26 06:07:44,319 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8
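One way to close the race described above is to make the cleanup task 
defensive. A hedged sketch follows; the DAO and logger names 
(findInactiveDomains, findActiveAccountsForDomain) are assumptions for 
illustration, not necessarily this PR's actual fix:

{noformat}
// Illustrative guard inside the cleanup loop: skip an Inactive domain while
// deleteDomain is still cleaning up accounts under it, instead of racing
// the delete job and triggering the rollback seen in the logs above.
for (DomainVO inactiveDomain : _domainDao.findInactiveDomains()) {
    List<AccountVO> remaining = _accountDao.findActiveAccountsForDomain(inactiveDomain.getId());
    if (!remaining.isEmpty()) {
        s_logger.debug("Skipping inactive domain id=" + inactiveDomain.getId() + " because "
                + remaining.size() + " account(s) under it are still being cleaned up");
        continue;
    }
    _domainDao.remove(inactiveDomain.getId());
}
{noformat}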

[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876745#comment-15876745
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@blueorangutan test


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted, which failed with 
> the below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876748#comment-15876748
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted, which failed with 
> the below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9733) Concurrent volume snapshots of a VM are not allowed and are not limited per host as per the global configuration parameter "concurrent.snapshots.threshold.perhost"

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876728#comment-15876728
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9733:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1897
  
@ramkatru Checked and addressed.


> Concurrent volume snapshots of a VM are not allowed and are not limited per 
> host as per the global configuration parameter 
> "concurrent.snapshots.threshold.perhost".
> 
>
> Key: CLOUDSTACK-9733
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9733
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Snapshot, Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> Pre-CloudStack 4.4.0, before the VM job framework changes (CLOUDSTACK-669), 
> concurrent volume (both root and data) snapshots were allowed per host based 
> on the value of the global config "concurrent.snapshots.threshold.perhost". 
> The volumes could belong to the same VM or be spread across multiple VMs on 
> a given host. The synchronisation was done based on the host id.
> As part of the VM job framework changes (CLOUDSTACK-669) in CloudStack 4.4.0, 
> a separate job queue was introduced for individual VMs with a concurrency 
> level of 1 (i.e. all operations on a given VM are serialized). Volume 
> snapshots were also treated as VM operations as part of these changes and 
> go through the VM job queue. These changes made the config 
> "concurrent.snapshots.threshold.perhost" obsolete (it was also no longer 
> being honoured, since there is no single point of enforcement).
> Only one volume snapshot of a VM is allowed at any given point in time, as 
> the sync object is the VM id. So concurrent volume snapshots of a VM are not 
> allowed and are not limited per host as per the global configuration 
> parameter "concurrent.snapshots.threshold.perhost".
> This functionality needs to be re-introduced to execute more than one 
> snapshot of a VM at a time (when the underlying hypervisor supports it), and 
> snapshots should be limited per host based on the value of 
> "concurrent.snapshots.threshold.perhost" at the cluster level (for more 
> flexibility).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876721#comment-15876721
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-521


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted, which failed with 
> the below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876683#comment-15876683
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted, which failed with 
> the below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876677#comment-15876677
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@blueorangutan package


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted, which failed with 
> the below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876682#comment-15876682
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that the {{deleteDomain}} task 
> failed for domains with multiple accounts and resources. Examining the logs, 
> it was found that if the Account Cleanup Task gets executed after the domain 
> (and all of its children) is marked as Inactive, and before the delete 
> domain task finishes, it produces a failure.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks the domain to delete (and its children) as 
> Inactive before deleting them, when {{AccountCleanupTask}} is executed, it 
> removes the marked domains. When there are still resources to clean up on 
> the domain's accounts, the domain is no longer found, throwing the 
> exception: 
> {{com.cloud.exception.InvalidParameterValueException: Please specify a valid 
> domain ID}}
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain&id=1910a3dc-6fa6-457b-ab3a-602b0cfb6686&cleanup=true&response=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,307 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=29
> 2017-01-26 06:07:44,319 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ct

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876675#comment-15876675
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@blueorangutan test


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that the {{deleteDomain}} task 
> failed for domains with multiple accounts and resources. Examining the logs, 
> it was found that if the Account Cleanup Task gets executed after the domain 
> (and all of its children) is marked as Inactive, and before the delete 
> domain task finishes, it produces a failure.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks the domain to delete (and its children) as 
> Inactive before deleting them, when {{AccountCleanupTask}} is executed, it 
> removes the marked domains. When there are still resources to clean up on 
> the domain's accounts, the domain is no longer found, throwing the 
> exception: 
> {{com.cloud.exception.InvalidParameterValueException: Please specify a valid 
> domain ID}}
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain&id=1910a3dc-6fa6-457b-ab3a-602b0cfb6686&cleanup=true&response=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,307 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=29
> 2017-01-26 06:07:44,319 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled projects to cleanup
> ...
> // Failure due to domain is alr

[jira] [Commented] (CLOUDSTACK-8608) Fix unpleasant admin experience with VMware fresh installs/upgrades - System VM's failed to start due to permissions issue

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876671#comment-15876671
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8608:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1875
  
Trillian test result (tid-867)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 48400 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1875-t867-vmware-55u3.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vm_life_cycle.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_vpn.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 872.15 | 
test_privategw_acl.py
test_02_redundant_VPC_default_routes | `Error` | 237.39 | 
test_vpc_redundant.py
test_02_list_snapshots_with_removed_data_store | `Error` | 75.67 | 
test_snapshots.py
test_02_list_snapshots_with_removed_data_store | `Error` | 80.75 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 375.63 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 186.40 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 596.74 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 368.38 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 750.75 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 665.98 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1527.05 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 691.66 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1373.64 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 20.62 | test_volumes.py
test_06_download_detached_volume | Success | 70.58 | test_volumes.py
test_05_detach_volume | Success | 100.22 | test_volumes.py
test_04_delete_attached_volume | Success | 15.17 | test_volumes.py
test_03_download_attached_volume | Success | 25.26 | test_volumes.py
test_02_attach_volume | Success | 58.86 | test_volumes.py
test_01_create_volume | Success | 519.45 | test_volumes.py
test_change_service_offering_for_vm_with_snapshots | Success | 534.17 | 
test_vm_snapshots.py
test_03_delete_vm_snapshots | Success | 275.17 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 227.10 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 191.26 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.62 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 216.81 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.68 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.18 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 101.03 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.06 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.14 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.10 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.11 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 241.44 | test_templates.py
test_08_list_system_templates | Success | 0.02 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.04 | test_templates.py
test_04_extract_template | Success | 10.16 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 120.87 | test_templates.py
test_10_destroy_cpvm | Success | 236.61 | test_ssvm.py
test_09_destroy_ssvm | Success | 268.53 | test_ssvm.py
test_08_reboot_cpvm | Success | 366.63 | test_ssvm.py
test_07_reboot_ssvm | Success | 308.41 | test_ssvm.py
test_06_stop_cpvm | Success | 176.51 | test_ssvm.py
test_05_stop_ssvm | Success | 173.30 | test_ssvm.py
test_04_cpvm_internals | Success | 1.02 | test_ssvm.py
test_03_ssvm_internals | Success | 3.32 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py

[jira] [Commented] (CLOUDSTACK-9796) Null Pointer Exception in VirtualMachineManagerImpl.java

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876657#comment-15876657
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9796:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102310842
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

I thank you


> Null Pointer Exception in VirtualMachineManagerImpl.java
> 
>
> Key: CLOUDSTACK-9796
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9796
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0, 4.9.0
> Environment: Cloudstack 4.8
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
> Attachments: npelog.txt
>
>
> When a VM hangs in the "Starting" state for longer than job.expire.minutes 
> and the job is deleted from the system, a null pointer exception occurs 
> because the work VO will be null inside advanceStop in 
> VirtualMachineManagerImpl.java. I have attached a snippet of a log file 
> showing this NPE occurring in the wild.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876668#comment-15876668
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@borisstoyanov Can you kick off a Jenkins job on this PR?


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted, which failed with 
> the below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876665#comment-15876665
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102311697
  
--- Diff: 
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java 
---
@@ -716,11 +734,6 @@ public DiskFmtType getDiskFormatType() {
 return _diskFmtType;
 }
 
--- End diff --

Removed unused method _getDiskSeq()_.


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted, which failed with 
> the below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876664#comment-15876664
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@remibergsma @borisstoyanov Updated the KVM code to generate valid device 
names above id 25.
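For reference, the naming scheme in play here follows the lowercase base-26 
"spreadsheet column" encoding libvirt uses for disk nodes (vda..vdz, then 
vdaa, vdab, ...). A minimal sketch of mapping a device id to such a name 
(illustrative, not the PR's exact code):

{noformat}
// Map a device id to a libvirt-style disk name: 0 -> vda, 25 -> vdz,
// 26 -> vdaa, 27 -> vdab, and so on.
static String deviceNameFromId(String prefix, int deviceId) {
    StringBuilder suffix = new StringBuilder();
    int n = deviceId;
    do {
        suffix.insert(0, (char) ('a' + (n % 26)));
        n = n / 26 - 1; // -1 because "z" is followed by "aa", not "ba"
    } while (n >= 0);
    return prefix + suffix;
}
{noformat}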


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted, which failed with 
> the below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9796) Null Pointer Exception in VirtualMachineManagerImpl.java

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876655#comment-15876655
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9796:


Github user nathanejohnson commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102310032
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

Updated, thanks for the input


> Null Pointer Exception in VirtualMachineManagerImpl.java
> 
>
> Key: CLOUDSTACK-9796
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9796
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0, 4.9.0
> Environment: Cloudstack 4.8
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
> Attachments: npelog.txt
>
>
> When a VM hangs in the "Starting" state for longer than job.expire.minutes 
> and the job is deleted from the system, a null pointer exception occurs 
> because the work VO will be null inside advanceStop in 
> VirtualMachineManagerImpl.java. I have attached a snippet of a log file 
> showing this NPE occurring in the wild.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876653#comment-15876653
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102309898
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
--- End diff --

@rafaelweingartner when getMaxDataVolumesSupported(vm) is configured as 6 for 
the VM's hypervisor, the VM can have at most 6 devices: 1 root (id 0), 1 
CD-ROM (id 3), and 4 others for extra disks/volumes.
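The allocation rule being described can be sketched as scanning for the lowest 
free slot while skipping the reserved ids; a hedged illustration (not the PR's 
actual code):

{noformat}
import java.util.Set;

// Illustrative allocator: id 0 is the root disk and id 3 the CD-ROM, so scan
// ids 1..maxDeviceId (skipping 3) and return the first one not already used
// by one of the VM's volumes.
static Long allocateDeviceId(Set<Long> usedIds, long maxDeviceId) {
    for (long id = 1; id <= maxDeviceId; id++) {
        if (id == 3L) {
            continue; // reserved for the CD-ROM
        }
        if (!usedIds.contains(id)) {
            return id;
        }
    }
    throw new IllegalStateException("All device ids up to " + maxDeviceId + " are in use");
}
{noformat}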


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted, which failed with 
> the below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9796) Null Pointer Exception in VirtualMachineManagerImpl.java

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876651#comment-15876651
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9796:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102309517
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

Yes, I think this is more readable.



> Null Pointer Exception in VirtualMachineManagerImpl.java
> 
>
> Key: CLOUDSTACK-9796
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9796
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0, 4.9.0
> Environment: Cloudstack 4.8
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
> Attachments: npelog.txt
>
>
> When a VM hangs in the "Starting" state for longer than job.expire.minutes 
> and the job is deleted from the system, a null pointer exception occurs 
> because the work VO will be null inside advanceStop in 
> VirtualMachineManagerImpl.java. I have attached a snippet of a log file 
> showing this NPE occurring in the wild.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9796) Null Pointer Exception in VirtualMachineManagerImpl.java

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876646#comment-15876646
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9796:


Github user nathanejohnson commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102309146
  
--- Diff: engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO vm, final State state) throw
 
     protected <T extends VMInstanceVO> boolean changeState(final T vm, final Event event, final Long hostId, final ItWorkVO work, final Step step) throws NoTransitionException {
         // FIXME: We should do this better.
-        final Step previousStep = work.getStep();
-        _workDao.updateStep(work, step);
+        Step previousStep = null;
+        if (work != null) {
+            previousStep = work.getStep();
--- End diff --

Do you think something like:

if (!result && work != null) {

would be better?  Even if work.getStep() did return a null, that should 
have the same effect as before.  Maybe it would be more readable too?
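
For reference, here is a minimal sketch of the guarded pattern this thread 
converges on. The member names (`_workDao`, `stateTransitTo`) come from the 
diff above; the exact body below is an assumption pieced together from the 
review comments, not the verbatim PR code:

    // Sketch: record and update the work step only when work exists, and
    // revert it only when the transition did not succeed.
    protected <T extends VMInstanceVO> boolean changeState(final T vm, final Event event,
            final Long hostId, final ItWorkVO work, final Step step) throws NoTransitionException {
        Step previousStep = null;
        if (work != null) {                 // work can be null once the async job has expired
            previousStep = work.getStep();
            _workDao.updateStep(work, step);
        }
        boolean result = false;
        try {
            result = stateTransitTo(vm, event, hostId);
            return result;
        } finally {
            if (!result && work != null) {  // the guard proposed in this comment
                _workDao.updateStep(work, previousStep);
            }
        }
    }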


> Null Pointer Exception in VirtualMachineManagerImpl.java
> 
>
> Key: CLOUDSTACK-9796
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9796
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0, 4.9.0
> Environment: Cloudstack 4.8
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
> Attachments: npelog.txt
>
>
> When a VM hangs in the "Starting" state for longer than job.expire.minutes 
> and the job is deleted from the system, a null pointer exception will occur 
> because the work VO will be null inside advanceStop in 
> VirtualMachineManagerImpl.java. I have attached a snippet of a log file of 
> this NPE occurring in the wild.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9719) [VMware] VR loses DHCP settings and VMs cannot obtain IP after HA recovery

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876644#comment-15876644
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9719:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1879
  
Trillian test result (tid-869)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 46839 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1879-t869-vmware-55u3.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_vpn.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 880.24 | test_privategw_acl.py
test_01_vpc_privategw_acl | `Failure` | 116.61 | test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 116.23 | test_snapshots.py
test_02_list_snapshots_with_removed_data_store | `Error` | 121.33 | test_snapshots.py
test_01_vpc_site2site_vpn | Success | 381.80 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 161.82 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 613.06 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 360.18 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 732.40 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 666.99 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1538.86 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 740.52 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 647.18 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1381.47 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 25.77 | test_volumes.py
test_06_download_detached_volume | Success | 60.63 | test_volumes.py
test_05_detach_volume | Success | 100.29 | test_volumes.py
test_04_delete_attached_volume | Success | 10.20 | test_volumes.py
test_03_download_attached_volume | Success | 15.50 | test_volumes.py
test_02_attach_volume | Success | 63.80 | test_volumes.py
test_01_create_volume | Success | 509.73 | test_volumes.py
test_change_service_offering_for_vm_with_snapshots | Success | 564.79 | test_vm_snapshots.py
test_03_delete_vm_snapshots | Success | 275.20 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 232.23 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 332.79 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.70 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 263.25 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.04 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 27.11 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.27 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 81.16 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 5.13 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.15 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.25 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.18 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 226.89 | test_templates.py
test_08_list_system_templates | Success | 0.04 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.09 | test_templates.py
test_04_extract_template | Success | 10.46 | test_templates.py
test_03_delete_template | Success | 5.13 | test_templates.py
test_02_edit_template | Success | 90.19 | test_templates.py
test_01_create_template | Success | 126.04 | test_templates.py
test_10_destroy_cpvm | Success | 236.84 | test_ssvm.py
test_09_destroy_ssvm | Success | 269.03 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.81 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.59 | test_ssvm.py
test_06_stop_cpvm | Success | 206.88 | test_ssvm.py
test_05_stop_ssvm | Success | 183.88 | test_ssvm.py
test_04_cpvm_internals | Success | 1.20 | test_ssvm.py
test_03_ssvm_internals | Success | 3.47 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
t

[jira] [Commented] (CLOUDSTACK-9796) Null Pointer Exception in VirtualMachineManagerImpl.java

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876637#comment-15876637
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9796:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102308166
  
--- Diff: engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO vm, final State state) throw
 
     protected <T extends VMInstanceVO> boolean changeState(final T vm, final Event event, final Long hostId, final ItWorkVO work, final Step step) throws NoTransitionException {
         // FIXME: We should do this better.
-        final Step previousStep = work.getStep();
-        _workDao.updateStep(work, step);
+        Step previousStep = null;
+        if (work != null) {
+            previousStep = work.getStep();
--- End diff --

I am not asking to make a distinction between exception or no exception. What 
I tried to say is that, if the intent of the `finally` block were only to 
revert the step to a previous state when an exception occurs, we could do 
that with a `catch` block. I think the `finally` here is meant to revert the 
work step even when no exception happens, for instance when `stateTransitTo` 
returns `false`.

I think you already answered my doubt when you said that `previousStep` is 
most likely never `null`. I thought we could have cases where 
`previousStep == null`; then, if `stateTransitTo` returned false, the newly 
added check at line 757 would keep us from updating the step back to `null` 
in those cases.


> Null Pointer Exception in VirtualMachineManagerImpl.java
> 
>
> Key: CLOUDSTACK-9796
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9796
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0, 4.9.0
> Environment: Cloudstack 4.8
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
> Attachments: npelog.txt
>
>
> When a VM hangs in the "Starting" state for longer than job.expire.minutes 
> and the job is deleted from the system, a null pointer exception will occur 
> because the work VO will be null inside advanceStop in 
> VirtualMachineManagerImpl.java. I have attached a snippet of a log file of 
> this NPE occurring in the wild.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9796) Null Pointer Exception in VirtualMachineManagerImpl.java

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876621#comment-15876621
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9796:


Github user nathanejohnson commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102305779
  
--- Diff: engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO vm, final State state) throw
 
     protected <T extends VMInstanceVO> boolean changeState(final T vm, final Event event, final Long hostId, final ItWorkVO work, final Step step) throws NoTransitionException {
         // FIXME: We should do this better.
-        final Step previousStep = work.getStep();
-        _workDao.updateStep(work, step);
+        Step previousStep = null;
+        if (work != null) {
+            previousStep = work.getStep();
--- End diff --

I'm not sure I'm following.  From reading the code a bit, it looks like the 
only scenario where stateTransitTo would return false is when the state was 
not properly persisted to the db.  No exception is thrown in that case.  In 
the current code, false *or* an exception will trigger the revert.  Also, 
currently previousStep would *probably* never be null - though if work is 
null, this will cause another NPE on line 747 of the current code.  In the 
PR, previousStep will *probably* only be null in the case where work is null.

What is the value of distinguishing a false result from an exception in this 
case?  My adding a check for previousStep != null is simply to make sure that 
work is not null above.  I could also check work for null here too, but I 
don't think that's what you're getting at.


> Null Pointer Exception in VirtualMachineManagerImpl.java
> 
>
> Key: CLOUDSTACK-9796
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9796
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0, 4.9.0
> Environment: Cloudstack 4.8
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
> Attachments: npelog.txt
>
>
> When a VM hangs in the "Starting" state for longer than job.expire.minutes 
> and the job is deleted from the system, a null pointer exception will occur 
> because the work VO will be null inside advanceStop in 
> VirtualMachineManagerImpl.java. I have attached a snippet of a log file of 
> this NPE occurring in the wild.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9607) Preventing template deletion when template is in use.

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876613#comment-15876613
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9607:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1773
  
Trillian test result (tid-875)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 27492 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1773-t875-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 325.34 | test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.04 | test_snapshots.py
test_01_vpc_site2site_vpn | Success | 149.78 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.19 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 226.19 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 259.80 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 518.66 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 510.72 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1302.14 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 533.12 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 754.46 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1284.66 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.34 | test_volumes.py
test_08_resize_volume | Success | 156.35 | test_volumes.py
test_07_resize_fail | Success | 156.41 | test_volumes.py
test_06_download_detached_volume | Success | 156.27 | test_volumes.py
test_05_detach_volume | Success | 155.83 | test_volumes.py
test_04_delete_attached_volume | Success | 146.14 | test_volumes.py
test_03_download_attached_volume | Success | 156.23 | test_volumes.py
test_02_attach_volume | Success | 124.17 | test_volumes.py
test_01_create_volume | Success | 711.19 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.21 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.72 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 163.70 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 272.73 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.62 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.31 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 35.84 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.14 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.82 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.85 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.16 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.29 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 35.39 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.14 | test_templates.py
test_03_delete_template | Success | 5.10 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 35.38 | test_templates.py
test_10_destroy_cpvm | Success | 136.43 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.10 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.51 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.59 | test_ssvm.py
test_06_stop_cpvm | Success | 131.69 | test_ssvm.py
test_05_stop_ssvm | Success | 133.64 | test_ssvm.py
test_04_cpvm_internals | Success | 1.39 | test_ssvm.py
test_03_ssvm_internals | Success | 3.76 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.12 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.11 | test_snapshots.py
test_04_change_offering_small | Success | 239.57 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.05 | test_service_offerings.py
test_01_create_service_offering | Success | 0.10 | test_service_offering

[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876609#comment-15876609
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102303566
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) {
         return maxDataVolumesSupported.intValue();
     }
 
-    private Long getDeviceId(long vmId, Long deviceId) {
+    private Long getDeviceId(UserVmVO vm, Long deviceId) {
         // allocate deviceId
-        List<VolumeVO> vols = _volsDao.findByInstance(vmId);
+        int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+        List<VolumeVO> vols = _volsDao.findByInstance(vm.getId());
         if (deviceId != null) {
-            if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-                throw new RuntimeException("deviceId should be 1,2,4-15");
+            if (deviceId.longValue() > maxDataVolumesSupported || deviceId.longValue() == 3) {
+                throw new RuntimeException("deviceId should be 1,2,4-" + maxDataVolumesSupported);
             }
             for (VolumeVO vol : vols) {
                 if (vol.getDeviceId().equals(deviceId)) {
-                    throw new RuntimeException("deviceId " + deviceId + " is used by vm" + vmId);
+                    throw new RuntimeException("deviceId " + deviceId + " is used by vm" + vm.getId());
                 }
             }
         } else {
             // allocate deviceId here
             List<String> devIds = new ArrayList<String>();
-            for (int i = 1; i < 15; i++) {
+            for (int i = 1; i < maxDataVolumesSupported; i++) {
                 devIds.add(String.valueOf(i));
             }
             devIds.remove("3");
--- End diff --

Thanks. Added this.


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted. This failed with the 
> below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876608#comment-15876608
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102303488
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) {
         return maxDataVolumesSupported.intValue();
     }
 
-    private Long getDeviceId(long vmId, Long deviceId) {
+    private Long getDeviceId(UserVmVO vm, Long deviceId) {
         // allocate deviceId
-        List<VolumeVO> vols = _volsDao.findByInstance(vmId);
+        int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+        List<VolumeVO> vols = _volsDao.findByInstance(vm.getId());
         if (deviceId != null) {
-            if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-                throw new RuntimeException("deviceId should be 1,2,4-15");
+            if (deviceId.longValue() > maxDataVolumesSupported || deviceId.longValue() == 3) {
+                throw new RuntimeException("deviceId should be 1,2,4-" + maxDataVolumesSupported);
             }
             for (VolumeVO vol : vols) {
                 if (vol.getDeviceId().equals(deviceId)) {
-                    throw new RuntimeException("deviceId " + deviceId + " is used by vm" + vmId);
+                    throw new RuntimeException("deviceId " + deviceId + " is used by vm" + vm.getId());
                 }
             }
         } else {
             // allocate deviceId here
             List<String> devIds = new ArrayList<String>();
--- End diff --

@HrWiggles. Noted, not considering it for now.


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted. This failed with the 
> below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876606#comment-15876606
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102303229
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) {
         return maxDataVolumesSupported.intValue();
     }
 
-    private Long getDeviceId(long vmId, Long deviceId) {
+    private Long getDeviceId(UserVmVO vm, Long deviceId) {
         // allocate deviceId
-        List<VolumeVO> vols = _volsDao.findByInstance(vmId);
+        int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+        List<VolumeVO> vols = _volsDao.findByInstance(vm.getId());
         if (deviceId != null) {
-            if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-                throw new RuntimeException("deviceId should be 1,2,4-15");
+            if (deviceId.longValue() > maxDataVolumesSupported || deviceId.longValue() == 3) {
+                throw new RuntimeException("deviceId should be 1,2,4-" + maxDataVolumesSupported);
             }
             for (VolumeVO vol : vols) {
                 if (vol.getDeviceId().equals(deviceId)) {
-                    throw new RuntimeException("deviceId " + deviceId + " is used by vm" + vmId);
+                    throw new RuntimeException("deviceId " + deviceId + " is used by vm" + vm.getId());
                 }
             }
         } else {
             // allocate deviceId here
             List<String> devIds = new ArrayList<String>();
-            for (int i = 1; i < 15; i++) {
+            for (int i = 1; i < maxDataVolumesSupported; i++) {
--- End diff --

@HrWiggles Thanks for pointing this. Addressed.


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted. This failed with the 
> below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9607) Preventing template deletion when template is in use.

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876604#comment-15876604
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9607:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1773
  
Trillian test result (tid-874)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 34092 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1773-t874-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 355.70 | test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.04 | test_snapshots.py
test_01_vpc_site2site_vpn | Success | 160.18 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 71.18 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 246.46 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 289.60 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 533.70 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 512.87 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1406.40 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 544.89 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 749.53 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1290.57 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 156.51 | test_volumes.py
test_08_resize_volume | Success | 156.44 | test_volumes.py
test_07_resize_fail | Success | 161.58 | test_volumes.py
test_06_download_detached_volume | Success | 156.31 | test_volumes.py
test_05_detach_volume | Success | 155.74 | test_volumes.py
test_04_delete_attached_volume | Success | 151.24 | test_volumes.py
test_03_download_attached_volume | Success | 156.34 | test_volumes.py
test_02_attach_volume | Success | 95.61 | test_volumes.py
test_01_create_volume | Success | 716.35 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.22 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 100.67 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 158.69 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 257.71 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.73 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.25 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.98 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.82 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.82 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.39 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.34 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 60.60 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.17 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.16 | test_templates.py
test_01_create_template | Success | 50.53 | test_templates.py
test_10_destroy_cpvm | Success | 166.62 | test_ssvm.py
test_09_destroy_ssvm | Success | 168.72 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.61 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.52 | test_ssvm.py
test_06_stop_cpvm | Success | 131.71 | test_ssvm.py
test_05_stop_ssvm | Success | 133.80 | test_ssvm.py
test_04_cpvm_internals | Success | 1.19 | test_ssvm.py
test_03_ssvm_internals | Success | 3.36 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.13 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.11 | test_snapshots.py
test_04_change_offering_small | Success | 239.70 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.05 | test_service_offerings.py

[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876599#comment-15876599
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102302052
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) {
         return maxDataVolumesSupported.intValue();
     }
 
-    private Long getDeviceId(long vmId, Long deviceId) {
+    private Long getDeviceId(UserVmVO vm, Long deviceId) {
         // allocate deviceId
-        List<VolumeVO> vols = _volsDao.findByInstance(vmId);
+        int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+        List<VolumeVO> vols = _volsDao.findByInstance(vm.getId());
         if (deviceId != null) {
-            if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-                throw new RuntimeException("deviceId should be 1,2,4-15");
+            if (deviceId.longValue() > maxDataVolumesSupported || deviceId.longValue() == 3) {
--- End diff --

@HrWiggles Addressed.


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted. This failed with the 
> below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876597#comment-15876597
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102301869
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) {
         return maxDataVolumesSupported.intValue();
     }
 
-    private Long getDeviceId(long vmId, Long deviceId) {
+    private Long getDeviceId(UserVmVO vm, Long deviceId) {
--- End diff --

@HrWiggles Will check if I can write a test for the same.


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted. This failed with the 
> below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876595#comment-15876595
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102301645
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
--- End diff --

@HrWiggles Thanks for the review. The max data volumes here is the actual 
hypervisor capability (which is stored in the db). Device id 3 has been 
reserved for something for a long time and I don't want that to be affected.
When _getMaxDataVolumesSupported()_ returns 6, at most 5 volumes can be 
attached to the VM, with one device id reserved (likely for virtual 
tools/CDROM). _maxDataVolumesSupported_ specifies the data volume limit 
supported by the hypervisor; it is not related to a _maxDeviceId_.
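
As a quick illustration of that allocation rule, a tiny standalone sketch 
(the class name and example limit are made up for the illustration; only the 
skip-id-3 behavior mirrors the diff above):

    import java.util.ArrayList;
    import java.util.List;

    public class DeviceIdPool {
        // Build the candidate data-volume device ids 1..(limit-1), minus the reserved id 3.
        static List<String> candidateDeviceIds(int maxDataVolumesSupported) {
            List<String> devIds = new ArrayList<String>();
            for (int i = 1; i < maxDataVolumesSupported; i++) {
                devIds.add(String.valueOf(i));
            }
            devIds.remove("3"); // device id 3 stays reserved (e.g. virtual tools/CDROM)
            return devIds;
        }

        public static void main(String[] args) {
            // With a capability of 6, the auto-allocation pool is [1, 2, 4, 5].
            System.out.println(candidateDeviceIds(6));
        }
    }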


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. This limit was changed to a higher value directly in the DB for 
> VMware, and attaching more than 14 disks was attempted. This failed with the 
> below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876593#comment-15876593
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-520


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that the {{deleteDomain}} task 
> failed for domains with multiple accounts and resources. Examining the logs, 
> it was found that if the Account Cleanup Task is executed after the domain 
> (and all of its children) is marked as Inactive, but before the delete 
> domain task finishes, it produces a failure.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks the domain to delete (and its children) as 
> Inactive before actually deleting them, {{AccountCleanupTask}} removes those 
> marked domains when it runs. When there are still resources to clean up on 
> the domain's accounts, the domain is no longer found, throwing the 
> exception: {{com.cloud.exception.InvalidParameterValueException: Please 
> specify a valid domain ID}}
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain&id=1910a3dc-6fa6-457b-ab3a-602b0cfb6686&cleanup=true&response=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,307 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=29
> 2017-01-26 06:07:44,319 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled projects to cleanup
> ...
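
To make the race concrete, here is a compressed sketch of the kind of guard 
that would avoid it (the finder/DAO names are hypothetical stand-ins for 
illustration, and this is not necessarily the fix taken in PR #1935):

    // Sketch: only garbage-collect an Inactive domain once it has no accounts left,
    // so a domain still being cleaned up by deleteDomain is not removed underneath it.
    for (DomainVO inactive : _domainDao.findInactiveDomains()) {   // hypothetical finder
        if (_accountDao.findActiveAccountsForDomain(inactive.getId()).isEmpty()) {
            _domainDao.remove(inactive.getId());
        } else {
            s_logger.debug("Skipping inactive domain id=" + inactive.getId()
                    + " until its accounts finish cleanup");
        }
    }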

[jira] [Commented] (CLOUDSTACK-9796) Null Pointer Exception in VirtualMachineManagerImpl.java

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876582#comment-15876582
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9796:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102300083
  
--- Diff: engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO vm, final State state) throw
 
     protected <T extends VMInstanceVO> boolean changeState(final T vm, final Event event, final Long hostId, final ItWorkVO work, final Step step) throws NoTransitionException {
         // FIXME: We should do this better.
-        final Step previousStep = work.getStep();
-        _workDao.updateStep(work, step);
+        Step previousStep = null;
+        if (work != null) {
+            previousStep = work.getStep();
--- End diff --

I know you did not write this code, but it seemed a good opportunity to 
discuss and evaluate it. 

I understood you. I had already noticed the try/finally block, and this is 
the point I wanted to discuss. As in the example you described, if an 
exception happens, the finally block is executed and the step is restored to 
the previous one (assuming `stateTransitTo(vm, event, hostId)` changed it); 
that makes sense in the case of an exception. However, if NO exception 
happens, the step is also reverted to the previous one. The `finally` block 
is always executed, whether `stateTransitTo(vm, event, hostId)` succeeds or 
not.

If we only wanted to deal with exceptions, it would make much more sense to 
execute the revert in a `catch` block. I think we want/need to set the step 
back to `null` when `stateTransitTo` returns false and `previousStep` is 
null. You are changing exactly that with the extra condition at line 757. 

Did you understand what I mean?
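
To make the two semantics concrete, a compressed illustration (context as in 
the diff above; illustrative only):

    // `finally` runs on BOTH an exception and a plain `false` result, so the
    // revert below also covers the stateTransitTo(...) == false case. A
    // catch-only variant would revert ONLY when an exception is thrown.
    boolean result = false;
    try {
        result = stateTransitTo(vm, event, hostId);
        return result;
    } finally {
        if (!result && work != null) {
            _workDao.updateStep(work, previousStep);
        }
    }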


> Null Pointer Exception in VirtualMachineManagerImpl.java
> 
>
> Key: CLOUDSTACK-9796
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9796
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0, 4.9.0
> Environment: Cloudstack 4.8
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
> Attachments: npelog.txt
>
>
> When a VM hangs in the "Starting" state for longer than job.expire.minutes 
> and the job is deleted from the system, a null pointer exception will occur 
> because the work VO will be null inside advanceStop in 
> VirtualMachineManagerImpl.java. I have attached a snippet of a log file of 
> this NPE occurring in the wild.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9796) Null Pointer Exception in VirtualMachineManagerImpl.java

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876571#comment-15876571
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9796:


Github user nathanejohnson commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102298974
  
--- Diff: engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO vm, final State state) throw
 
     protected <T extends VMInstanceVO> boolean changeState(final T vm, final Event event, final Long hostId, final ItWorkVO work, final Step step) throws NoTransitionException {
         // FIXME: We should do this better.
-        final Step previousStep = work.getStep();
-        _workDao.updateStep(work, step);
+        Step previousStep = null;
+        if (work != null) {
+            previousStep = work.getStep();
--- End diff --

Oh, and to your point earlier: I don't *think* getStep() should ever return 
null, because the step column in the op_it_work table is marked NOT NULL.


> Null Pointer Exception in VirtualMachineManagerImpl.java
> 
>
> Key: CLOUDSTACK-9796
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9796
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0, 4.9.0
> Environment: Cloudstack 4.8
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
> Attachments: npelog.txt
>
>
> When a VM hangs in the "Starting" state for longer than job.expire.minutes 
> and the job is deleted from the system, a null pointer exception will occur 
> because the work VO will be null inside advanceStop in 
> VirtualMachineManagerImpl.java. I have attached a snippet of a log file of 
> this NPE occurring in the wild.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876542#comment-15876542
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@blueorangutan package


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that the {{deleteDomain}} task 
> failed for domains with multiple accounts and resources. Examining the logs, 
> it was found that if the Account Cleanup Task is executed after the domain 
> (and all of its children) is marked as Inactive, but before the delete 
> domain task finishes, it produces a failure.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks the domain to delete (and its children) as 
> Inactive before actually deleting them, {{AccountCleanupTask}} removes those 
> marked domains when it runs. When there are still resources to clean up on 
> the domain's accounts, the domain is no longer found, throwing the 
> exception: {{com.cloud.exception.InvalidParameterValueException: Please 
> specify a valid domain ID}}
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain&id=1910a3dc-6fa6-457b-ab3a-602b0cfb6686&cleanup=true&response=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,307 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=29
> 2017-01-26 06:07:44,319 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled projects to cleanup
> ...
> // Failure due to domain is 

[jira] [Commented] (CLOUDSTACK-9698) Make the wait timeout for NIC adapter hotplug as configurable

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876558#comment-15876558
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9698:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1861
  
Yes @sateesh-chodapuneedi, this failure has been a pain for a while... it'll 
be good to invest some time in fixing it.


> Make the wait timeout for NIC adapter hotplug as configurable
> -
>
> Key: CLOUDSTACK-9698
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9698
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.9.0.1
> Environment: ACS 4.9 branch commit 
> a0e36b73aebe43bfe6bec3ef8f53e8cb99ecbc32
> vSphere 5.5
>Reporter: Sateesh Chodapuneedi
>Assignee: Sateesh Chodapuneedi
> Fix For: 4.9.1.0
>
>
> Currently ACS waits 15 seconds (*hard coded*) for a hot-plugged NIC in the VR 
> to be detected by the guest OS. The time taken to detect a hot-plugged NIC 
> depends on the type of NIC adapter (E1000, VMXNET3, E1000e, etc.) and on the 
> guest OS itself. In uncommon scenarios NIC detection may take longer than 15 
> seconds; in such cases the NIC hotplug is treated as a failure, which results 
> in a VPC tier configuration failure. Making the wait timeout for NIC adapter 
> hotplug configurable will be helpful for admins in such scenarios. 
> Also, if VMware introduces new NIC adapter types in the future that take 
> longer to be detected by the guest OS, it is good to have the flexibility of 
> configuring the wait timeout.
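
A hedged sketch of the general shape such a change could take (illustrative 
names only: the poll helper and the setting it would read are invented for 
this example, not CloudStack's actual configuration plumbing):

{code:java}
import java.time.Duration;
import java.time.Instant;
import java.util.function.BooleanSupplier;

public class NicHotplugWaitSketch {
    // Illustrative replacement for a hard-coded 15s wait: the timeout comes
    // from configuration, and we poll until the guest reports the hot-plugged
    // NIC or the deadline passes.
    static boolean waitForNic(BooleanSupplier nicVisibleInGuest,
                              Duration timeout, Duration pollInterval)
            throws InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            if (nicVisibleInGuest.getAsBoolean()) {
                return true; // guest OS detected the adapter in time
            }
            Thread.sleep(pollInterval.toMillis());
        }
        return false; // caller treats this as a hotplug failure
    }

    public static void main(String[] args) throws InterruptedException {
        // The timeout would be read from a setting (for example a
        // hypothetical "vmware.nic.hotplug.wait.timeout") instead of 15.
        boolean ok = waitForNic(() -> Math.random() > 0.7,
                Duration.ofSeconds(15), Duration.ofSeconds(1));
        System.out.println("NIC detected: " + ok);
    }
}
{code}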



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876546#comment-15876546
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that the {{deleteDomain}} task 
> failed for domains with multiple accounts and resources. Examining the logs, 
> it was found that if the Account Cleanup task is executed after the domain 
> (and all of its children) is marked as Inactive, but before the delete domain 
> task finishes, it produces a failure.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks the domain to delete (and its children) as Inactive 
> before deleting them, {{AccountCleanupTask}} removes the marked domains when 
> it executes. When there are still resources to clean up on the domain's 
> accounts, the domain is no longer found, throwing the exception: 
> {{com.cloud.exception.InvalidParameterValueException: Please specify a valid 
> domain ID}}
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain&id=1910a3dc-6fa6-457b-ab3a-602b0cfb6686&cleanup=true&response=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,307 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=29
> 2017-01-26 06:07:44,319 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-

[jira] [Commented] (CLOUDSTACK-9796) Null Pointer Exception in VirtualMachineManagerImpl.java

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876548#comment-15876548
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9796:


Github user nathanejohnson commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102295771
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected <T extends VMInstanceVO> boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

Most of this code I didn't write, but I can make some guesses:

The _workDao.updateStep(work, previousStep) line is in the finally block, 
which will execute even if an exception is thrown in stateTransitTo (a 
NoTransitionException, for instance).  So if stateTransitTo a) returns false, 
or b) throws an exception, then result will be false, and line 758 will run.  
So if something happens and the state isn't transitioned, the intent was for 
the work to be reverted to its previous step value.  Sort of a rollback, 
maybe?

In the case of the VM hung in Starting, my desired side effect is that I want 
stateTransitTo to be called and set the state to Stopped, i.e., 
Event.AgentReportStopped -> State.Stopped.  The work has already expired at 
this point, so it is null.  I was trying to preserve the same behavior as 
before, when work was not null.

Sorry if this wasn't very clear.
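
To make the flow described above concrete, here is a minimal, self-contained 
sketch. The Work/Step/Event types and the stateTransitTo stub are hypothetical 
stand-ins rather than CloudStack's actual classes; the sketch only illustrates 
the null guard plus the finally-block "rollback" being discussed:

{code:java}
enum Step { PREPARE, STARTED, DONE }
enum Event { AGENT_REPORT_STOPPED }

class Work {
    private Step step = Step.PREPARE;
    Step getStep() { return step; }
    void setStep(Step s) { step = s; }
}

public class ChangeStateSketch {
    // Stand-in state machine: returns false to simulate a transition
    // that does not happen (it could just as well throw).
    static boolean stateTransitTo(String vm, Event e, Long hostId) {
        return false;
    }

    static boolean changeState(String vm, Event event, Long hostId,
                               Work work, Step step) {
        Step previousStep = null;
        if (work != null) {
            previousStep = work.getStep(); // remembered for the rollback
            work.setStep(step);            // optimistically advance the step
        }
        boolean result = false;
        try {
            result = stateTransitTo(vm, event, hostId); // may also throw
            return result;
        } finally {
            // If the transition failed (returned false or threw), restore
            // the previous step -- the "sort of a rollback" described above.
            if (!result && work != null && previousStep != null) {
                work.setStep(previousStep);
            }
        }
    }

    public static void main(String[] args) {
        Work w = new Work();
        changeState("vm-1", Event.AGENT_REPORT_STOPPED, 1L, w, Step.STARTED);
        System.out.println("step after failed transition: " + w.getStep());
        // Expired job: work is null and nothing dereferences it.
        changeState("vm-1", Event.AGENT_REPORT_STOPPED, 1L, null, Step.STARTED);
        System.out.println("null work handled without an NPE");
    }
}
{code}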


> Null Pointer Exception in VirtualMachineManagerImpl.java
> 
>
> Key: CLOUDSTACK-9796
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9796
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0, 4.9.0
> Environment: Cloudstack 4.8
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
> Attachments: npelog.txt
>
>
> When a situation occurs where a VM hangs in the "Starting" state for longer 
> than job.expire.minutes, and the job is deleted from the system, a null 
> pointer exception will occur because the work VO will be null inside 
> advanceStop in VirtualMachineManagerImpl.java.  I have attached a snippet of 
> a log file of this NPE occurring in the wild.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876526#comment-15876526
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102294407
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List<VolumeVO> vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
--- End diff --

I think this is a good question. I would add that the ID `0` is reserved 
for the root device.

So, I would add the question of whether `maxDataVolumesSupported` already 
accounts for the slot reserved for the root disk. 

For instance, suppose `getMaxDataVolumesSupported(vm)` is configured with `6` 
for the hypervisor of the VM. Does that mean that the VM can have up to 7 
devices: 1 root (id `0`), 1 CD-ROM (id `3`), and another 5 for extra 
disks/volumes?



> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. After changing this limit to a higher value directly in the DB 
> and trying to attach more than 14 disks, the operation failed with the below 
> exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9796) Null Pointer Exception in VirtualMachineManagerImpl.java

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876514#comment-15876514
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9796:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102292787
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected <T extends VMInstanceVO> boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

Ok, now I think I am starting to get it.
But I am still not sure about some things here; would you mind continuing 
the discussion?

If the work is not null, you get the previous step (let’s assume it is not 
null) and call the method `_workDao.updateStep(work, step)`. After this, you 
call `stateTransitTo(vm, event, hostId)`. Why do we need to call 
`_workDao.updateStep(work, previousStep)` again? The `previousStep` continues 
to be the same.


> Null Pointer Exception in VirtualMachineManagerImpl.java
> 
>
> Key: CLOUDSTACK-9796
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9796
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0, 4.9.0
> Environment: Cloudstack 4.8
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
> Attachments: npelog.txt
>
>
> When a situation occurs where a VM hangs in the "Starting" state for longer 
> than job.expire.minutes, and the job is deleted from the system, a null 
> pointer exception will occur because the work VO will be null inside 
> advanceStop in VirtualMachineManagerImpl.java.  I have attached a snippet of 
> a log file of this NPE occurring in the wild.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876504#comment-15876504
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102291066
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
--- End diff --

big 👍 for this request :)


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. After changing this limit to a higher value directly in the DB 
> and trying to attach more than 14 disks, the operation failed with the below 
> exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9796) Null Pointer Exception in VirtualMachineManagerImpl.java

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876496#comment-15876496
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9796:


Github user nathanejohnson commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102290456
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected <T extends VMInstanceVO> boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

@rafaelweingartner if work is null, previousStep will stay null.  Maybe not 
the clearest way to handle this, but this prevents a null work from being 
passed down below.


> Null Pointer Exception in VirtualMachineManagerImpl.java
> 
>
> Key: CLOUDSTACK-9796
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9796
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0, 4.9.0
> Environment: Cloudstack 4.8
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
> Attachments: npelog.txt
>
>
> When a situation occurs where a VM hangs in the "Starting" state for longer 
> than job.expire.minutes, and the job is deleted from the system, a null 
> pointer exception will occur because the work VO will be null inside 
> advanceStop in VirtualMachineManagerImpl.java.  I have attached a snippet of 
> a log file of this NPE occurring in the wild.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9796) Null Pointer Exception in VirtualMachineManagerImpl.java

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876494#comment-15876494
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9796:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102289931
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected <T extends VMInstanceVO> boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

Can `work.getStep()` return null? 
I see that you added a check at line 757, `previousStep != null`. Why would we 
need that check there, and not need it here (line 750)?


> Null Pointer Exception in VirtualMachineManagerImpl.java
> 
>
> Key: CLOUDSTACK-9796
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9796
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0, 4.9.0
> Environment: Cloudstack 4.8
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
> Attachments: npelog.txt
>
>
> When a situation occurs where a VM hangs in the "Starting" state for longer 
> than job.expire.minutes, and the job is deleted from the system, a null 
> pointer exception will occur because the work VO will be null inside 
> advanceStop in VirtualMachineManagerImpl.java.  I have attached a snippet of 
> a log file of this NPE occurring in the wild.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9748) VPN Users search functionality broken

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876488#comment-15876488
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9748:


Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1957
  
@Ashadeepa why do we have 2 PRs for the same problem?
It seems that one of them can be closed.


> VPN Users search functionality broken
> -
>
> Key: CLOUDSTACK-9748
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9748
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Ashadeepa Debnath
>
> VPN Users search functionality broken
> If you try to search VPN users by their user name, you will not be able to 
> search.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9717) [VMware] RVRs have mismatching MAC addresses for extra public NICs

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876415#comment-15876415
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9717:


Github user remibergsma commented on the issue:

https://github.com/apache/cloudstack/pull/1878
  
@sureshanaparti why do the MAC addresses need to be the same on both 
routers? We're also executing arpings to update our neighbours. Networking-wise 
there is no need for them to be the same. I've seen this in other parts of the 
code as well and I really wonder why we do it.


> [VMware] RVRs have mismatching MAC addresses for extra public NICs
> --
>
> Key: CLOUDSTACK-9717
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9717
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> [CLOUDSTACK-985|https://issues.apache.org/jira/browse/CLOUDSTACK-985] doesn't 
> seem to be completely fixed.
> ISSUE
> ==
> If there are two public networks on two VLANs, and a pair of redundant VRs 
> acquires IPs from both, the associated NICs on the redundant VRs will have 
> mismatching MAC addresses.  
> The example below shows that the eth2 NICs for the first public network 
> (210.140.168.0/21) have matching MAC addresses (06:c4:b6:00:03:df) as 
> expected, but the eth3 NICs for the second one (210.140.160.0/21) have 
> mismatching MACs (02:00:50:e1:6c:cd versus 02:00:5a:e6:6c:d5).
> *r-43584-VM (Master)*
> 6: eth2:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 02:00:50:e1:6c:cd brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> *r-43585-VM (Backup)*
> 6: eth2:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 02:00:5a:e6:6c:d5 brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> CloudStack should ensure that the NICs for all public networks have matching 
> MACs.
> REPRO STEPS
> ==
> 1) Set up redundant VR.
> 2) Set up multiple public networks on different VLANs.
> 3) Acquire IPs in the RVR network until the VRs get IPs in the different 
> public networks.
> 4) Confirm the mismatching MAC addresses.
> EXPECTED BEHAVIOR
> ==
> Redundant VRs have matching MACs for all public networks.
> ACTUAL BEHAVIOR
> ==
> Redundant VRs have matching MACs only for the first public network.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9717) [VMware] RVRs have mismatching MAC addresses for extra public NICs

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876408#comment-15876408
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9717:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1878
  
Trillian test result (tid-873)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 33766 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1878-t873-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vm_life_cycle.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 362.57 
| test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 334.37 | 
test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.03 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 159.36 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 65.83 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 239.78 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 274.04 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 531.49 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 509.60 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1410.18 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 536.46 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 747.15 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.49 | test_volumes.py
test_08_resize_volume | Success | 156.71 | test_volumes.py
test_07_resize_fail | Success | 156.11 | test_volumes.py
test_06_download_detached_volume | Success | 155.99 | test_volumes.py
test_05_detach_volume | Success | 150.63 | test_volumes.py
test_04_delete_attached_volume | Success | 150.93 | test_volumes.py
test_03_download_attached_volume | Success | 156.02 | test_volumes.py
test_02_attach_volume | Success | 95.59 | test_volumes.py
test_01_create_volume | Success | 711.06 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.18 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.70 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 158.66 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 267.19 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.64 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.11 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.67 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.06 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.92 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 126.20 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.13 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.26 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 60.49 | test_templates.py
test_08_list_system_templates | Success | 0.02 | test_templates.py
test_07_list_public_templates | Success | 0.02 | test_templates.py
test_05_template_permissions | Success | 0.04 | test_templates.py
test_04_extract_template | Success | 5.14 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.16 | test_templates.py
test_01_create_template | Success | 30.29 | test_templates.py
test_10_destroy_cpvm | Success | 131.42 | test_ssvm.py
test_09_destroy_ssvm | Success | 168.49 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.51 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.41 | test_ssvm.py
test_06_stop_cpvm | Success | 131.56 | test_ssvm.py
test_05_stop_ssvm | Success | 133.54 | test_ssvm.py
test_04_cpvm_internals | Success | 1.14 | test_ssvm.py
test_03_ssvm_internals | Success | 3.29 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.08 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.09 | test_ssvm.py
test_01_snapshot_root_disk | Success | 10.94 | test_snapshots.py
test_04_change_offering_small | Success | 239.55 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
   

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876318#comment-15876318
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r102265306
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,79 +274,97 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List<Long> networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List<AccountVO> accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List<DedicatedResourceVO> dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+boolean rollBackState = false;
+boolean hasDedicatedResources = false;
+
+try {
+long ownerId = domain.getAccountId();
+if ((cleanup != null) && cleanup.booleanValue()) {
--- End diff --

Done, thanks @rafaelweingartner
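
For readers skimming the diff: the fix makes deleteDomain and the cleanup task 
contend on the same named lock, so the mark-Inactive/cleanup/remove sequence 
can no longer interleave with {{AccountCleanupTask}}. A minimal sketch of that 
pattern using plain JDK locks follows; these are hypothetical stand-ins, not 
CloudStack's GlobalLock API:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class NamedLockSketch {
    // Process-wide named lock registry, in the spirit of
    // GlobalLock.getInternLock("AccountCleanup") in the diff above.
    private static final Map<String, ReentrantLock> LOCKS = new ConcurrentHashMap<>();

    static boolean runExclusively(String name, long timeoutSec, Runnable critical) {
        ReentrantLock lock = LOCKS.computeIfAbsent(name, k -> new ReentrantLock());
        try {
            if (!lock.tryLock(timeoutSec, TimeUnit.SECONDS)) {
                return false; // could not get the lock; the caller backs off
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        try {
            critical.run(); // e.g. mark Inactive + cleanup + remove, as one unit
            return true;
        } finally {
            lock.unlock(); // always released, even if the cleanup throws
        }
    }

    public static void main(String[] args) {
        boolean ran = runExclusively("AccountCleanup", 30,
                () -> System.out.println("deleteDomain critical section"));
        System.out.println("ran exclusively: " + ran);
    }
}
{code}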


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that the {{deleteDomain}} task 
> failed for domains with multiple accounts and resources. Examining the logs, 
> it was found that if the Account Cleanup task is executed after the domain 
> (and all of its children) is marked as Inactive, but before the delete domain 
> task finishes, it produces a failure.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks the domain to delete (and its children) as Inactive 
> before deleting them, {{AccountCleanupTask}} removes the marked domains when 
> it executes. When there are still resources to clean up on the domain's 
> accounts, the dom

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876315#comment-15876315
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r102265150
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,79 +274,97 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List<Long> networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List<AccountVO> accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List<DedicatedResourceVO> dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+boolean rollBackState = false;
+boolean hasDedicatedResources = false;
+
+try {
+long ownerId = domain.getAccountId();
+if ((cleanup != null) && cleanup.booleanValue()) {
+if (!cleanupDomain(domain.getId(), ownerId)) {
 rollBackState = true;
 CloudRuntimeException e =
-new CloudRuntimeException("Delete failed on 
domain " + domain.getName() + " (id: " + domain.getId() +
-"); Please make sure all users and sub 
domains have been removed from the domain before deleting");
+new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
+domain.getId() + ").");
 e.addProxyObject(domain.getUuid(), "domainId");
 throw e;
 }
-_messageBus.publish(_name, 
MESSAGE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
 } else {
-rollBackState = true;
-String msg = null;
-if (!accountsForCleanup.isEmpty()) {
-msg = accountsForCleanup.size() + " accounts to 
cleanup";
-} else if (!networkIds.isEmpty()) {
-msg = networkId

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876312#comment-15876312
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r102264841
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,79 +274,97 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List<Long> networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List<AccountVO> accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List<DedicatedResourceVO> dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+boolean rollBackState = false;
+boolean hasDedicatedResources = false;
+
+try {
+long ownerId = domain.getAccountId();
+if ((cleanup != null) && cleanup.booleanValue()) {
+if (!cleanupDomain(domain.getId(), ownerId)) {
 rollBackState = true;
 CloudRuntimeException e =
-new CloudRuntimeException("Delete failed on 
domain " + domain.getName() + " (id: " + domain.getId() +
-"); Please make sure all users and sub 
domains have been removed from the domain before deleting");
+new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
+domain.getId() + ").");
 e.addProxyObject(domain.getUuid(), "domainId");
 throw e;
 }
-_messageBus.publish(_name, 
MESSAGE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
 } else {
-rollBackState = true;
-String msg = null;
-if (!accountsForCleanup.isEmpty()) {
-msg = accountsForCleanup.size() + " accounts to 
cleanup";
-} else if (!networkIds.isEmpty()) {
-msg = networkId

[jira] [Commented] (CLOUDSTACK-9793) Unnecessary conversion from IPNetwork to list causes router slowdown when processing static Nat rules

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876279#comment-15876279
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9793:


Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1948
  
@ProjectMoon may I ask you a question?
The "net" object is already an array/map, right?


> Unnecessary conversion from IPNetwork to list causes router slowdown when 
> processing static Nat rules
> -
>
> Key: CLOUDSTACK-9793
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9793
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.7.0, 4.8.0, 4.9.0
>Reporter: Stefania Bergljot Stefansdottir
> Fix For: 4.10.0.0
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> In the CsInterface class in CsAddress.py on the virtual router there's a 
> function
> {code:java}
> def ip_in_subnet(self, ip):
> ipo = IPAddress(ip)
> net = IPNetwork("%s/%s" % (self.get_ip(), self.get_size()))
> return ipo in list(net)
> {code}
> Skipping the list conversion and using "return ipo in net" is much faster and 
> the functionality is the same. It can prevent a router timeout when attaching 
> or detaching multiple IPs.
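
The performance point, restated as a hedged Java sketch (illustrative helper 
names, not the VR's actual Python code): the list conversion turns a 
constant-time bounds check into an enumeration of every address in the subnet, 
e.g. 2,048 candidates per call for a /21.

{code:java}
public class SubnetCheckSketch {
    // Constant-time containment: mask both addresses and compare. This is
    // essentially what "ipo in net" amounts to for a netaddr IPNetwork.
    static boolean inSubnetFast(long ip, long network, int prefix) {
        long mask = prefix == 0 ? 0 : (0xFFFFFFFFL << (32 - prefix)) & 0xFFFFFFFFL;
        return (ip & mask) == (network & mask);
    }

    // What "ipo in list(net)" effectively does: materialize all
    // 2^(32 - prefix) addresses and scan them one by one.
    static boolean inSubnetSlow(long ip, long network, int prefix) {
        long mask = prefix == 0 ? 0 : (0xFFFFFFFFL << (32 - prefix)) & 0xFFFFFFFFL;
        long base = network & mask;
        long size = 1L << (32 - prefix);
        for (long i = 0; i < size; i++) {
            if (base + i == ip) {
                return true;
            }
        }
        return false;
    }

    static long ip(int a, int b, int c, int d) {
        return ((long) a << 24) | ((long) b << 16) | ((long) c << 8) | d;
    }

    public static void main(String[] args) {
        long net = ip(10, 1, 2, 0);
        long addr = ip(10, 1, 2, 42);
        System.out.println(inSubnetFast(addr, net, 24)); // true, one comparison
        System.out.println(inSubnetSlow(addr, net, 24)); // true, up to 256 iterations
    }
}
{code}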



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9793) Unnecessary conversion from IPNetwork to list causes router slowdown when processing static Nat rules

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876275#comment-15876275
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9793:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1948
  
Trillian test result (tid-870)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 33543 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1948-t870-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 874.34 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 379.02 
| test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 330.77 | 
test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 160.11 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.25 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 250.97 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 298.25 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 526.31 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 506.37 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1402.83 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 566.05 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.61 | test_volumes.py
test_08_resize_volume | Success | 156.45 | test_volumes.py
test_07_resize_fail | Success | 161.63 | test_volumes.py
test_06_download_detached_volume | Success | 156.41 | test_volumes.py
test_05_detach_volume | Success | 156.70 | test_volumes.py
test_04_delete_attached_volume | Success | 151.31 | test_volumes.py
test_03_download_attached_volume | Success | 151.39 | test_volumes.py
test_02_attach_volume | Success | 96.19 | test_volumes.py
test_01_create_volume | Success | 717.60 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.60 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.63 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 163.89 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 248.34 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.72 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.27 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 30.86 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.98 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.88 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.40 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 40.55 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.08 | test_templates.py
test_04_extract_template | Success | 5.17 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 45.46 | test_templates.py
test_10_destroy_cpvm | Success | 191.69 | test_ssvm.py
test_09_destroy_ssvm | Success | 133.65 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.59 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.57 | test_ssvm.py
test_06_stop_cpvm | Success | 131.89 | test_ssvm.py
test_05_stop_ssvm | Success | 163.73 | test_ssvm.py
test_04_cpvm_internals | Success | 1.22 | test_ssvm.py
test_03_ssvm_internals | Success | 4.20 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.15 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.34 | test_snapshots.py
test_04_change_offering_small | Success | 210.37 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.06 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.07 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | test_secondary_storage.py
  

[jira] [Commented] (CLOUDSTACK-9746) system-vm: logrotate config causes critical failures

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876189#comment-15876189
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9746:


Github user serbaut commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
Ok. I removed it from rsyslog since it should be safe there.


> system-vm: logrotate config causes critical failures
> 
>
> Key: CLOUDSTACK-9746
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9746
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: SystemVM
>Affects Versions: 4.8.0, 4.9.0
>Reporter: Joakim Sernbrant
>Priority: Critical
>
> CLOUDSTACK-6885 changed logrotate from time-based to size-based rotation. 
> This means that each log can grow to twice its configured size (due to 
> delaycompress).
> For example:
> 50M auth.log
> 50M auth.log.1
> 10M cloud.log
> 10M cloud.log.1
> 50M cron.log
> 50M cron.log.1
> 50M messages
> 50M messages.1
> ...
> Some files will grow slowly, but eventually they will reach their max size. 
> The total allowed log size with the current config is well beyond the size of 
> the log partition.
> Having a full /var/log puts the VR in a state where operations on it 
> critically fail.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9746) system-vm: logrotate config causes critical failures

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876184#comment-15876184
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9746:


Github user leprechau commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
We always want `compress`... but the only time you need or want 
`delaycompress` is when you can't be sure that the program writing to the log 
can be successfully told to stop appending to it.  In the case where you are 
certain that the writing program is going to do the right thing, there is no 
need to add `delaycompress`, as it just takes up extra space in already-rotated 
logs until the next iteration.
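
As a concrete illustration, the difference is a single directive. This is a 
hypothetical stanza for one file, not the systemvm's shipped config:

{noformat}
# With 'compress' alone, cloud.log.1 is gzipped as soon as it is rotated.
# Adding 'delaycompress' would keep cloud.log.1 uncompressed until the next
# rotation, which is what doubles the worst-case space per log file.
/var/log/cloud.log {
    size 10M
    rotate 2
    compress
    missingok
    notifempty
}
{noformat}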


> system-vm: logrotate config causes critical failures
> 
>
> Key: CLOUDSTACK-9746
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9746
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: SystemVM
>Affects Versions: 4.8.0, 4.9.0
>Reporter: Joakim Sernbrant
>Priority: Critical
>
> CLOUDSTACK-6885 changed logrotate from time-based to size-based rotation. 
> This means that each log can grow to twice its configured size (due to 
> delaycompress).
> For example:
> 50M auth.log
> 50M auth.log.1
> 10M cloud.log
> 10M cloud.log.1
> 50M cron.log
> 50M cron.log.1
> 50M messages
> 50M messages.1
> ...
> Some files will grow slowly, but eventually they will reach their max size. 
> The total allowed log size with the current config is well beyond the size of 
> the log partition.
> Having a full /var/log puts the VR in a state where operations on it 
> critically fail.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9746) system-vm: logrotate config causes critical failures

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876177#comment-15876177
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9746:


Github user serbaut commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
Is it safe to remove delaycompress across the board? I assume it is there 
for a reason.


> system-vm: logrotate config causes critical failures
> 
>
> Key: CLOUDSTACK-9746
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9746
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: SystemVM
>Affects Versions: 4.8.0, 4.9.0
>Reporter: Joakim Sernbrant
>Priority: Critical
>
> CLOUDSTACK-6885 changed logrotate from time-based to size-based rotation. 
> This means that each log can grow to twice its configured size (due to 
> delaycompress).
> For example:
> 50M auth.log
> 50M auth.log.1
> 10M cloud.log
> 10M cloud.log.1
> 50M cron.log
> 50M cron.log.1
> 50M messages
> 50M messages.1
> ...
> Some files will grow slowly, but eventually they will reach their max size. 
> The total allowed log size with the current config is well beyond the size of 
> the log partition.
> Having a full /var/log puts the VR in a state where operations on it 
> critically fail.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9717) [VMware] RVRs have mismatching MAC addresses for extra public NICs

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876126#comment-15876126
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9717:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1878#discussion_r102226833
  
--- Diff: 
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
 ---
@@ -2071,6 +2120,14 @@ protected StartAnswer execute(StartCommand cmd) {
 }
 }
 
+private void replaceNicsMacSequenceInBootArgs(String oldMacSequence, 
String newMacSequence, VirtualMachineTO vmSpec) {
+String bootArgs = vmSpec.getBootArgs();
+if (!StringUtils.isEmpty(bootArgs) && 
!StringUtils.isEmpty(oldMacSequence) && !StringUtils.isEmpty(newMacSequence)) {
+//Update boot args with the new nic mac addresses
--- End diff --

What about moving this comment to the method documentation?
Also, how do you feel about test cases? The method is pretty simple and it 
will not be hard to write some unit tests for it.
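
The contract such a test would pin down can be sketched as a pure function 
(this mirrors the behaviour implied by the diff; the boot-args format in the 
example is made up, since the diff does not show it, and this is not the PR's 
actual code):

```java
// Self-contained sketch of what a unit test would verify: the old MAC
// sequence in the boot args is swapped for the new one, and blank inputs
// leave the boot args untouched. Run with `java -ea MacSequenceSketch`.
public class MacSequenceSketch {
    static String replaceMacSequence(String bootArgs, String oldSeq, String newSeq) {
        if (bootArgs == null || bootArgs.isEmpty()
                || oldSeq == null || oldSeq.isEmpty()
                || newSeq == null || newSeq.isEmpty()) {
            return bootArgs; // nothing to rewrite
        }
        return bootArgs.replace(oldSeq, newSeq);
    }

    public static void main(String[] args) {
        String boot = "nic_macs=02:00:00:00:00:01|02:00:00:00:00:02"; // made-up format
        String out = replaceMacSequence(boot,
                "02:00:00:00:00:01|02:00:00:00:00:02",
                "02:00:00:00:00:0a|02:00:00:00:00:0b");
        assert out.equals("nic_macs=02:00:00:00:00:0a|02:00:00:00:00:0b");
        assert replaceMacSequence("", "a", "b").isEmpty(); // blank args pass through
    }
}
```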



> [VMware] RVRs have mismatching MAC addresses for extra public NICs
> --
>
> Key: CLOUDSTACK-9717
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9717
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> [CLOUDSTACK-985|https://issues.apache.org/jira/browse/CLOUDSTACK-985] doesn't 
> seem to be completely fixed.
> ISSUE
> ==
> If there are two public networks on two VLANs, and a pair of redundant VRs 
> acquire IPs from both, the associated NICs on the redundant VRs will have 
> mismatching MAC addresses.  
> The example below shows the eth2 NICs for the first public network 
> (210.140.168.0/21) have matching MAC addresses (06:c4:b6:00:03:df) as 
> expected, but the eth3 NICs for the second one (210.140.160.0/21) have 
> mismatching MACs (02:00:50:e1:6c:cd versus 02:00:5a:e6:6c:d5).
> *r-43584-VM (Master)*
> 6: eth2:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 02:00:50:e1:6c:cd brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> *r-43585-VM (Backup)*
> 6: eth2:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 02:00:5a:e6:6c:d5 brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> CloudStack should ensure that the NICs for all public networks have matching 
> MACs.
> REPRO STEPS
> ==
> 1) Set up redundant VR.
> 2) Set up multiple public networks on different VLANs.
> 3) Acquire IPs in the RVR network until the VRs get IPs in the different 
> public networks.
> 4) Confirm the mismatching MAC addresses.
> EXPECTED BEHAVIOR
> ==
> Redundant VRs have matching MACs for all public networks.
> ACTUAL BEHAVIOR
> ==
> Redundant VRs have matching MACs only for the first public network.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8239) Add support for VirtIO-SCSI for KVM hypervisors

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876110#comment-15876110
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8239:


Github user dmabry commented on the issue:

https://github.com/apache/cloudstack/pull/1955
  
We are deploying this to our QA environment right now and hope to have it 
tested in a few days.  Great work @kiwiflyer and @nathanejohnson.


> Add support for VirtIO-SCSI for KVM hypervisors
> ---
>
> Key: CLOUDSTACK-8239
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8239
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Storage Controller
>Affects Versions: 4.6.0
> Environment: KVM
>Reporter: Andrei Mikhailovsky
>Assignee: Wido den Hollander
>Priority: Critical
>  Labels: ceph, gsoc2017, kvm, libvirt, rbd, storage_drivers, 
> virtio
> Fix For: Future
>
>
> It would be nice to have support for virtio-scsi for KVM hypervisors.
> The reasons for using virtio-scsi instead of virtio-blk would be to increase 
> the number of devices you can attach to a VM and to gain the ability to use 
> discard to reclaim unused blocks from backend storage such as Ceph RBD. There 
> is also talk of a performance advantage as well.
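
For context, attaching a disk through virtio-scsi in libvirt amounts to 
defining a virtio-scsi controller and putting the disk on the `scsi` bus. A 
hand-written fragment (not CloudStack-generated; RBD monitor hosts and auth 
are omitted for brevity) might look like:

```
<!-- virtio-scsi controller plus one disk on the scsi bus; discard='unmap'
     lets the guest's TRIM/discard requests reach the backing store -->
<controller type='scsi' model='virtio-scsi' index='0'/>
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' discard='unmap'/>
  <source protocol='rbd' name='pool/volume-1234'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

The device-count benefit comes from virtio-blk exposing each disk as its own 
PCI device, whereas a single virtio-scsi controller can address many LUNs.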



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9746) system-vm: logrotate config causes critical failures

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876104#comment-15876104
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9746:


Github user dmabry commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
@serbaut I agree with @ustcweizhou.  Please remove delaycompress and raise the 
rotate count up to 10.  I'd like to get this PR in as it is the second part of 
the problem resolution for my issue.  After that LGTM.


> system-vm: logrotate config causes critical failures
> 
>
> Key: CLOUDSTACK-9746
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9746
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: SystemVM
>Affects Versions: 4.8.0, 4.9.0
>Reporter: Joakim Sernbrant
>Priority: Critical
>
> CLOUDSTACK-6885 changed logrotate from time-based to size-based rotation. This 
> means that each log will grow to up to twice its configured size (due to 
> delaycompress).
> For example:
> 50M auth.log
> 50M auth.log.1
> 10M cloud.log
> 10M cloud.log.1
> 50M cron.log
> 50M cron.log.1
> 50M messages
> 50M messages.1
> ...
> Some files will grow slowly but eventually they will get to their max size. 
> The total allowed log size with the current config is well beyond the size of 
> the log partition.
> Having a full /dev/log puts the VR in a state where operations on it 
> critically fail.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9748) VPN Users search functionality broken

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876079#comment-15876079
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9748:


Github user Ashadeepa commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1957#discussion_r10411
  
--- Diff: server/src/com/cloud/network/vpn/RemoteAccessVpnManagerImpl.java 
---
@@ -621,6 +627,10 @@ public void 
doInTransactionWithoutResult(TransactionStatus status) {
 sc.setParameters("username", username);
 }
 
+if (keyword != null) {
--- End diff --

@ustcweizhou : My bad. Amended the changes. Thanks.
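
For readers following the fix, the usual CloudStack pattern for a `keyword` 
filter is a LIKE match wired into the search criteria. A sketch of the shape 
such a change typically takes (DAO and entity names are illustrative; this is 
not the exact patch):

```java
// Typical keyword handling in CloudStack list* methods: match the username
// against "%keyword%" through a nested OR criteria. Names are illustrative.
if (keyword != null) {
    SearchCriteria<VpnUserVO> ssc = _vpnUsersDao.createSearchCriteria();
    ssc.addOr("username", SearchCriteria.Op.LIKE, "%" + keyword + "%");
    sc.addAnd("username", SearchCriteria.Op.SC, ssc);
}
```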


> VPN Users search functionality broken
> -
>
> Key: CLOUDSTACK-9748
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9748
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Ashadeepa Debnath
>
> VPN Users search functionality broken
> If you try to search VPN users by their user name, you will not be able to 
> find them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9748) VPN Users search functionality broken

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876047#comment-15876047
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9748:


Github user Ashadeepa commented on the issue:

https://github.com/apache/cloudstack/pull/1957
  
@ustcweizhou : My bad. Amended the changes. Thanks.


> VPN Users search functionality broken
> -
>
> Key: CLOUDSTACK-9748
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9748
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Ashadeepa Debnath
>
> VPN Users search functionality broken
> If you try to search VPN users by their user name, you will not be able to 
> find them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9607) Preventing template deletion when template is in use.

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876010#comment-15876010
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9607:


Github user jburwell commented on the issue:

https://github.com/apache/cloudstack/pull/1773
  
@priyankparihar I agree with @ustcweizhou regarding the default value of 
`forced` in terms of backwards compatibility.

Also, why do we permit deletion of a template when it is associated with one 
or more active volumes?  It seems like we are giving the user the means to 
corrupt their system.
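
To make the point concrete, the kind of guard under discussion could look 
roughly like the sketch below (the DAO, command accessor, and message are 
hypothetical names; the actual PR may implement this differently):

```java
// Sketch: refuse to delete a template that existing/running VMs still
// reference unless the caller explicitly forces it. _vmDao, listByTemplateId
// and cmd.isForced() are illustrative, not the PR's actual code.
List<UserVmVO> vmsUsingTemplate = _vmDao.listByTemplateId(template.getId());
if (!vmsUsingTemplate.isEmpty() && !cmd.isForced()) {
    throw new InvalidParameterValueException("Template " + template.getId()
            + " is in use by " + vmsUsingTemplate.size()
            + " VM(s); pass forced=true to delete it anyway");
}
```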


> Preventing template deletion when template is in use.
> -
>
> Key: CLOUDSTACK-9607
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9607
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
>
> Consider this scenario:
> 1. User launches a VM from Template and keeps it running
> 2. Admin logs in and deletes that template [CloudPlatform does not check for 
> existing / running VMs while the deletion is done]
> 3. User resets the VM
> 4. CloudPlatform fails to start the VM as it cannot find the corresponding 
> template.
> It throws an error such as: 
> java.lang.RuntimeException: Job failed due to exception Resource [Host:11] is 
> unreachable: Host 11: Unable to start instance due to can't find ready 
> template: 209 for data center 1
> at 
> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:113)
> at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:495)
> Client is requesting better handling of this scenario. We need to check for 
> existing / running VMs when the template is deleted and warn the admin about 
> the possible issue that may occur.
> REPRO STEPS
> ==
> 1. Launch a VM from Template and keep it running
> 2. Now delete that template 
> 3. Reset the VM
> 4. CloudPlatform fails to start the VM as it cannot find the corresponding 
> template.
> EXPECTED BEHAVIOR
> ==
> Cloud platform should throw a warning message when a template that is being 
> used by existing / running VMs is deleted
> ACTUAL BEHAVIOR
> ==
> Cloud platform does not throw any warning.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9748) VPN Users search functionality broken

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875988#comment-15875988
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9748:


Github user ustcweizhou commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1957#discussion_r102209429
  
--- Diff: server/src/com/cloud/network/vpn/RemoteAccessVpnManagerImpl.java 
---
@@ -621,6 +627,10 @@ public void 
doInTransactionWithoutResult(TransactionStatus status) {
 sc.setParameters("username", username);
 }
 
+if (keyword != null) {
--- End diff --

it seems line 630 to 633 are not needed


> VPN Users search functionality broken
> -
>
> Key: CLOUDSTACK-9748
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9748
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Ashadeepa Debnath
>
> VPN Users search functionality broken
> If you try to search VPN users by their user name, you will not be able to 
> find them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9748) VPN Users search functionality broken

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875947#comment-15875947
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9748:


Github user Ashadeepa commented on the issue:

https://github.com/apache/cloudstack/pull/1910
  
@ustcweizhou : Thanks. I have made the changes.

New PR : https://github.com/apache/cloudstack/pull/1957. 


> VPN Users search functionality broken
> -
>
> Key: CLOUDSTACK-9748
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9748
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Ashadeepa Debnath
>
> VPN Users search functionality broken
> If you try to search VPN users by their user name, you will not be able to 
> find them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9748) VPN Users search functionality broken

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875946#comment-15875946
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9748:


GitHub user Ashadeepa opened a pull request:

https://github.com/apache/cloudstack/pull/1957

CLOUDSTACK-9748:VPN Users search functionality broken

VPN Users search functionality broken
If you try to search VPN users by their user name, you will not be able to 
find them.

Fixed the same.

Parent PR : https://github.com/apache/cloudstack/pull/1910

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Accelerite/cloudstack CLOUDSTACK-9748

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1957.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1957


commit 588ececd045c9175b33647375fd702e3e37f2126
Author: root 
Date:   2017-01-17T18:09:17Z

CLOUDSTACK-9748:VPN Users search functionality broken




> VPN Users search functionality broken
> -
>
> Key: CLOUDSTACK-9748
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9748
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Ashadeepa Debnath
>
> VPN Users search functionality broken
> If you try to search VPN users by their user name, you will not be able to 
> find them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9698) Make the wait timeout for NIC adapter hotplug as configurable

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875919#comment-15875919
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9698:


Github user sateesh-chodapuneedi commented on the issue:

https://github.com/apache/cloudstack/pull/1861
  
@borisstoyanov, thanks for running the tests.

I see 1 test error in the above results; this test has been failing in many 
other PRs as well and doesn't seem related to the changes here?

`2017-02-21 00:56:54,625 - CRITICAL - FAILED: 
test_04_rvpc_privategw_static_routes: ['Traceback (most recent call last):\n', 
'  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run\n
testMethod()\n', '  File "/marvin/tests/smoke/test_privategw_acl.py", line 295, 
in test_04_rvpc_privategw_static_routes\nself.performVPCTests(vpc_off)\n', 
'  File "/marvin/tests/smoke/test_privategw_acl.py", line 362, in 
performVPCTests\nself.check_pvt_gw_connectivity(vm1, public_ip_1, 
[vm2.nic[0].ipaddress, vm1.nic[0].ipaddress])\n', '  File 
"/marvin/tests/smoke/test_privategw_acl.py", line 724, in 
check_pvt_gw_connectivity\n"Ping to VM on Network Tier N from VM in Network 
Tier A should be successful at least for 2 out of 3 VMs"\n', '  File 
"/usr/lib64/python2.7/unittest/case.py", line 462, in assertTrue\nraise 
self.failureException(msg)\n', 'AssertionError: Ping to VM on Network Tier N 
from VM in Network Tier A should be successful at least for 2 out of 3 VMs\n']
`


> Make the wait timeout for NIC adapter hotplug as configurable
> -
>
> Key: CLOUDSTACK-9698
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9698
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.9.0.1
> Environment: ACS 4.9 branch commit 
> a0e36b73aebe43bfe6bec3ef8f53e8cb99ecbc32
> vSphere 5.5
>Reporter: Sateesh Chodapuneedi
>Assignee: Sateesh Chodapuneedi
> Fix For: 4.9.1.0
>
>
> Currently ACS waits for 15 seconds (*hard coded*) for a hot-plugged NIC in the 
> VR to be detected by the guest OS. The time taken to detect a hot-plugged NIC 
> in the guest OS depends on the type of NIC adapter (E1000, VMXNET3, E1000e, 
> etc.) and the guest OS itself. In uncommon scenarios NIC detection may take 
> longer than 15 seconds; in such cases the NIC hotplug is treated as a failure, 
> which results in VPC tier configuration failure. Making the wait timeout for 
> NIC adapter hotplug configurable will be helpful for admins in such 
> scenarios. 
> Also, if VMware introduces new NIC adapter types in the future that take 
> longer to be detected by the guest OS, it is good to have the flexibility of 
> configuring the wait timeout.
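
CloudStack surfaces settings like this through its `ConfigKey` framework; a 
sketch of what a configurable hotplug timeout could look like (the key name, 
default, and description here are illustrative, not necessarily what the PR 
uses):

```java
// Illustrative ConfigKey: admins could then tune the wait per deployment
// instead of relying on a hard-coded 15 seconds.
static final ConfigKey<Integer> NicHotplugWaitTimeout = new ConfigKey<Integer>(
        "Advanced", Integer.class, "vmware.nic.hotplug.wait.timeout", "15000",
        "Wait timeout (in milliseconds) for a hot-plugged NIC to be detected by the guest OS",
        true, ConfigKey.Scope.Global);
```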



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8882) Network offering usage is sometimes greater than aggregation range

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875903#comment-15875903
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8882:


Github user DaanHoogland commented on the issue:

https://github.com/apache/cloudstack/pull/859
  
@kishankavala I forgot about this one, guess we won't make 4.7 ;)

can you appease cloudmonger?


> Network offering usage is sometimes greater than aggregation range
> --
>
> Key: CLOUDSTACK-8882
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8882
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Usage
>Reporter: Kishan Kavala
>Assignee: Kishan Kavala
>
> Create a VM with multiple NICs:
>  - If 2 networks use the same network offering, network offering usage will be 
> 48 hrs (assuming 24-hr aggregation)
> - Usage should be reported per NIC instead of per network offering



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9657) Ipset command fails for VM's with long internal name

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875884#comment-15875884
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9657:


Github user jayapalu commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1824#discussion_r102192868
  
--- Diff: scripts/vm/hypervisor/xenserver/vmops ---
@@ -232,28 +233,50 @@ def deleteFile(session, args):
 
 return txt
 
+#limit all iptables chain names to length 24 because cleanup_rules groups the 
vm chains excluding the -def/-eg suffixes
+#using length 24 avoids creating multiple iptables chains for a single vm
 def chain_name(vm_name):
 if vm_name.startswith('i-') or vm_name.startswith('r-'):
 if vm_name.endswith('untagged'):
 return '-'.join(vm_name.split('-')[:-1])
 if len(vm_name) > 28:
--- End diff --

Updated it.


> Ipset command fails for VM's with long internal name
> 
>
> Key: CLOUDSTACK-9657
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9657
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
> Fix For: 4.9.1.0
>
>
> ipset rule configuration in security groups is failing for VMs with longer 
> names: 
> ipset -N 12345677 nethash
> ipset v6.11: Syntax error: setname '12345677' is 
> longer than 31 characters



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9721) Remove deprecated/unused global configuration parameter - consoleproxy.loadscan.interval

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875874#comment-15875874
 ] 

ASF subversion and git services commented on CLOUDSTACK-9721:
-

Commit fe555e194e753b01a2e2bd9fdd4ec2e4dd96a0fc in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=fe555e1 ]

Merge pull request #1881 from Accelerite/CLOUDSTACK-9721

CLOUDSTACK-9721: Remove deprecated/unused global configuration parameter - 
consoleproxy.loadscan.interval

* pr/1881:
  CLOUDSTACK-9721: Remove deprecated/unused global configuration parameter - 
consoleproxy.loadscan.interval

Signed-off-by: Rajani Karuturi 


> Remove deprecated/unused global configuration parameter - 
> consoleproxy.loadscan.interval
> 
>
> Key: CLOUDSTACK-9721
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9721
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> CloudStack doesn't use the "consoleproxy.loadscan.interval" parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9721) Remove deprecated/unused global configuration parameter - consoleproxy.loadscan.interval

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875876#comment-15875876
 ] 

ASF subversion and git services commented on CLOUDSTACK-9721:
-

Commit fe555e194e753b01a2e2bd9fdd4ec2e4dd96a0fc in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=fe555e1 ]

Merge pull request #1881 from Accelerite/CLOUDSTACK-9721

CLOUDSTACK-9721: Remove deprecated/unused global configuration parameter - 
consoleproxy.loadscan.interval

* pr/1881:
  CLOUDSTACK-9721: Remove deprecated/unused global configuration parameter - 
consoleproxy.loadscan.interval

Signed-off-by: Rajani Karuturi 


> Remove deprecated/unused global configuration parameter - 
> consoleproxy.loadscan.interval
> 
>
> Key: CLOUDSTACK-9721
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9721
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> CloudStack doesn't use the "consoleproxy.loadscan.interval" parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9721) Remove deprecated/unused global configuration parameter - consoleproxy.loadscan.interval

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875880#comment-15875880
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9721:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1881


> Remove deprecated/unused global configuration parameter - 
> consoleproxy.loadscan.interval
> 
>
> Key: CLOUDSTACK-9721
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9721
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> CloudStack doesn't use the "consoleproxy.loadscan.interval" parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8737) Remove out-of-band VR reboot code based on persistent VR configuration changes

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875877#comment-15875877
 ] 

ASF subversion and git services commented on CLOUDSTACK-8737:
-

Commit 50147a4208f5047a6ec69239b5c8099082523eb5 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=50147a4 ]

Merge pull request #1882 from Accelerite/CLOUDSTACK-8737_CodeCleanup

CLOUDSTACK-8737: Removed the missed out-of-band VR reboot code, not required 
based on persistent VR changes.

* pr/1882:
  CLOUDSTACK-8737: Removed the missed out-of-band VR reboot code, not required 
based on persistent VR changes.

Signed-off-by: Rajani Karuturi 


> Remove out-of-band VR reboot code based on persistent VR configuration changes
> --
>
> Key: CLOUDSTACK-8737
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8737
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
>Reporter: Koushik Das
>Assignee: Koushik Das
> Fix For: 4.6.0
>
>
> VR reboot was required to reprogram rules in case it was stopped and started 
> outside of CS. With persistent VR configuration changes (added in 4.6) the 
> rules are persisted across a stop-start of VR. So no need to do VR reboot. 
> Refer to the following discussion on dev list.
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201506.mbox/%3cac13e3c1-3719-4b48-a35d-dbc4ba704...@schubergphilis.com%3e



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8737) Remove out-of-band VR reboot code based on persistent VR configuration changes

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875881#comment-15875881
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8737:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1882


> Remove out-of-band VR reboot code based on persistent VR configuration changes
> --
>
> Key: CLOUDSTACK-8737
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8737
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
>Reporter: Koushik Das
>Assignee: Koushik Das
> Fix For: 4.6.0
>
>
> VR reboot was required to reprogram rules in case it was stopped and started 
> outside of CS. With persistent VR configuration changes (added in 4.6) the 
> rules are persisted across a stop-start of VR. So no need to do VR reboot. 
> Refer to the following discussion on dev list.
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201506.mbox/%3cac13e3c1-3719-4b48-a35d-dbc4ba704...@schubergphilis.com%3e



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8737) Remove out-of-band VR reboot code based on persistent VR configuration changes

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875879#comment-15875879
 ] 

ASF subversion and git services commented on CLOUDSTACK-8737:
-

Commit 50147a4208f5047a6ec69239b5c8099082523eb5 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=50147a4 ]

Merge pull request #1882 from Accelerite/CLOUDSTACK-8737_CodeCleanup

CLOUDSTACK-8737: Removed the missed out-of-band VR reboot code, not required 
based on persistent VR changes.

* pr/1882:
  CLOUDSTACK-8737: Removed the missed out-of-band VR reboot code, not required 
based on persistent VR changes.

Signed-off-by: Rajani Karuturi 


> Remove out-of-band VR reboot code based on persistent VR configuration changes
> --
>
> Key: CLOUDSTACK-8737
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8737
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
>Reporter: Koushik Das
>Assignee: Koushik Das
> Fix For: 4.6.0
>
>
> VR reboot was required to reprogram rules in case it was stopped and started 
> outside of CS. With persistent VR configuration changes (added in 4.6) the 
> rules are persisted across a stop-start of VR. So no need to do VR reboot. 
> Refer to the following discussion on dev list.
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201506.mbox/%3cac13e3c1-3719-4b48-a35d-dbc4ba704...@schubergphilis.com%3e



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9721) Remove deprecated/unused global configuration parameter - consoleproxy.loadscan.interval

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875873#comment-15875873
 ] 

ASF subversion and git services commented on CLOUDSTACK-9721:
-

Commit fe555e194e753b01a2e2bd9fdd4ec2e4dd96a0fc in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=fe555e1 ]

Merge pull request #1881 from Accelerite/CLOUDSTACK-9721

CLOUDSTACK-9721: Remove deprecated/unused global configuration parameter - 
consoleproxy.loadscan.interval

* pr/1881:
  CLOUDSTACK-9721: Remove deprecated/unused global configuration parameter - 
consoleproxy.loadscan.interval

Signed-off-by: Rajani Karuturi 


> Remove deprecated/unused global configuration parameter - 
> consoleproxy.loadscan.interval
> 
>
> Key: CLOUDSTACK-9721
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9721
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> CloudStack doesn't use the "consoleproxy.loadscan.interval" parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8737) Remove out-of-band VR reboot code based on persistent VR configuration changes

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875878#comment-15875878
 ] 

ASF subversion and git services commented on CLOUDSTACK-8737:
-

Commit 50147a4208f5047a6ec69239b5c8099082523eb5 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=50147a4 ]

Merge pull request #1882 from Accelerite/CLOUDSTACK-8737_CodeCleanup

CLOUDSTACK-8737: Removed the missed out-of-band VR reboot code, not required 
based on persistent VR changes.

* pr/1882:
  CLOUDSTACK-8737: Removed the missed out-of-band VR reboot code, not required 
based on persistent VR changes.

Signed-off-by: Rajani Karuturi 


> Remove out-of-band VR reboot code based on persistent VR configuration changes
> --
>
> Key: CLOUDSTACK-8737
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8737
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
>Reporter: Koushik Das
>Assignee: Koushik Das
> Fix For: 4.6.0
>
>
> VR reboot was required to reprogram rules in case it was stopped and started 
> outside of CS. With persistent VR configuration changes (added in 4.6) the 
> rules are persisted across a stop-start of VR. So no need to do VR reboot. 
> Refer to the following discussion on dev list.
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201506.mbox/%3cac13e3c1-3719-4b48-a35d-dbc4ba704...@schubergphilis.com%3e



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8737) Remove out-of-band VR reboot code based on persistent VR configuration changes

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875872#comment-15875872
 ] 

ASF subversion and git services commented on CLOUDSTACK-8737:
-

Commit 0f35241aade597be8114afff74a0b2de0b91560d in cloudstack's branch 
refs/heads/master from [~sureshkumar.anaparti]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=0f35241 ]

CLOUDSTACK-8737: Removed the missed out-of-band VR reboot code, not required 
based on persistent VR changes.


> Remove out-of-band VR reboot code based on persistent VR configuration changes
> --
>
> Key: CLOUDSTACK-8737
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8737
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
>Reporter: Koushik Das
>Assignee: Koushik Das
> Fix For: 4.6.0
>
>
> VR reboot was required to reprogram rules in case it was stopped and started 
> outside of CS. With persistent VR configuration changes (added in 4.6) the 
> rules are persisted across a stop-start of VR. So no need to do VR reboot. 
> Refer to the following discussion on dev list.
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201506.mbox/%3cac13e3c1-3719-4b48-a35d-dbc4ba704...@schubergphilis.com%3e



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9721) Remove deprecated/unused global configuration parameter - consoleproxy.loadscan.interval

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875871#comment-15875871
 ] 

ASF subversion and git services commented on CLOUDSTACK-9721:
-

Commit da7148a13ed033421b2894684ff0ec34f31cb02b in cloudstack's branch 
refs/heads/master from [~sureshkumar.anaparti]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=da7148a ]

CLOUDSTACK-9721: Remove deprecated/unused global configuration parameter - 
consoleproxy.loadscan.interval


> Remove deprecated/unused global configuration parameter - 
> consoleproxy.loadscan.interval
> 
>
> Key: CLOUDSTACK-9721
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9721
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> CloudStack doesn't use the "consoleproxy.loadscan.interval" parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9733) Concurrent volume snapshots of a VM are not allowed and are not limited per host as per the global configuration parameter "concurrent.snapshots.threshold.perhost"

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875863#comment-15875863
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9733:


Github user ramkatru commented on the issue:

https://github.com/apache/cloudstack/pull/1897
  
@sureshanaparti, please look into these failures.


> Concurrent volume snapshots of a VM are not allowed and are not limited per 
> host as per the global configuration parameter 
> "concurrent.snapshots.threshold.perhost".
> 
>
> Key: CLOUDSTACK-9733
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9733
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Snapshot, Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> Pre-CloudStack 4.4.0, before the VM job framework changes (CLOUDSTACK-669), 
> Concurrent volume (both root and data) snapshots were allowed per host based 
> on the value of global config "concurrent.snapshots.threshold.perhost". The 
> volumes could belong to the same VM or spread across multiple VMs on a given 
> host. The synchronisation was done based on the host (Id).
> As part of the VM job framework changes (CLOUDSTACK-669) in CloudStack 4.4.0, 
> a separate job queue was introduced for individual VMs with a concurrency 
> level of 1 (i.e. all operations on a given VM are serialized). Volume 
> snapshot was also considered a VM operation as part of these changes and 
> goes through the VM job queue. These changes made the config 
> "concurrent.snapshots.threshold.perhost" obsolete (it was also no longer 
> getting honoured, since there is no single point of enforcement).
> Only one volume snapshot of a VM is allowed at any given point of time as the 
> sync object is the VM (id). So concurrent volume snapshots of a VM are not 
> allowed and are not limited per host as per the global configuration 
> parameter "concurrent.snapshots.threshold.perhost".
> This functionality needs to be re-introduced to execute more than 1 snapshot 
> of a VM at a time (when the underlying hypervisor supports it) and snapshots 
> should be limited per host based on the value of 
> "concurrent.snapshots.threshold.perhost" at the cluster level (for more 
> flexibility).
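
As a rough illustration of the per-host limiting being described (independent 
of CloudStack's actual async job framework, which is where the real 
enforcement would live), a semaphore keyed by host id bounds concurrent 
snapshot work:

```java
// Minimal sketch: cap concurrent snapshot operations per host. This only
// demonstrates the idea; it is not the CloudStack implementation.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

class PerHostSnapshotThrottle {
    private final int threshold; // e.g. concurrent.snapshots.threshold.perhost
    private final Map<Long, Semaphore> perHost = new ConcurrentHashMap<>();

    PerHostSnapshotThrottle(int threshold) { this.threshold = threshold; }

    void runSnapshot(long hostId, Runnable snapshotWork) throws InterruptedException {
        Semaphore slots = perHost.computeIfAbsent(hostId, id -> new Semaphore(threshold));
        slots.acquire(); // blocks once the per-host limit is reached
        try {
            snapshotWork.run();
        } finally {
            slots.release();
        }
    }
}
```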



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875846#comment-15875846
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102187534
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
--- End diff --

How about adding unit tests for the method `getDeviceId(UserVmVO vm, Long 
deviceId)`?
Things I can currently think of to test:
- `RuntimeException` if param `deviceId` is specified as a negative value
- `RuntimeException` if param `deviceId` is specified as `0L`
- `RuntimeException` if param `deviceId` is specified as a value greater 
than the "max-device-id"
- `RuntimeException` if param `deviceId` is specified as reserved id `3L`
- `RuntimeException` if param `deviceId` is specified as an id that is 
already in use
- `RuntimeException` if param `deviceId` is specified as `null` and all 
device ids are in use
- returns id specified in param `deviceId` when not `null` and the id is 
not in use
- returns lowest available id when param `deviceId` is specified as `null`

(all of the above are from my understanding of how the method should behave; a 
sketch of the allocation rule and a couple of these cases follows)
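
The rule can be mirrored as a pure function so the listed cases are checkable 
without mocking CloudStack's DAOs (this mirrors, but is not, the PR's code; 
`chooseDeviceId` and the class name are hypothetical):

```java
// Self-contained sketch of the device-id allocation contract described above.
// Run with `java -ea DeviceIdRuleSketch` so the assertions are checked.
import java.util.Set;

public class DeviceIdRuleSketch {
    static long chooseDeviceId(Long requested, Set<Long> inUse, int maxDataVolumes) {
        if (requested != null) {
            if (requested <= 0 || requested == 3L || requested > maxDataVolumes)
                throw new RuntimeException("deviceId should be 1,2,4-" + maxDataVolumes);
            if (inUse.contains(requested))
                throw new RuntimeException("deviceId " + requested + " is already in use");
            return requested;
        }
        for (long id = 1; id <= maxDataVolumes; id++)
            if (id != 3L && !inUse.contains(id)) return id; // lowest free id
        throw new RuntimeException("every available deviceId is already in use");
    }

    public static void main(String[] args) {
        assert chooseDeviceId(null, Set.of(1L, 2L), 15) == 4L; // 3 is reserved
        assert chooseDeviceId(5L, Set.of(), 15) == 5L;         // explicit free id honoured
        try { chooseDeviceId(3L, Set.of(), 15); assert false; }
        catch (RuntimeException expected) { /* reserved id rejected */ }
    }
}
```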


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. Changed this limit to a higher value directly in the DB for VMware 
> and tried attaching more than 14 disks. This was failing with the below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875847#comment-15875847
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102184780
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List<VolumeVO> vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+List<VolumeVO> vols = _volsDao.findByInstance(vm.getId());
 if (deviceId != null) {
-if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-throw new RuntimeException("deviceId should be 1,2,4-15");
+if (deviceId.longValue() > maxDataVolumesSupported || 
deviceId.longValue() == 3) {
+throw new RuntimeException("deviceId should be 1,2,4-" + 
maxDataVolumesSupported);
 }
 for (VolumeVO vol : vols) {
 if (vol.getDeviceId().equals(deviceId)) {
-throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vmId);
+throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vm.getId());
 }
 }
 } else {
 // allocate deviceId here
List<String> devIds = new ArrayList<String>();
--- End diff --

Not part of your changes but... variable `devIds` should have type 
`List<Long>` instead of `List<String>`.
All that conversion from `int` to `String` and then converting from 
`String` to `long` seems unnecessary.
Should simply be able to do:
```
List<Long> devIds = new ArrayList<>();
for (long i = 1; i <= maxDataVolumesSupported; i++) {
devIds.add(i);
}
devIds.remove(3L);
for (VolumeVO vol : vols) {
devIds.remove(vol.getDeviceId());
}
if (devIds.isEmpty()) {
throw new RuntimeException("every available deviceId already in use by 
vm " + vm.getId());
}
deviceId = devIds.iterator().next();
```
Note: my code above includes fixes to two other comments I made further 
down in the code.


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. Changed this limit to a higher value directly in the DB for VMware 
> and tried attaching more than 14 disks. This was failing with the below exception:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


  1   2   >