[GitHub] cloudstack issue #1957: CLOUDSTACK-9748:VPN Users search functionality broke...

2017-02-21 Thread ustcweizhou
Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1957
  
tested. LGTM

btw, you can use "git push --force" to overwrite the code
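
For example (the branch name here is just illustrative):

    git push --force origin CLOUDSTACK-9748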


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: apidocs build failure

2017-02-21 Thread Rohit Yadav
Will, any specific use case? For a live, running CloudStack an apidocs plugin 
could also be written that displays the API docs for a role/user account by 
consuming listApis.
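
Such a plugin would essentially do the same lookup you can already do by hand 
against a running management server, e.g. (illustrative command, using cloudmonkey):

    cloudmonkey list apis name=associateIpAddress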


Regards.


From: williamstev...@gmail.com  on behalf of Will 
Stevens 
Sent: 21 February 2017 20:31:14
To: dev@cloudstack.apache.org
Subject: Re: apidocs build failure

Is there any chance we can fix the 'roles' issue with the API doc so we can
get the docs split into 'Admin', 'Domain Admin' and 'User' again?  The
introduction of dynamic roles broke the per-role generation of the API docs,
and I was not able to figure out how to fix it.  Any ideas for how to fix that?

*Will STEVENS*
Lead Developer



On Tue, Feb 21, 2017 at 3:01 AM, Daan Hoogland 
wrote:

> @Rajani @Rohit I missed this mail and fixed the apidoc job on builds.a.o
> yesterday. I can disable it or remove it if we so wish.
>
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, Utrecht 3531 VE, Netherlands
> @shapeblue
>
>
>
>
> -Original Message-
> From: Rohit Yadav [mailto:rohit.ya...@shapeblue.com]
> Sent: 17 February 2017 10:27
> To: dev@cloudstack.apache.org
> Subject: Re: apidocs build failure
>
> Thanks Rajani, I've no objections.
>
>
> Regards.
>
> 
> From: Rajani Karuturi 
> Sent: 17 February 2017 14:07:34
> To: dev@cloudstack.apache.org
> Subject: Re: apidocs build failure
>
> since travis is already verifying this, I asked infra to disable this job.
>
> Infra ticket https://issues.apache.org/jira/browse/INFRA-13527
>
> Please comment on the ticket if you think otherwise.
>
> Thanks,
>
> ~ Rajani
>
> http://cloudplatform.accelerite.com/
>
> On February 13, 2017 at 12:29 PM, Rohit Yadav
> (rohit.ya...@shapeblue.com) wrote:
>
> Jenkins needs to have JDK 8 available; someone needs to set up jenv on it as
> well.
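>
> For reference, a minimal sketch of that setup on the build node (paths are
> illustrative; this assumes jenv is already installed):
>
> jenv add /usr/lib/jvm/java-1.8.0-openjdk   # register a JDK 8 install with jenv
> jenv local 1.8                             # pin JDK 8 for the job workspace
> javac -version                             # should now report javac 1.8.x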
>
> (The first job in Travis does apidocs/marvin/rat related checks to
> validate changes and apidocs build).
>
> Regards.
>
> 
> From: Rajani Karuturi 
> Sent: 09 February 2017 12:21:40
> To: dev@cloudstack.apache.org
> Subject: apidocs build failure
>
> Hi all,
>
> All the apidocs builds [1] are failing after the recent Java 8 change. Can
> anyone with access fix it? Or should we talk to INFRA about it?
>
> Error message:
>
> [INFO] ------------------------------------------------------------
> [ERROR] COMPILATION ERROR :
> [INFO] ------------------------------------------------------------
> [ERROR] javac: invalid target release: 1.8
> Usage: javac <options> <source files>
> use -help for a list of possible options
>
> [1] https://builds.apache.org/job/cloudstack-apidocs-master/
>
> Thanks
>
> ~ Rajani
>
> http://cloudplatform.accelerite.com/
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com ( http://www.shapeblue.com )
> 53 Chandos Place, Covent Garden, London WC2N 4HS, UK @shapeblue
>
>
>
>
>

rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
  
 



[GitHub] cloudstack issue #1768: CLOUDSTACK 9601: Upgrade: change logic for update pa...

2017-02-21 Thread marcaurele
Github user marcaurele commented on the issue:

https://github.com/apache/cloudstack/pull/1768
  
I'll try to make my point clearer with a better use case. Let's say you were 
running ACS 4.4.2 and wish to upgrade to 4.7.1. After installing 4.7.1, when ACS 
starts for the first time it will execute the SQL scripts in this order (case A):
```
schema-442to450.sql   -> schema-442to450-cleanup.sql
  |  | |
  v  | v
schema-450to451.sql  |   schema-450to451-cleanup.sql
  |  | |
  v  | v
schema-451to452.sql  |   schema-451to452-cleanup.sql
  |  | |
  v  | v
schema-452to460.sql  |   schema-452to460-cleanup.sql
  |  | |
  v  | v
schema-460to461.sql  |   schema-460to461-cleanup.sql
  |  | |
  v  | v
schema-461to470.sql  |   schema-461to470-cleanup.sql
  |  | |
  v  | v
schema-470to471.sql >schema-470to471-cleanup.sql
```

But if you had upgraded to each version, one after the other, you would have 
run those scripts in this order (case B):
```
schema-442to450.sql -> schema-442to450-cleanup.sql
 |
   --
  |
  v
schema-450to451.sql -> schema-450to451-cleanup.sql
 |
   --
  |
  v
schema-451to452.sql -> schema-451to452-cleanup.sql
 |
   --
  |
  v
schema-452to460.sql -> schema-452to460-cleanup.sql
 |
   --
  |
  v
schema-460to461.sql -> schema-460to461-cleanup.sql
 |
   --
  |
  v
schema-461to470.sql -> schema-461to470-cleanup.sql
 |
   --
  |
  v
schema-470to471.sql -> schema-470to471-cleanup.sql
```

Since **case B** is what most developers expect when fixing bugs and making 
changes, but **case A** is the most common production upgrade scenario, I wanted 
to correct the algorithm so that everyone follows the same route (case B).

Most `-cleanup.sql` scripts are either empty or only update the 
`configuration` table, so they are safe. There is only one potentially problematic 
script today: 
https://github.com/apache/cloudstack/blob/master/setup/db/db/schema-481to490-cleanup.sql.
That one changes views, which IMO was a mistake to put in the cleanup script file; 
it should have gone into `schema-481to490.sql` (@rhtyd ?). 
Leaving the mechanism as it is today would leave people exposed to a possible bug 
when upgrading from any version prior to 4.9.0 *if* a future SQL script were 
to change the views modified inside `schema-481to490-cleanup.sql`, because of 
scenario case A. Did I lose anyone there?
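
In code, the case-B ordering I'm proposing boils down to something like this (a 
sketch only -- the runner below is illustrative, not the actual 
`DatabaseUpgradeChecker` API):
```
// Sketch of the proposed ordering (case B): each upgrade script is followed
// immediately by its own cleanup script before moving on to the next step.
import java.util.Arrays;
import java.util.List;

class UpgradePathSketch {
    static void runUpgradePath(List<String> steps) {
        for (String step : steps) {
            runSqlScript("schema-" + step + ".sql");          // e.g. schema-442to450.sql
            runSqlScript("schema-" + step + "-cleanup.sql");  // e.g. schema-442to450-cleanup.sql
        }
    }

    static void runSqlScript(String name) {
        System.out.println("executing " + name);              // stand-in for the real SQL executor
    }

    public static void main(String[] args) {
        runUpgradePath(Arrays.asList("442to450", "450to451", "451to452",
                "452to460", "460to461", "461to470", "470to471"));
    }
}
```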

Any comment @remibergsma @DaanHoogland @syed @nvazquez ?




[GitHub] cloudstack pull request #1959: CLOUDSTACK-9786:API reference guide entry for...

2017-02-21 Thread Ashadeepa
GitHub user Ashadeepa opened a pull request:

https://github.com/apache/cloudstack/pull/1959

CLOUDSTACK-9786:API reference guide entry for associateIpAddress needs 
additional information

Going through the code & implementation, it seems that none of these parameters 
is individually required when calling the API: associateIpAddress.
There are 3 cases for which this API works: 1) networkId, 2) vpcId, 3) 
zoneId. Any one of these can be provided to achieve the same functionality. If 
none of them is provided, an error text is shown.
E.g.
[root@CCP ~]# curl -s 'http://10.66.43.37:8096/client/api?command=associateIpAddress=true' | xmllint --format - -o

errorcode: 431
cserrorcode: 4350
errortext: Unable to figure out zone to assign ip to. Please specify either
zoneId, or networkId, or vpcId in the call

Modify the API reference guide entry with this detail in the "description"
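
For comparison, a working call supplies one of the three parameters, e.g. 
(the network UUID below is a placeholder, not from the original report):

    curl -s 'http://10.66.43.37:8096/client/api?command=associateIpAddress&networkid=<network-uuid>' | xmllint --format -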

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Accelerite/cloudstack CLOUDSTACK-9786

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1959.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1959


commit 030d34dca89621965afa2043a78a165a21adc26e
Author: Ashadeepa Debnath 
Date:   2017-02-21T11:29:02Z

CLOUDSTACK-9786:API reference guide entry for associateIpAddress needs a fix






[GitHub] cloudstack issue #1935: CLOUDSTACK-9764: Delete domain failure due to Accoun...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
Trillian test result (tid-876)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 35807 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1935-t876-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_network.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 864.13 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 320.45 | test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.04 | test_snapshots.py
test_01_vpc_site2site_vpn | Success | 160.52 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 61.11 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 250.72 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 287.25 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 545.04 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 512.25 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1414.74 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 548.99 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1297.58 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.41 | test_volumes.py
test_08_resize_volume | Success | 156.44 | test_volumes.py
test_07_resize_fail | Success | 156.52 | test_volumes.py
test_06_download_detached_volume | Success | 156.34 | test_volumes.py
test_05_detach_volume | Success | 155.91 | test_volumes.py
test_04_delete_attached_volume | Success | 151.44 | test_volumes.py
test_03_download_attached_volume | Success | 151.32 | test_volumes.py
test_02_attach_volume | Success | 95.17 | test_volumes.py
test_01_create_volume | Success | 711.28 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.17 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.78 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 163.76 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 247.75 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.04 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.64 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.25 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.94 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.84 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.87 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.33 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 40.46 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.16 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 40.43 | test_templates.py
test_10_destroy_cpvm | Success | 166.69 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.56 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.57 | test_ssvm.py
test_07_reboot_ssvm | Success | 163.59 | test_ssvm.py
test_06_stop_cpvm | Success | 132.19 | test_ssvm.py
test_05_stop_ssvm | Success | 164.02 | test_ssvm.py
test_04_cpvm_internals | Success | 1.22 | test_ssvm.py
test_03_ssvm_internals | Success | 3.42 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.11 | test_snapshots.py
test_04_change_offering_small | Success | 210.27 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.05 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.18 

[GitHub] cloudstack issue #1768: CLOUDSTACK 9601: Upgrade: change logic for update pa...

2017-02-21 Thread marcaurele
Github user marcaurele commented on the issue:

https://github.com/apache/cloudstack/pull/1768
  
@serg38 Not sure I understand what you mean by:
> If I get it right with your proposed changes, upgrade scripts become 
obsolete since all the changes can be done in upgrade scripts.

Did you mean: *If I get it right with your proposed changes, upgrade 
**cleanup** scripts become obsolete since all the changes can be done in 
upgrade scripts.*?




[GitHub] cloudstack issue #1957: CLOUDSTACK-9748:VPN Users search functionality broke...

2017-02-21 Thread Ashadeepa
Github user Ashadeepa commented on the issue:

https://github.com/apache/cloudstack/pull/1957
  
@rafaelweingartner : This is due to the change in my remote URLs. Closing 
the old PR https://github.com/apache/cloudstack/issues/1910.




[GitHub] cloudstack pull request #1910: CLOUDSTACK-9748:VPN Users search functionalit...

2017-02-21 Thread Ashadeepa
Github user Ashadeepa closed the pull request at:

https://github.com/apache/cloudstack/pull/1910




[GitHub] cloudstack issue #1935: CLOUDSTACK-9764: Delete domain failure due to Accoun...

2017-02-21 Thread nvazquez
Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@rafaelweingartner I think I got your point. I tried to keep the code as 
similar as it was before by declaring `rollBackState` as a static class variable 
on line 114. This way the inner `finally` block works the same as before when 
one of the new methods sets `rollBackState = true`. In the outer `finally` block, 
`rollBackState` is set to false (line 345), so each time `deleteDomain` is invoked 
it starts as false (maybe it would be easier to move that to the beginning of 
`deleteDomain`). Do you agree with this approach?




[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
Trillian test result (tid-877)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 26001 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1953-t877-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 303.85 | test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.03 | test_snapshots.py
test_01_vpc_site2site_vpn | Success | 134.10 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 55.78 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 210.11 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 242.58 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 466.50 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 479.09 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1366.03 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 521.09 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 728.22 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1246.26 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 156.10 | test_volumes.py
test_08_resize_volume | Success | 156.06 | test_volumes.py
test_07_resize_fail | Success | 156.10 | test_volumes.py
test_06_download_detached_volume | Success | 155.99 | test_volumes.py
test_05_detach_volume | Success | 150.56 | test_volumes.py
test_04_delete_attached_volume | Success | 145.93 | test_volumes.py
test_03_download_attached_volume | Success | 155.98 | test_volumes.py
test_02_attach_volume | Success | 83.88 | test_volumes.py
test_01_create_volume | Success | 620.53 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.19 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.62 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 128.64 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 252.01 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.37 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.17 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 35.63 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.08 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 130.68 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.64 | test_vm_life_cycle.py
test_02_start_vm | Success | 5.10 | test_vm_life_cycle.py
test_01_stop_vm | Success | 35.23 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 40.36 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.04 | test_templates.py
test_04_extract_template | Success | 5.10 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.07 | test_templates.py
test_01_create_template | Success | 25.25 | test_templates.py
test_10_destroy_cpvm | Success | 161.28 | test_ssvm.py
test_09_destroy_ssvm | Success | 133.21 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.19 | test_ssvm.py
test_07_reboot_ssvm | Success | 102.98 | test_ssvm.py
test_06_stop_cpvm | Success | 131.35 | test_ssvm.py
test_05_stop_ssvm | Success | 133.05 | test_ssvm.py
test_04_cpvm_internals | Success | 0.98 | test_ssvm.py
test_03_ssvm_internals | Success | 2.85 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.10 | test_ssvm.py
test_01_snapshot_root_disk | Success | 10.83 | test_snapshots.py
test_04_change_offering_small | Success | 204.33 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.04 | test_service_offerings.py
test_01_create_service_offering | Success | 0.08 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.09 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.13 | test_secondary_storage.py
test_09_reboot_router | Success | 35.23 | test_routers.py
test_08_start_router | Success | 25.19 | 

Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Nathan Johnson
Sergey Levitskiy  wrote:

> Actually, minor correction. When adding to VM/templates the name of the  
> detail is rootDiskController for Root controller and dataDiskController  
> for additional disks.
> Also, if you want to make changes on a global scale the changes need to  
> go to vm_template_details and user_vm_details tables respectively.

Thanks!  Very helpful

>
> On 2/21/17, 8:03 PM, "Sergey Levitskiy"   
> wrote:
>
> Here it is the logic.
> 1. Default value is taken from a global configuration 
> vmware.root.disk.controller 
> 2. To override add the same config to template or VM (starting from 4.10 
> UI allows adding advanced settings to templates and/or VMs). If added to a 
> template all VMs deployed from it will inherit this value. If added to VM and 
> then template is created it will also inherits all advanced settings.
>
>
>
>
> On 2/21/17, 7:06 PM, "Nathan Johnson"  wrote:
>
> Sergey Levitskiy  wrote:
>
>> On vmware rootdiskcontroller is passed over to the hypervisor in VM start
>> command. I know for the fact that the following rootdiskcontroller option
>> specified in template/vm details work fine:
>> ide
>> scsi
>> lsilogic
>> lsilogic1068
>>
>> In general, any scsi controller option that vmware recognizes should work.
>>
>> Thanks,
>> Sergey
>
> Thanks Sergey!  So do you happen to know where on the management 
> server
> side the determination is made as to which rootDiskController 
> parameter to
> pass?
>
>
>
>
>> On 2/21/17, 6:13 PM, "Nathan Johnson"  wrote:
>>
>> Wido den Hollander  wrote:
>>
 Op 25 januari 2017 om 4:44 schreef Simon Weller :


 Maybe this is a good opportunity to discuss modernizing the OS
 selections so that drivers (and other features) could be selectable per
 OS.
>>>
>>> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3
>>> then for example it will give you a VirtIO SCSI disk on KVM, anything
>>> previous to that will get VirtIO-blk.
>>
>> So one thing I noticed, there is a possibility of a rootDiskController
>> parameter passed to the Start Command.  So this means that the Management
>> server could control whether to use scsi or virtio, assuming I’m reading
>> this correctly, and we shouldn’t necessarily have to rely on the os type
>> name inside the agent code.  From a quick glance at the vmware code, it
>> looks like maybe they already use this parameter?  It would be great if
>> someone familiar with the vmware code could chime in here.
>>
>> Thanks,
>>
>> Nathan
>>
>>
>>
>>> Wido
>>>
 Thoughts?


 
 From: Syed Ahmed 
 Sent: Tuesday, January 24, 2017 10:46 AM
 To: dev@cloudstack.apache.org
 Cc: Simon Weller
 Subject: Re: Adding VirtIO SCSI to KVM hypervisors

 To maintain backward compatibility we would have to add a config option
 here unfortunately. I do like the idea however. We can make the default
 VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing
 installations.

 On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander
 > wrote:

> Op 21 januari 2017 om 23:50 schreef Wido den Hollander
> >:
>
>
>
>
>> Op 21 jan. 2017 om 22:59 heeft Syed Ahmed
>> > het volgende
>> geschreven:
>>
>> Exposing this via an API would be tricky but it can definitely be
>> added as
>> a cluster-wide or a global setting in my opinion. By enabling that,
>> all the
>> instances would be using VirtIO SCSI. Is there a reason you'd want  
>> some
>> instances to use VirtIIO and others to use VirtIO SCSI?
>
> Even a global setting would be a bit of work and hacky as well.
>
> I do not see any reason to keep VirtIO, it os just that devices will be
> named sdX instead of vdX in the guest.

 To add, the Qemu wiki [0] says:

 "A virtio storage interface for efficient I/O that overcomes virtio-blk
 limitations and supports advanced SCSI hardware."

 At OpenStack [1] they also say:

 "It has been designed to replace virtio-blk, increase it's performance
 and improve scalability."

 So it seems that VirtIO is there to be removed. I'd say switch to VirtIO
 SCSI at version 5.X? :)

 Wido

 [0]: http://wiki.qemu.org/Features/VirtioSCSI
 [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi

> That might break existing Instances when not using labels or UUIDs in
> the Instance when mounting.
>
> Wido
>
>>> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller
>>> 

[GitHub] cloudstack issue #1873: CLOUDSTACK-9709: Updated the vm ip fetch task to use...

2017-02-21 Thread jayapalu
Github user jayapalu commented on the issue:

https://github.com/apache/cloudstack/pull/1873
  
There are no failed test cases in the CI run.




Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Sergey Levitskiy
Actually, a minor correction. When adding this to VMs/templates, the name of the detail 
is rootDiskController for the root disk controller and dataDiskController for additional 
disks.
Also, if you want to make the changes on a global scale, they need to go into the 
vm_template_details and user_vm_details tables respectively.
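
For reference, a rough sketch of what such a template-level detail could look like in 
the database (purely illustrative -- the template id is made up and the column layout 
of vm_template_details should be verified against your schema first):

    -- illustrative only: make template 201 use the scsi root disk controller
    INSERT INTO cloud.vm_template_details (template_id, name, value, display)
    VALUES (201, 'rootDiskController', 'scsi', 1);

The same idea applies to user_vm_details for individual VMs.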

On 2/21/17, 8:03 PM, "Sergey Levitskiy"  wrote:

Here it is the logic.
1. Default value is taken from a global configuration 
vmware.root.disk.controller   
2. To override add the same config to template or VM (starting from 4.10 UI 
allows adding advanced settings to templates and/or VMs). If added to a 
template all VMs deployed from it will inherit this value. If added to VM and 
then template is created it will also inherits all advanced settings.




On 2/21/17, 7:06 PM, "Nathan Johnson"  wrote:

Sergey Levitskiy  wrote:

> On vmware rootdiskcontroller is passed over to the hypervisor in VM 
start  
> command. I know for the fact that the following rootdiskcontroller 
option  
> specified in template/vm details work fine:
> ide
> scsi
> lsilogic
> lsilogic1068
>
> In general, any scsi controller option that vmware recognizes should 
work.
>
> Thanks,
> Sergey

Thanks Sergey!  So do you happen to know where on the management server 
 
side the determination is made as to which rootDiskController parameter 
to  
pass?




>
>
> On 2/21/17, 6:13 PM, "Nathan Johnson"  wrote:
>
> Wido den Hollander  wrote:
>
>>> Op 25 januari 2017 om 4:44 schreef Simon Weller :
>>>
>>>
>>> Maybe this is a good opportunity to discuss modernizing the OS
>>> selections so that drivers (and other features) could be selectable 
per
>>> OS.
>>
>> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3
>> then for example it will give you a VirtIO SCSI disk on KVM, anything
>> previous to that will get VirtIO-blk.
>
> So one thing I noticed, there is a possibility of a 
rootDiskController
> parameter passed to the Start Command.  So this means that the 
Management
> server could control whether to use scsi or virtio, assuming I’m 
reading
> this correctly, and we shouldn’t necessarily have to rely on the 
os type
> name inside the agent code.  From a quick glance at the vmware 
code, it
> looks like maybe they already use this parameter?  It would be 
great if
> someone familiar with the vmware code could chime in here.
>
> Thanks,
>
> Nathan
>
>
>
>> Wido
>>
>>> Thoughts?
>>>
>>>
>>> 
>>> From: Syed Ahmed 
>>> Sent: Tuesday, January 24, 2017 10:46 AM
>>> To: dev@cloudstack.apache.org
>>> Cc: Simon Weller
>>> Subject: Re: Adding VirtIO SCSI to KVM hypervisors
>>>
>>> To maintain backward compatibility we would have to add a config 
option
>>> here unfortunately. I do like the idea however. We can make the 
default
>>> VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing
>>> installations.
>>>
>>> On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander
>>> > wrote:
>>>
 Op 21 januari 2017 om 23:50 schreef Wido den Hollander
 >:




> Op 21 jan. 2017 om 22:59 heeft Syed Ahmed
> > het volgende
> geschreven:
>
> Exposing this via an API would be tricky but it can definitely be
> added as
> a cluster-wide or a global setting in my opinion. By enabling 
that,
> all the
> instances would be using VirtIO SCSI. Is there a reason you'd 
want some
> instances to use VirtIIO and others to use VirtIO SCSI?

 Even a global setting would be a bit of work and hacky as well.

 I do not see any reason to keep VirtIO, it os just that devices 
will be
 named sdX instead of vdX in the guest.
>>>
>>> To add, the Qemu wiki [0] says:
>>>
>>> "A virtio storage interface for efficient I/O that overcomes 
virtio-blk
>>> limitations and supports advanced SCSI hardware."
>>>
>>> 

Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Nathan Johnson
Sergey Levitskiy  wrote:

> Here it is the logic.
> 1. Default value is taken from a global configuration  
> vmware.root.disk.controller   
> 2. To override add the same config to template or VM (starting from 4.10  
> UI allows adding advanced settings to templates and/or VMs). If added to  
> a template all VMs deployed from it will inherit this value. If added to  
> VM and then template is created it will also inherits all advanced  
> settings.
>

Excellent, thanks.  Do you happen to know where this is stored in the  
database?

Thanks again!

>
>
>
> On 2/21/17, 7:06 PM, "Nathan Johnson"  wrote:
>
> Sergey Levitskiy  wrote:
>
>> On vmware rootdiskcontroller is passed over to the hypervisor in VM start
>> command. I know for the fact that the following rootdiskcontroller option
>> specified in template/vm details work fine:
>> ide
>> scsi
>> lsilogic
>> lsilogic1068
>>
>> In general, any scsi controller option that vmware recognizes should work.
>>
>> Thanks,
>> Sergey
>
> Thanks Sergey!  So do you happen to know where on the management server
> side the determination is made as to which rootDiskController parameter to
> pass?
>
>
>
>
>> On 2/21/17, 6:13 PM, "Nathan Johnson"  wrote:
>>
>> Wido den Hollander  wrote:
>>
 Op 25 januari 2017 om 4:44 schreef Simon Weller :


 Maybe this is a good opportunity to discuss modernizing the OS
 selections so that drivers (and other features) could be selectable per
 OS.
>>>
>>> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3
>>> then for example it will give you a VirtIO SCSI disk on KVM, anything
>>> previous to that will get VirtIO-blk.
>>
>> So one thing I noticed, there is a possibility of a rootDiskController
>> parameter passed to the Start Command.  So this means that the Management
>> server could control whether to use scsi or virtio, assuming I’m reading
>> this correctly, and we shouldn’t necessarily have to rely on the os type
>> name inside the agent code.  From a quick glance at the vmware code, it
>> looks like maybe they already use this parameter?  It would be great if
>> someone familiar with the vmware code could chime in here.
>>
>> Thanks,
>>
>> Nathan
>>
>>
>>
>>> Wido
>>>
 Thoughts?


 
 From: Syed Ahmed 
 Sent: Tuesday, January 24, 2017 10:46 AM
 To: dev@cloudstack.apache.org
 Cc: Simon Weller
 Subject: Re: Adding VirtIO SCSI to KVM hypervisors

 To maintain backward compatibility we would have to add a config option
 here unfortunately. I do like the idea however. We can make the default
 VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing
 installations.

 On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander
 > wrote:

> Op 21 januari 2017 om 23:50 schreef Wido den Hollander
> >:
>
>
>
>
>> Op 21 jan. 2017 om 22:59 heeft Syed Ahmed
>> > het volgende
>> geschreven:
>>
>> Exposing this via an API would be tricky but it can definitely be
>> added as
>> a cluster-wide or a global setting in my opinion. By enabling that,
>> all the
>> instances would be using VirtIO SCSI. Is there a reason you'd want  
>> some
>> instances to use VirtIIO and others to use VirtIO SCSI?
>
> Even a global setting would be a bit of work and hacky as well.
>
> I do not see any reason to keep VirtIO, it os just that devices will be
> named sdX instead of vdX in the guest.

 To add, the Qemu wiki [0] says:

 "A virtio storage interface for efficient I/O that overcomes virtio-blk
 limitations and supports advanced SCSI hardware."

 At OpenStack [1] they also say:

 "It has been designed to replace virtio-blk, increase it's performance
 and improve scalability."

 So it seems that VirtIO is there to be removed. I'd say switch to VirtIO
 SCSI at version 5.X? :)

 Wido

 [0]: http://wiki.qemu.org/Features/VirtioSCSI
 [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi

> That might break existing Instances when not using labels or UUIDs in
> the Instance when mounting.
>
> Wido
>
>>> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller
>>> > wrote:
>>>
>>> For the record, we've been looking into this as well.
>>> Has anyone tried it with Windows VMs before? The standard virtio
>>> driver
>>> doesn't support spanned disks and that's something we'd really like  
>>> to
>>> enable for our customers.
>>>
>>>
>>>
>>> 

Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Sergey Levitskiy
Here is the logic.
1. The default value is taken from the global configuration setting 
vmware.root.disk.controller.
2. To override it, add the same setting to the template or VM (starting from 4.10 the UI 
allows adding advanced settings to templates and/or VMs). If added to a 
template, all VMs deployed from it will inherit this value. If added to a VM and 
a template is then created from it, the template will also inherit all advanced settings.




On 2/21/17, 7:06 PM, "Nathan Johnson"  wrote:

Sergey Levitskiy  wrote:

> On vmware rootdiskcontroller is passed over to the hypervisor in VM start 
 
> command. I know for the fact that the following rootdiskcontroller option 
 
> specified in template/vm details work fine:
> ide
> scsi
> lsilogic
> lsilogic1068
>
> In general, any scsi controller option that vmware recognizes should work.
>
> Thanks,
> Sergey

Thanks Sergey!  So do you happen to know where on the management server  
side the determination is made as to which rootDiskController parameter to  
pass?




>
>
> On 2/21/17, 6:13 PM, "Nathan Johnson"  wrote:
>
> Wido den Hollander  wrote:
>
>>> Op 25 januari 2017 om 4:44 schreef Simon Weller :
>>>
>>>
>>> Maybe this is a good opportunity to discuss modernizing the OS
>>> selections so that drivers (and other features) could be selectable per
>>> OS.
>>
>> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3
>> then for example it will give you a VirtIO SCSI disk on KVM, anything
>> previous to that will get VirtIO-blk.
>
> So one thing I noticed, there is a possibility of a rootDiskController
> parameter passed to the Start Command.  So this means that the 
Management
> server could control whether to use scsi or virtio, assuming I’m 
reading
> this correctly, and we shouldn’t necessarily have to rely on the os 
type
> name inside the agent code.  From a quick glance at the vmware code, 
it
> looks like maybe they already use this parameter?  It would be great 
if
> someone familiar with the vmware code could chime in here.
>
> Thanks,
>
> Nathan
>
>
>
>> Wido
>>
>>> Thoughts?
>>>
>>>
>>> 
>>> From: Syed Ahmed 
>>> Sent: Tuesday, January 24, 2017 10:46 AM
>>> To: dev@cloudstack.apache.org
>>> Cc: Simon Weller
>>> Subject: Re: Adding VirtIO SCSI to KVM hypervisors
>>>
>>> To maintain backward compatibility we would have to add a config option
>>> here unfortunately. I do like the idea however. We can make the default
>>> VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing
>>> installations.
>>>
>>> On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander
>>> > wrote:
>>>
 Op 21 januari 2017 om 23:50 schreef Wido den Hollander
 >:




> Op 21 jan. 2017 om 22:59 heeft Syed Ahmed
> > het volgende
> geschreven:
>
> Exposing this via an API would be tricky but it can definitely be
> added as
> a cluster-wide or a global setting in my opinion. By enabling that,
> all the
> instances would be using VirtIO SCSI. Is there a reason you'd want 
some
> instances to use VirtIIO and others to use VirtIO SCSI?

 Even a global setting would be a bit of work and hacky as well.

 I do not see any reason to keep VirtIO, it os just that devices will be
 named sdX instead of vdX in the guest.
>>>
>>> To add, the Qemu wiki [0] says:
>>>
>>> "A virtio storage interface for efficient I/O that overcomes virtio-blk
>>> limitations and supports advanced SCSI hardware."
>>>
>>> At OpenStack [1] they also say:
>>>
>>> "It has been designed to replace virtio-blk, increase it's performance
>>> and improve scalability."
>>>
>>> So it seems that VirtIO is there to be removed. I'd say switch to VirtIO
>>> SCSI at version 5.X? :)
>>>
>>> Wido
>>>
>>> [0]: http://wiki.qemu.org/Features/VirtioSCSI
>>> [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi
>>>
 That might break existing Instances when not using labels or UUIDs in
 the Instance when mounting.

 Wido

>> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller
>> > wrote:
>>
>> For the record, we've been looking into this as well.
>> Has anyone tried it with Windows VMs before? The 

Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Nathan Johnson
Sergey Levitskiy  wrote:

> On vmware rootdiskcontroller is passed over to the hypervisor in VM start  
> command. I know for the fact that the following rootdiskcontroller option  
> specified in template/vm details work fine:
> ide
> scsi
> lsilogic
> lsilogic1068
>
> In general, any scsi controller option that vmware recognizes should work.
>
> Thanks,
> Sergey

Thanks Sergey!  So do you happen to know where on the management server  
side the determination is made as to which rootDiskController parameter to  
pass?




>
>
> On 2/21/17, 6:13 PM, "Nathan Johnson"  wrote:
>
> Wido den Hollander  wrote:
>
>>> Op 25 januari 2017 om 4:44 schreef Simon Weller :
>>>
>>>
>>> Maybe this is a good opportunity to discuss modernizing the OS
>>> selections so that drivers (and other features) could be selectable per
>>> OS.
>>
>> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3
>> then for example it will give you a VirtIO SCSI disk on KVM, anything
>> previous to that will get VirtIO-blk.
>
> So one thing I noticed, there is a possibility of a rootDiskController
> parameter passed to the Start Command.  So this means that the Management
> server could control whether to use scsi or virtio, assuming I’m reading
> this correctly, and we shouldn’t necessarily have to rely on the os type
> name inside the agent code.  From a quick glance at the vmware code, it
> looks like maybe they already use this parameter?  It would be great if
> someone familiar with the vmware code could chime in here.
>
> Thanks,
>
> Nathan
>
>
>
>> Wido
>>
>>> Thoughts?
>>>
>>>
>>> 
>>> From: Syed Ahmed 
>>> Sent: Tuesday, January 24, 2017 10:46 AM
>>> To: dev@cloudstack.apache.org
>>> Cc: Simon Weller
>>> Subject: Re: Adding VirtIO SCSI to KVM hypervisors
>>>
>>> To maintain backward compatibility we would have to add a config option
>>> here unfortunately. I do like the idea however. We can make the default
>>> VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing
>>> installations.
>>>
>>> On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander
>>> > wrote:
>>>
 Op 21 januari 2017 om 23:50 schreef Wido den Hollander
 >:




> Op 21 jan. 2017 om 22:59 heeft Syed Ahmed
> > het volgende
> geschreven:
>
> Exposing this via an API would be tricky but it can definitely be
> added as
> a cluster-wide or a global setting in my opinion. By enabling that,
> all the
> instances would be using VirtIO SCSI. Is there a reason you'd want some
> instances to use VirtIIO and others to use VirtIO SCSI?

 Even a global setting would be a bit of work and hacky as well.

 I do not see any reason to keep VirtIO, it os just that devices will be
 named sdX instead of vdX in the guest.
>>>
>>> To add, the Qemu wiki [0] says:
>>>
>>> "A virtio storage interface for efficient I/O that overcomes virtio-blk
>>> limitations and supports advanced SCSI hardware."
>>>
>>> At OpenStack [1] they also say:
>>>
>>> "It has been designed to replace virtio-blk, increase it's performance
>>> and improve scalability."
>>>
>>> So it seems that VirtIO is there to be removed. I'd say switch to VirtIO
>>> SCSI at version 5.X? :)
>>>
>>> Wido
>>>
>>> [0]: http://wiki.qemu.org/Features/VirtioSCSI
>>> [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi
>>>
 That might break existing Instances when not using labels or UUIDs in
 the Instance when mounting.

 Wido

>> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller
>> > wrote:
>>
>> For the record, we've been looking into this as well.
>> Has anyone tried it with Windows VMs before? The standard virtio  
>> driver
>> doesn't support spanned disks and that's something we'd really like to
>> enable for our customers.
>>
>>
>>
>> Simon Weller / 615-312-6068
>>
>>
>> -Original Message-
>> *From:* Wido den Hollander [w...@widodh.nl]
>> *Received:* Saturday, 21 Jan 2017, 2:56PM
>> *To:* Syed Ahmed [sah...@cloudops.com];
>> dev@cloudstack.apache.org [
>> dev@cloudstack.apache.org]
>> *Subject:* Re: Adding VirtIO SCSI to KVM hypervisors
>>
>>
>>> Op 21 januari 2017 om 16:15 schreef Syed Ahmed
>>> >:
>>>
>>>
>>> Wido,
>>>
>>> Were you thinking of adding this as a global setting? I can see why  
>>> it
>> will
>>> be useful. I'm happy to review any ideas you might 

Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Sergey Levitskiy
On VMware, rootdiskcontroller is passed over to the hypervisor in the VM start 
command. I know for a fact that the following rootdiskcontroller options 
specified in template/VM details work fine:
ide
scsi
lsilogic
lsilogic1068

In general, any SCSI controller option that VMware recognizes should work.

Thanks,
Sergey


On 2/21/17, 6:13 PM, "Nathan Johnson"  wrote:

Wido den Hollander  wrote:

>
>> Op 25 januari 2017 om 4:44 schreef Simon Weller :
>>
>>
>> Maybe this is a good opportunity to discuss modernizing the OS  
>> selections so that drivers (and other features) could be selectable per  
>> OS.
>
> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3  
> then for example it will give you a VirtIO SCSI disk on KVM, anything  
> previous to that will get VirtIO-blk.

So one thing I noticed, there is a possibility of a rootDiskController  
parameter passed to the Start Command.  So this means that the Management  
server could control whether to use scsi or virtio, assuming I’m reading  
this correctly, and we shouldn’t necessarily have to rely on the os type  
name inside the agent code.  From a quick glance at the vmware code, it  
looks like maybe they already use this parameter?  It would be great if  
someone familiar with the vmware code could chime in here.

Thanks,

Nathan



>
> Wido
>
>> Thoughts?
>>
>>
>> 
>> From: Syed Ahmed 
>> Sent: Tuesday, January 24, 2017 10:46 AM
>> To: dev@cloudstack.apache.org
>> Cc: Simon Weller
>> Subject: Re: Adding VirtIO SCSI to KVM hypervisors
>>
>> To maintain backward compatibility we would have to add a config option  
>> here unfortunately. I do like the idea however. We can make the default  
>> VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing  
>> installations.
>>
>> On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander  
>> > wrote:
>>
>>> Op 21 januari 2017 om 23:50 schreef Wido den Hollander  
>>> >:
>>>
>>>
>>>
>>>
 Op 21 jan. 2017 om 22:59 heeft Syed Ahmed  
 > het volgende  
 geschreven:

 Exposing this via an API would be tricky but it can definitely be  
 added as
 a cluster-wide or a global setting in my opinion. By enabling that,  
 all the
 instances would be using VirtIO SCSI. Is there a reason you'd want some
 instances to use VirtIIO and others to use VirtIO SCSI?
>>>
>>> Even a global setting would be a bit of work and hacky as well.
>>>
>>> I do not see any reason to keep VirtIO, it os just that devices will be 
 
>>> named sdX instead of vdX in the guest.
>>
>> To add, the Qemu wiki [0] says:
>>
>> "A virtio storage interface for efficient I/O that overcomes virtio-blk  
>> limitations and supports advanced SCSI hardware."
>>
>> At OpenStack [1] they also say:
>>
>> "It has been designed to replace virtio-blk, increase it's performance  
>> and improve scalability."
>>
>> So it seems that VirtIO is there to be removed. I'd say switch to VirtIO 
 
>> SCSI at version 5.X? :)
>>
>> Wido
>>
>> [0]: http://wiki.qemu.org/Features/VirtioSCSI
>> [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi
>>
>>> That might break existing Instances when not using labels or UUIDs in  
>>> the Instance when mounting.
>>>
>>> Wido
>>>
> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller  
> > wrote:
>
> For the record, we've been looking into this as well.
> Has anyone tried it with Windows VMs before? The standard virtio 
driver
> doesn't support spanned disks and that's something we'd really like to
> enable for our customers.
>
>
>
> Simon Weller / 615-312-6068
>
>
> -Original Message-
> *From:* Wido den Hollander [w...@widodh.nl]
> *Received:* Saturday, 21 Jan 2017, 2:56PM
> *To:* Syed Ahmed [sah...@cloudops.com];  
> dev@cloudstack.apache.org [
> dev@cloudstack.apache.org]
> *Subject:* Re: Adding VirtIO SCSI to KVM hypervisors
>
>
>> Op 21 januari 2017 om 16:15 schreef Syed Ahmed  
>> >:
>>
>>
>> Wido,
>>
>> Were you thinking of adding 

Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Nathan Johnson
Wido den Hollander  wrote:

>
>> Op 25 januari 2017 om 4:44 schreef Simon Weller :
>>
>>
>> Maybe this is a good opportunity to discuss modernizing the OS  
>> selections so that drivers (and other features) could be selectable per  
>> OS.
>
> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3  
> then for example it will give you a VirtIO SCSI disk on KVM, anything  
> previous to that will get VirtIO-blk.

So one thing I noticed, there is a possibility of a rootDiskController  
parameter passed to the Start Command.  So this means that the Management  
server could control whether to use scsi or virtio, assuming I’m reading  
this correctly, and we shouldn’t necessarily have to rely on the os type  
name inside the agent code.  From a quick glance at the vmware code, it  
looks like maybe they already use this parameter?  It would be great if  
someone familiar with the vmware code could chime in here.

Thanks,

Nathan



>
> Wido
>
>> Thoughts?
>>
>>
>> 
>> From: Syed Ahmed 
>> Sent: Tuesday, January 24, 2017 10:46 AM
>> To: dev@cloudstack.apache.org
>> Cc: Simon Weller
>> Subject: Re: Adding VirtIO SCSI to KVM hypervisors
>>
>> To maintain backward compatibility we would have to add a config option  
>> here unfortunately. I do like the idea however. We can make the default  
>> VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing  
>> installations.
>>
>> On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander  
>> > wrote:
>>
>>> Op 21 januari 2017 om 23:50 schreef Wido den Hollander  
>>> >:
>>>
>>>
>>>
>>>
 Op 21 jan. 2017 om 22:59 heeft Syed Ahmed  
 > het volgende  
 geschreven:

 Exposing this via an API would be tricky but it can definitely be  
 added as
 a cluster-wide or a global setting in my opinion. By enabling that,  
 all the
 instances would be using VirtIO SCSI. Is there a reason you'd want some
 instances to use VirtIIO and others to use VirtIO SCSI?
>>>
>>> Even a global setting would be a bit of work and hacky as well.
>>>
>>> I do not see any reason to keep VirtIO, it os just that devices will be  
>>> named sdX instead of vdX in the guest.
>>
>> To add, the Qemu wiki [0] says:
>>
>> "A virtio storage interface for efficient I/O that overcomes virtio-blk  
>> limitations and supports advanced SCSI hardware."
>>
>> At OpenStack [1] they also say:
>>
>> "It has been designed to replace virtio-blk, increase it's performance  
>> and improve scalability."
>>
>> So it seems that VirtIO is there to be removed. I'd say switch to VirtIO  
>> SCSI at version 5.X? :)
>>
>> Wido
>>
>> [0]: http://wiki.qemu.org/Features/VirtioSCSI
>> [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi
>>
>>> That might break existing Instances when not using labels or UUIDs in  
>>> the Instance when mounting.
>>>
>>> Wido
>>>
> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller  
> > wrote:
>
> For the record, we've been looking into this as well.
> Has anyone tried it with Windows VMs before? The standard virtio driver
> doesn't support spanned disks and that's something we'd really like to
> enable for our customers.
>
>
>
> Simon Weller / 615-312-6068
>
>
> -Original Message-
> *From:* Wido den Hollander [w...@widodh.nl]
> *Received:* Saturday, 21 Jan 2017, 2:56PM
> *To:* Syed Ahmed [sah...@cloudops.com];  
> dev@cloudstack.apache.org [
> dev@cloudstack.apache.org]
> *Subject:* Re: Adding VirtIO SCSI to KVM hypervisors
>
>
>> Op 21 januari 2017 om 16:15 schreef Syed Ahmed  
>> >:
>>
>>
>> Wido,
>>
>> Were you thinking of adding this as a global setting? I can see why it
> will
>> be useful. I'm happy to review any ideas you might have around this.
>
> Well, not really. We don't have any structure for this in place right  
> now
> to define what type of driver/disk we present to a guest.
>
> See my answer below.
>
>> Thanks,
>> -Syed
>> On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak  
>> >
>> wrote:
>>
>>> Hi Wido,
>>>
>>> If I understand correctly from the documentation and your examples,
> virtio
>>> provides virtio interface to the guest while virtio-scsi provides  
>>> scsi
>>> interface, therefore an IaaS service should not replace it without  
>>> user
>>> request / approval. It would be probably better to let the user set
> what

[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

2017-02-21 Thread sateesh-chodapuneedi
Github user sateesh-chodapuneedi commented on the issue:

https://github.com/apache/cloudstack/pull/1861
  
ping @karuturi @koushik-das 




[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread HrWiggles
Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102342529
  
--- Diff: 
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java 
---
@@ -584,18 +584,36 @@ public void defFileBasedDisk(String filePath, String 
diskLabel, DiskBus bus, Dis
 
 /* skip iso label */
 private String getDevLabel(int devId, DiskBus bus) {
--- End diff --

Would be great to have unit tests for either `getDevLabel(int devId, 
DiskBus bus)` or `getDevLabelSuffix(int deviceIndex)`, especially to test the 
expected results when `devId` (or `deviceIndex`) is high enough to return 
a double-letter device label suffix.
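
A rough sketch of such a test (this assumes `getDevLabelSuffix(int)` ends up 
static and package-visible, and that the suffix sequence is spreadsheet-style 
a..z, aa, ab, ... -- both are assumptions about the final implementation):
```
// Sketch only: visibility, signature and the exact expected suffixes are
// assumptions about how the PR implements getDevLabelSuffix.
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LibvirtVMDefDevLabelSuffixTest {
    @Test
    public void suffixRollsOverToDoubleLetters() {
        assertEquals("a", LibvirtVMDef.getDevLabelSuffix(0));
        assertEquals("z", LibvirtVMDef.getDevLabelSuffix(25));
        assertEquals("aa", LibvirtVMDef.getDevLabelSuffix(26));
        assertEquals("ab", LibvirtVMDef.getDevLabelSuffix(27));
    }
}
```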




[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread HrWiggles
Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102336557
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+List vols = _volsDao.findByInstance(vm.getId());
 if (deviceId != null) {
-if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-throw new RuntimeException("deviceId should be 1,2,4-15");
+if (deviceId.longValue() > maxDataVolumesSupported || 
deviceId.longValue() == 3) {
+throw new RuntimeException("deviceId should be 1,2,4-" + 
maxDataVolumesSupported);
 }
 for (VolumeVO vol : vols) {
 if (vol.getDeviceId().equals(deviceId)) {
-throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vmId);
+throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vm.getId());
 }
 }
 } else {
 // allocate deviceId here
 List devIds = new ArrayList();
-for (int i = 1; i < 15; i++) {
+for (int i = 1; i < maxDataVolumesSupported; i++) {
--- End diff --

@sureshanaparti If the condition really should be `i < 
maxDataVolumesSupported` (which would make the maximum device id returned by 
the method be `maxDataVolumesSupported - 1`), then the check + error message 
above
```
if (deviceId.longValue() <= 0 || deviceId.longValue() > 
maxDataVolumesSupported || deviceId.longValue() == 3) {
throw new RuntimeException("deviceId should be 1,2,4-" + 
maxDataVolumesSupported);
```
need to be changed so as not to include the value of 
`maxDataVolumesSupported` itself.
Otherwise, when `maxDataVolumesSupported` has value `6` (for example), the 
method would not ever return `6` when parameter `deviceId` is specified as 
`null` but would return `6` when parameter `deviceId` is specified as `6` 
(assuming device id `6` is not already in use).  Also, the error message would 
state "deviceId should be 1,2,4-6" whenever parameter `deviceId` would be 
specified as an invalid value, which would not be correct (as `5` should be the 
highest valid device id).
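
In other words, if the loop stays `i < maxDataVolumesSupported`, the explicit check 
would need to look roughly like this (a sketch of the boundary only, not a drop-in 
patch):
```
// Sketch: keep the explicit-deviceId validation consistent with the allocation
// loop, whose highest assignable id is maxDataVolumesSupported - 1.
if (deviceId.longValue() <= 0 || deviceId.longValue() >= maxDataVolumesSupported || deviceId.longValue() == 3) {
    throw new RuntimeException("deviceId should be 1,2,4-" + (maxDataVolumesSupported - 1));
}
```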




[GitHub] cloudstack issue #1875: CLOUDSTACK-8608: [VMware] System VMs failed to start...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1875
  
@sateesh-chodapuneedi @rhtyd  Please review the changes.




[GitHub] cloudstack issue #1875: CLOUDSTACK-8608: [VMware] System VMs failed to start...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1875
  
@rhtyd Thanks for running the tests. The test failures/errors above are also 
occurring in other PRs and are not related to the changes in this PR.




[GitHub] cloudstack issue #1954: CLOUDSTACK-9795: moved logrotate from cron.daily to ...

2017-02-21 Thread dmabry
Github user dmabry commented on the issue:

https://github.com/apache/cloudstack/pull/1954
  
tag:mergeready



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1878: CLOUDSTACK-9717: [VMware] RVRs have mismatching MAC ...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1878
  
@remibergsma Same MAC for RVR peers was re-introduced as part of 
[CLOUDSTACK-985](https://issues.apache.org/jira/browse/CLOUDSTACK-985). It 
confirms that peer NICs of RVRs should have the same MAC addresses. Only the default 
public NIC was configured with the same MAC. On VMware, there exist additional 
public NICs which were not configured with matching MAC addresses.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1915: CLOUDSTACK-9746 system-vm: logrotate config causes c...

2017-02-21 Thread dmabry
Github user dmabry commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
@serbaut Can you do a force push to kick off Jenkins again?  I'm guessing 
Jenkins just had an issue and not the PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1935: CLOUDSTACK-9764: Delete domain failure due to Accoun...

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@nvazquez great work.
However, there is a catch here that I think you might have overlooked. 
This problem is caused by the method extraction I suggested.

If you take a look at the code before the extraction, every time an 
exception was thrown, the code set the variable `rollBackState = true`. 
This happens at lines 287, 305, and 313. Now that the code has been extracted, 
setting those variables to `true` no longer works, because the context 
in which those variables are declared has changed.

In my opinion, this code was already a bit odd. It throws an 
exception that is caught right away, only to set a control variable that the 
`finally` block then acts on. The only reason I can see for this is that, if 
exceptions other than the ones generated at lines 292, 310, and 325 happen, 
we do not want to execute the rollback for them. However, this seems error 
prone and can lead to database inconsistencies.

I would move the "rollback" code (lines 342-345) into the catch block.

I do not know if I have been clear; we can discuss this further. I may have 
overlooked some bits of it as well (it is a quite complicated piece of code).
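
For illustration, a minimal sketch of the shape I mean (plain Java with invented placeholder method names, not the actual code of this class): the rollback runs from a `catch` block, so it executes exactly when one of the steps fails, instead of setting a `rollBackState` flag for a `finally` block to inspect.

```java
// Sketch only - placeholder method names, not the real implementation.
class DomainCleanupSketch {
    void deleteDomain(long domainId) {
        try {
            cleanupAccounts(domainId);   // may throw
            cleanupNetworks(domainId);   // may throw
            markRemoved(domainId);       // may throw
        } catch (RuntimeException e) {
            rollbackDomainState(domainId);  // runs only when a step failed
            throw e;
        }
    }

    void cleanupAccounts(long domainId) { /* placeholder */ }
    void cleanupNetworks(long domainId) { /* placeholder */ }
    void markRemoved(long domainId) { /* placeholder */ }
    void rollbackDomainState(long domainId) { /* placeholder */ }
}
```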



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1879: CLOUDSTACK-9719: [VMware] VR loses DHCP settings and...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1879
  
@sateesh-chodapuneedi @rhtyd  Please review the code changes.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1879: CLOUDSTACK-9719: [VMware] VR loses DHCP settings and...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1879
  
@rhtyd Thanks for running these tests. The failures/errors are not related 
to the changes in this PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1773: CLOUDSTACK-9607: Preventing template deletion when t...

2017-02-21 Thread serg38
Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1773
  
@jburwell That has been the default behavior for a few years: deletion of the 
template is allowed even if active VMs exist. Deleting the template on secondary 
storage doesn't remove the template copy on primary storage, so all existing VM 
functions keep working just fine. 
From my perspective, if we allow forced deletion from the UI, I am fine with 
switching the default to forced=no and documenting it in the Release Notes. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1935: CLOUDSTACK-9764: Delete domain failure due to Accoun...

2017-02-21 Thread nvazquez
Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@rafaelweingartner thanks for reviewing! I extracted code to new methods 
and also added unit tests for them


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1897: CLOUDSTACK-9733: Concurrent volume snapshots of a VM...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1897
  
@koushik-das @kishankavala  Please review the changes.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-02-21 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@blueorangutan test


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1897: CLOUDSTACK-9733: Concurrent volume snapshots of a VM...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1897
  
@ramkatru Checked and addressed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-521


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1768: CLOUDSTACK 9601: Upgrade: change logic for update pa...

2017-02-21 Thread serg38
Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1768
  
@marcaurele I tend to agree with @rhtyd that it could break some of the 
upgrades. If I get it right with your proposed changes, upgrade scripts become 
obsolete since all the changes can be done in upgrade scripts.



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1935: CLOUDSTACK-9764: Delete domain failure due to Accoun...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-02-21 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@blueorangutan package


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1935: CLOUDSTACK-9764: Delete domain failure due to Accoun...

2017-02-21 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@blueorangutan test


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1956: CLOUDSTACK-9796 - Fix NPE in VirtualMachineMa...

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102310842
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

I thank you


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1875: CLOUDSTACK-8608: [VMware] System VMs failed to start...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1875
  
Trillian test result (tid-867)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 48400 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1875-t867-vmware-55u3.zip
Intermitten failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermitten failure detected: 
/marvin/tests/smoke/test_routers_network_ops.py
Intermitten failure detected: /marvin/tests/smoke/test_snapshots.py
Intermitten failure detected: /marvin/tests/smoke/test_vm_life_cycle.py
Intermitten failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Intermitten failure detected: /marvin/tests/smoke/test_vpc_vpn.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 872.15 | 
test_privategw_acl.py
test_02_redundant_VPC_default_routes | `Error` | 237.39 | 
test_vpc_redundant.py
test_02_list_snapshots_with_removed_data_store | `Error` | 75.67 | 
test_snapshots.py
test_02_list_snapshots_with_removed_data_store | `Error` | 80.75 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 375.63 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 186.40 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 596.74 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 368.38 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 750.75 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 665.98 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1527.05 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 691.66 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1373.64 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 20.62 | test_volumes.py
test_06_download_detached_volume | Success | 70.58 | test_volumes.py
test_05_detach_volume | Success | 100.22 | test_volumes.py
test_04_delete_attached_volume | Success | 15.17 | test_volumes.py
test_03_download_attached_volume | Success | 25.26 | test_volumes.py
test_02_attach_volume | Success | 58.86 | test_volumes.py
test_01_create_volume | Success | 519.45 | test_volumes.py
test_change_service_offering_for_vm_with_snapshots | Success | 534.17 | 
test_vm_snapshots.py
test_03_delete_vm_snapshots | Success | 275.17 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 227.10 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 191.26 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.62 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 216.81 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.68 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.18 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 101.03 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.06 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.14 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.10 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.11 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 241.44 | test_templates.py
test_08_list_system_templates | Success | 0.02 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.04 | test_templates.py
test_04_extract_template | Success | 10.16 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 120.87 | test_templates.py
test_10_destroy_cpvm | Success | 236.61 | test_ssvm.py
test_09_destroy_ssvm | Success | 268.53 | test_ssvm.py
test_08_reboot_cpvm | Success | 366.63 | test_ssvm.py
test_07_reboot_ssvm | Success | 308.41 | test_ssvm.py
test_06_stop_cpvm | Success | 176.51 | test_ssvm.py
test_05_stop_ssvm | Success | 173.30 | test_ssvm.py
test_04_cpvm_internals | Success | 1.02 | test_ssvm.py
test_03_ssvm_internals | Success | 3.32 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.09 | test_ssvm.py
test_01_snapshot_root_disk | Success | 66.15 | test_snapshots.py
test_04_change_offering_small | Success | 92.44 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | 

[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@borisstoyanov Can you kick off a Jenkins job on this PR?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102311697
  
--- Diff: 
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java 
---
@@ -716,11 +734,6 @@ public DiskFmtType getDiskFormatType() {
 return _diskFmtType;
 }
 
--- End diff --

Removed unused method _getDiskSeq()_.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@remibergsma @borisstoyanov Updated the KVM code to generate valid 
device names for device ids above 25.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1956: CLOUDSTACK-9796 - Fix NPE in VirtualMachineMa...

2017-02-21 Thread nathanejohnson
Github user nathanejohnson commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102310032
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

Updated, thanks for the input


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102309898
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
--- End diff --

@rafaelweingartner when getMaxDataVolumesSupported(vm) is configured as 6 
for the VM's hypervisor, the VM can have at most 6 devices: 1 root disk (id 0), 1 
CD-ROM (id 3) and the other 4 for extra data disks/volumes.
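
As a small illustration of that assumed layout (toy code, not actual CloudStack code):

```java
// Toy example: with a 6-device limit, ids 0 and 3 are reserved,
// leaving ids 1, 2, 4 and 5 for data volumes.
public class DeviceLayoutExample {
    public static void main(String[] args) {
        int maxDevices = 6;
        for (int id = 0; id < maxDevices; id++) {
            if (id == 0) {
                System.out.println(id + " -> root disk");
            } else if (id == 3) {
                System.out.println(id + " -> CD-ROM (reserved)");
            } else {
                System.out.println(id + " -> data volume slot");
            }
        }
    }
}
```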


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1956: CLOUDSTACK-9796 - Fix NPE in VirtualMachineMa...

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102309517
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

Yes, I think this is more readable.



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1956: CLOUDSTACK-9796 - Fix NPE in VirtualMachineMa...

2017-02-21 Thread nathanejohnson
Github user nathanejohnson commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102309146
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

Do you think something like:

if (!result && work != null) {

would be better?  Even if work.getStep() did return a null, that should 
have the same effect as before.  Maybe it would be more readable too?
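
For reference, a simplified sketch of how the method could then look (only a fragment mirroring the diff above plus the existing revert in the finally block, not everything the real method does):

```java
// Simplified sketch: guard both the step update and the revert on work being
// non-null, and only revert the step when the transition did not succeed.
protected <T extends VMInstanceVO> boolean changeState(final T vm, final Event event,
        final Long hostId, final ItWorkVO work, final Step step) throws NoTransitionException {
    Step previousStep = null;
    if (work != null) {
        previousStep = work.getStep();
        _workDao.updateStep(work, step);
    }

    boolean result = false;
    try {
        result = stateTransitTo(vm, event, hostId);
        return result;
    } finally {
        if (!result && work != null) {
            _workDao.updateStep(work, previousStep);  // roll the work item back
        }
    }
}
```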


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1879: CLOUDSTACK-9719: [VMware] VR loses DHCP settings and...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1879
  
Trillian test result (tid-869)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 46839 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1879-t869-vmware-55u3.zip
Intermitten failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermitten failure detected: 
/marvin/tests/smoke/test_routers_network_ops.py
Intermitten failure detected: /marvin/tests/smoke/test_snapshots.py
Intermitten failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Intermitten failure detected: /marvin/tests/smoke/test_vpc_vpn.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 880.24 | 
test_privategw_acl.py
test_01_vpc_privategw_acl | `Failure` | 116.61 | test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 116.23 | 
test_snapshots.py
test_02_list_snapshots_with_removed_data_store | `Error` | 121.33 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 381.80 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 161.82 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 613.06 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 360.18 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 732.40 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 666.99 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1538.86 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 740.52 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 647.18 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1381.47 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 25.77 | test_volumes.py
test_06_download_detached_volume | Success | 60.63 | test_volumes.py
test_05_detach_volume | Success | 100.29 | test_volumes.py
test_04_delete_attached_volume | Success | 10.20 | test_volumes.py
test_03_download_attached_volume | Success | 15.50 | test_volumes.py
test_02_attach_volume | Success | 63.80 | test_volumes.py
test_01_create_volume | Success | 509.73 | test_volumes.py
test_change_service_offering_for_vm_with_snapshots | Success | 564.79 | 
test_vm_snapshots.py
test_03_delete_vm_snapshots | Success | 275.20 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 232.23 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 332.79 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.70 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 263.25 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.04 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 27.11 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.27 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 81.16 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 5.13 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.15 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.25 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.18 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 226.89 | test_templates.py
test_08_list_system_templates | Success | 0.04 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.09 | test_templates.py
test_04_extract_template | Success | 10.46 | test_templates.py
test_03_delete_template | Success | 5.13 | test_templates.py
test_02_edit_template | Success | 90.19 | test_templates.py
test_01_create_template | Success | 126.04 | test_templates.py
test_10_destroy_cpvm | Success | 236.84 | test_ssvm.py
test_09_destroy_ssvm | Success | 269.03 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.81 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.59 | test_ssvm.py
test_06_stop_cpvm | Success | 206.88 | test_ssvm.py
test_05_stop_ssvm | Success | 183.88 | test_ssvm.py
test_04_cpvm_internals | Success | 1.20 | test_ssvm.py
test_03_ssvm_internals | Success | 3.47 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.16 | test_ssvm.py
test_01_snapshot_root_disk | Success | 26.20 | test_snapshots.py
test_04_change_offering_small | Success | 97.25 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | 

[GitHub] cloudstack pull request #1956: CLOUDSTACK-9796 - Fix NPE in VirtualMachineMa...

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102308166
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

I am not asking to make a distinction between exception or no exception. What I 
tried to say is that, if the intent of the `finally` block were only to 
revert the step to the previous state when exceptions occur, we could do that 
with a `catch` block. I think the `finally` here is meant to revert the 
work step even when an exception does not happen, for instance when 
`stateTransitTo` returns `false`.

I think you already answered my doubt when you said that 
`previousStep` is most likely never `null`. I thought we could have cases 
where `previousStep == null`, and then, if `stateTransitTo` returns false, 
with the newly added check at line 757 we would not update the step back to 
`null` for those cases.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1956: CLOUDSTACK-9796 - Fix NPE in VirtualMachineMa...

2017-02-21 Thread nathanejohnson
Github user nathanejohnson commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102305779
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

I'm not sure I'm following.  From reading the code a bit, it looks like the 
only scenario where stateTransitTo would return false is 
when the state was not properly persisted to the db.  No exception is thrown 
in that case.  In the current code, a false return *or* an exception will trigger the revert.  
Also, currently previousStep would *probably* never be null - though if work is 
null this will cause another NPE on line 747 of the current code.  In the PR, 
previousStep will *probably* only be null in the case where work is null.

What is the value of making a distinction between a false return and an exception 
in this case?  My adding a check for previousStep != null is simply to make 
sure that work is not null above.  I could also check work for null here too, 
but I don't think that's what you're getting at.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1773: CLOUDSTACK-9607: Preventing template deletion when t...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1773
  
Trillian test result (tid-875)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 27492 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1773-t875-kvm-centos7.zip
Intermitten failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermitten failure detected: /marvin/tests/smoke/test_snapshots.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 325.34 | 
test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.04 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 149.78 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.19 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 226.19 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 259.80 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 518.66 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 510.72 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1302.14 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 533.12 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 754.46 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1284.66 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.34 | test_volumes.py
test_08_resize_volume | Success | 156.35 | test_volumes.py
test_07_resize_fail | Success | 156.41 | test_volumes.py
test_06_download_detached_volume | Success | 156.27 | test_volumes.py
test_05_detach_volume | Success | 155.83 | test_volumes.py
test_04_delete_attached_volume | Success | 146.14 | test_volumes.py
test_03_download_attached_volume | Success | 156.23 | test_volumes.py
test_02_attach_volume | Success | 124.17 | test_volumes.py
test_01_create_volume | Success | 711.19 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.21 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.72 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 163.70 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 272.73 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.62 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.31 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 35.84 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.14 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.82 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.85 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.16 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.29 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 35.39 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.14 | test_templates.py
test_03_delete_template | Success | 5.10 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 35.38 | test_templates.py
test_10_destroy_cpvm | Success | 136.43 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.10 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.51 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.59 | test_ssvm.py
test_06_stop_cpvm | Success | 131.69 | test_ssvm.py
test_05_stop_ssvm | Success | 133.64 | test_ssvm.py
test_04_cpvm_internals | Success | 1.39 | test_ssvm.py
test_03_ssvm_internals | Success | 3.76 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.12 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.11 | test_snapshots.py
test_04_change_offering_small | Success | 239.57 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.05 | test_service_offerings.py
test_01_create_service_offering | Success | 0.10 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.12 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.17 | test_secondary_storage.py
test_09_reboot_router | Success | 35.31 | test_routers.py
test_08_start_router | Success | 25.26 | 

[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102303566
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+List vols = _volsDao.findByInstance(vm.getId());
 if (deviceId != null) {
-if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-throw new RuntimeException("deviceId should be 1,2,4-15");
+if (deviceId.longValue() > maxDataVolumesSupported || 
deviceId.longValue() == 3) {
+throw new RuntimeException("deviceId should be 1,2,4-" + 
maxDataVolumesSupported);
 }
 for (VolumeVO vol : vols) {
 if (vol.getDeviceId().equals(deviceId)) {
-throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vmId);
+throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vm.getId());
 }
 }
 } else {
 // allocate deviceId here
 List devIds = new ArrayList();
-for (int i = 1; i < 15; i++) {
+for (int i = 1; i < maxDataVolumesSupported; i++) {
 devIds.add(String.valueOf(i));
 }
 devIds.remove("3");
--- End diff --

Thanks. Added this.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102303488
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+List vols = _volsDao.findByInstance(vm.getId());
 if (deviceId != null) {
-if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-throw new RuntimeException("deviceId should be 1,2,4-15");
+if (deviceId.longValue() > maxDataVolumesSupported || 
deviceId.longValue() == 3) {
+throw new RuntimeException("deviceId should be 1,2,4-" + 
maxDataVolumesSupported);
 }
 for (VolumeVO vol : vols) {
 if (vol.getDeviceId().equals(deviceId)) {
-throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vmId);
+throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vm.getId());
 }
 }
 } else {
 // allocate deviceId here
 List devIds = new ArrayList();
--- End diff --

@HrWiggles. Noted, not considering it for now.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102303229
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+List vols = _volsDao.findByInstance(vm.getId());
 if (deviceId != null) {
-if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-throw new RuntimeException("deviceId should be 1,2,4-15");
+if (deviceId.longValue() > maxDataVolumesSupported || 
deviceId.longValue() == 3) {
+throw new RuntimeException("deviceId should be 1,2,4-" + 
maxDataVolumesSupported);
 }
 for (VolumeVO vol : vols) {
 if (vol.getDeviceId().equals(deviceId)) {
-throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vmId);
+throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vm.getId());
 }
 }
 } else {
 // allocate deviceId here
 List devIds = new ArrayList();
-for (int i = 1; i < 15; i++) {
+for (int i = 1; i < maxDataVolumesSupported; i++) {
--- End diff --

@HrWiggles Thanks for pointing this. Addressed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1773: CLOUDSTACK-9607: Preventing template deletion when t...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1773
  
Trillian test result (tid-874)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 34092 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1773-t874-kvm-centos7.zip
Intermitten failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermitten failure detected: /marvin/tests/smoke/test_snapshots.py
Intermitten failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 355.70 | 
test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.04 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 160.18 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 71.18 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 246.46 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 289.60 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 533.70 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 512.87 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1406.40 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 544.89 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 749.53 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1290.57 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 156.51 | test_volumes.py
test_08_resize_volume | Success | 156.44 | test_volumes.py
test_07_resize_fail | Success | 161.58 | test_volumes.py
test_06_download_detached_volume | Success | 156.31 | test_volumes.py
test_05_detach_volume | Success | 155.74 | test_volumes.py
test_04_delete_attached_volume | Success | 151.24 | test_volumes.py
test_03_download_attached_volume | Success | 156.34 | test_volumes.py
test_02_attach_volume | Success | 95.61 | test_volumes.py
test_01_create_volume | Success | 716.35 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.22 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 100.67 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 158.69 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 257.71 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.73 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.25 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.98 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.82 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.82 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.39 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.34 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 60.60 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.17 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.16 | test_templates.py
test_01_create_template | Success | 50.53 | test_templates.py
test_10_destroy_cpvm | Success | 166.62 | test_ssvm.py
test_09_destroy_ssvm | Success | 168.72 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.61 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.52 | test_ssvm.py
test_06_stop_cpvm | Success | 131.71 | test_ssvm.py
test_05_stop_ssvm | Success | 133.80 | test_ssvm.py
test_04_cpvm_internals | Success | 1.19 | test_ssvm.py
test_03_ssvm_internals | Success | 3.36 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.13 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.11 | test_snapshots.py
test_04_change_offering_small | Success | 239.70 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.05 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.15 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.18 | test_secondary_storage.py
test_09_reboot_router | Success | 35.32 

[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102302052
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+List vols = _volsDao.findByInstance(vm.getId());
 if (deviceId != null) {
-if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-throw new RuntimeException("deviceId should be 1,2,4-15");
+if (deviceId.longValue() > maxDataVolumesSupported || 
deviceId.longValue() == 3) {
--- End diff --

@HrWiggles Addressed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102301869
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
--- End diff --

@HrWiggles Will check if I can write a test for the same.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102301645
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
--- End diff --

@HrWiggles Thanks for the review. The max data volumes here is the actual 
hypervisor capability (which is stored in the db). Device id 3 has been 
reserved for something for a long time and I don't want that to be affected.
When _getMaxDataVolumesSupported()_ returns 6, at most 5 volumes can be 
attached to the VM, with one device reserved (probably for virtual tools/CD-ROM). 
_maxDataVolumesSupported_ specifies the data volume limit supported by the 
hypervisor; it is not related to _maxDeviceId_.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1935: CLOUDSTACK-9764: Delete domain failure due to Accoun...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-520


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1956: CLOUDSTACK-9796 - Fix NPE in VirtualMachineMa...

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102300083
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

I know you did not write this code, but it seemed a good opportunity to 
discuss and evaluate it.

I understood you. I had already noticed the try/finally block, and this is the 
point I wanted to discuss. As in the example you described, if an exception 
happens, the finally block is executed and the state is restored to the previous 
one (assuming that `stateTransitTo(vm, event, hostId)` will change the 
step); and this makes sense in the case of an exception. However, if `NO` 
exception happens, the step is also reverted to the previous one (assuming that 
`stateTransitTo(vm, event, hostId)` will change the step). The `finally` 
is always executed, whether the execution of 
`stateTransitTo(vm, event, hostId)` succeeds or fails.

If we only wanted to deal with exceptions, it would make much more sense 
to execute the revert in a `catch` block. I think that we want/need to change 
the step to `null` when `stateTransitTo` returns false and `previousStep` 
is null. You are changing exactly that with the extra condition at line 757. 

Did you understand what I mean?
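
To make the difference concrete, here is a small self-contained toy (generic names, not the CloudStack types) contrasting the two shapes; the only assumption is that the transition can either return false or throw:

```java
// Toy demo: a finally-based revert fires when the transition returns false OR
// throws, while a catch-based revert fires only on the exception path.
class RevertSemanticsDemo {
    static String step = "new";

    static boolean transit(boolean succeed) {
        return succeed;  // a real transition could also throw
    }

    static boolean finallyStyle(boolean succeed) {
        String previous = step;
        step = "updated";
        boolean result = false;
        try {
            result = transit(succeed);
            return result;
        } finally {
            if (!result) {
                step = previous;  // reverts on a false return or an exception
            }
        }
    }

    static boolean catchStyle(boolean succeed) {
        String previous = step;
        step = "updated";
        try {
            return transit(succeed);  // a plain false return keeps the new step
        } catch (RuntimeException e) {
            step = previous;          // reverts only when an exception happened
            throw e;
        }
    }

    public static void main(String[] args) {
        System.out.println(finallyStyle(false) + " -> step=" + step); // false -> step=new
        step = "new";
        System.out.println(catchStyle(false) + " -> step=" + step);   // false -> step=updated
    }
}
```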


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1956: CLOUDSTACK-9796 - Fix NPE in VirtualMachineMa...

2017-02-21 Thread nathanejohnson
Github user nathanejohnson commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102298974
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

Oh, and to your point earlier, getStep() generally shouldn't ever return null,
I don't *think*, because the step column in the op_it_work table is marked
NOT NULL.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcorded wait timeou...

2017-02-21 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1861
  
Yes @sateesh-chodapuneedi, this failure has been a pain for a while... it'll
be good to invest some time in fixing it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1935: CLOUDSTACK-9764: Delete domain failure due to Accoun...

2017-02-21 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@blueorangutan package


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1956: CLOUDSTACK-9796 - Fix NPE in VirtualMachineMa...

2017-02-21 Thread nathanejohnson
Github user nathanejohnson commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102295771
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

Most of this code I didn't write, but I can make some guesses:

The _workDao.updateStep(work, previousStep) line is in the finally block, which
will execute even if an exception is thrown in stateTransitTo (like
NoTransitionException, for instance). So if stateTransitTo a) returns false, or
b) throws an exception, then result will be false, and line 758 will run. So if
something happens and the state isn't transitioned, someone wanted the work
reverted to its previous step value. Sort of a rollback, maybe?

In the case of the VM hung in Starting, my desired side effect is that I want
stateTransitTo to be called and set the state to Stopped, i.e.,
Event.AgentReportStopped -> State.Stopped. The work has already expired at
this point, so it is null. I was trying to preserve the same behavior as
before when work was not null.

Sorry if this wasn't very clear.
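
Putting the pieces together, a simplified sketch of the flow we are discussing
(the shape follows the diff above, with the generics and result handling
condensed; it is not the literal method body):

```java
// Simplified sketch of changeState(): remember the current step, optimistically
// move to the new one, and revert in finally only when the transition failed.
protected boolean changeState(final VMInstanceVO vm, final Event event, final Long hostId,
        final ItWorkVO work, final Step step) throws NoTransitionException {
    Step previousStep = null;
    if (work != null) {
        previousStep = work.getStep();   // remember where the work item was
        _workDao.updateStep(work, step); // optimistically move it to the new step
    }
    boolean result = false;
    try {
        result = stateTransitTo(vm, event, hostId); // may return false or throw
        return result;
    } finally {
        // runs on success, on a false return and on an exception;
        // the revert only happens when the transition did not succeed (the line-757 check)
        if (!result && previousStep != null) {
            _workDao.updateStep(work, previousStep);
        }
    }
}
```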


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1935: CLOUDSTACK-9764: Delete domain failure due to Accoun...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102294407
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
--- End diff --

I think this is a good question. I would add that the ID `0` is reserved
for the root device.

So I would also ask whether `maxDataVolumesSupported` already accounts for the
device that is reserved for the root disk.

For instance, when `getMaxDataVolumesSupported(vm)` returns `6` for the
hypervisor of the VM, does that mean that the VM can have up to 7 devices?
1 root (id `0`), 1 CD-ROM (id `3`) and another 5 for extra disks/volumes.
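
To make the question concrete, a small illustration (the value 6 and the
reserved slots are only the hypothetical numbers from the paragraph above,
not what the code actually enforces):

```java
// Hypothetical numbers only, to frame what the limit of 6 is supposed to cover.
public class DeviceCountExample {
    public static void main(String[] args) {
        int maxDataVolumesSupported = 6; // hypothetical getMaxDataVolumesSupported(vm) value
        // (a) the limit counts only data volumes; root (id 0) and CD-ROM (id 3) come on top
        int totalIfDataOnly = 1 /* root */ + 1 /* CD-ROM */ + maxDataVolumesSupported; // 8 devices
        // (b) the CD-ROM slot is taken out of the same limit, which is what the question assumes
        int totalIfSharedWithCdRom = 1 /* root */ + maxDataVolumesSupported;           // 7 devices
        System.out.println(totalIfDataOnly + " devices vs " + totalIfSharedWithCdRom + " devices");
    }
}
```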



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1956: CLOUDSTACK-9796 - Fix NPE in VirtualMachineMa...

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102292787
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

Ok, now I think I am starting to get it.
But I am still not sure about some things here; would you mind continuing the
discussion?

If the work is not null, you get the previous step (let's assume it is not
null) and call the method `_workDao.updateStep(work, step)`. After this, you
call `stateTransitTo(vm, event, hostId)`. Why do we need to call
`_workDao.updateStep(work, previousStep)` again? The `previousStep` continues
to be the same.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102291066
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
--- End diff --

big 👍 for this request :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1956: CLOUDSTACK-9796 - Fix NPE in VirtualMachineMa...

2017-02-21 Thread nathanejohnson
Github user nathanejohnson commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102290456
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

@rafaelweingartner if work is null, previousStep will stay null.  Maybe not 
the clearest way to handle this, but this prevents a null work from being 
passed down below.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1956: CLOUDSTACK-9796 - Fix NPE in VirtualMachineMa...

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1956#discussion_r102289931
  
--- Diff: 
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java ---
@@ -744,14 +744,17 @@ protected boolean checkWorkItems(final VMInstanceVO 
vm, final State state) throw
 
 protected  boolean changeState(final T vm, 
final Event event, final Long hostId, final ItWorkVO work, final Step step) 
throws NoTransitionException {
 // FIXME: We should do this better.
-final Step previousStep = work.getStep();
-_workDao.updateStep(work, step);
+Step previousStep = null;
+if (work != null) {
+previousStep = work.getStep();
--- End diff --

Can `work.getStep()` return null?
I see that you added a check at line 757, `previousStep != null`. Why would we
need that check there, and not need it here (line 750)?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1957: CLOUDSTACK-9748:VPN Users search functionality broke...

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1957
  
@Ashadeepa why do we have 2 PRs for the same problem?
It seems that one of them can be closed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1878: CLOUDSTACK-9717: [VMware] RVRs have mismatching MAC ...

2017-02-21 Thread remibergsma
Github user remibergsma commented on the issue:

https://github.com/apache/cloudstack/pull/1878
  
@sureshanaparti why do the MAC addresses need to be the same on both
routers? We're also executing arpings to update our neighbours. Networking-wise
there is no need for them to be the same. I've seen it in other parts of the
code as well and I really wonder why we do this.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1878: CLOUDSTACK-9717: [VMware] RVRs have mismatching MAC ...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1878
  
Trillian test result (tid-873)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 33766 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1878-t873-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vm_life_cycle.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 362.57 
| test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 334.37 | 
test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.03 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 159.36 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 65.83 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 239.78 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 274.04 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 531.49 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 509.60 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1410.18 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 536.46 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 747.15 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.49 | test_volumes.py
test_08_resize_volume | Success | 156.71 | test_volumes.py
test_07_resize_fail | Success | 156.11 | test_volumes.py
test_06_download_detached_volume | Success | 155.99 | test_volumes.py
test_05_detach_volume | Success | 150.63 | test_volumes.py
test_04_delete_attached_volume | Success | 150.93 | test_volumes.py
test_03_download_attached_volume | Success | 156.02 | test_volumes.py
test_02_attach_volume | Success | 95.59 | test_volumes.py
test_01_create_volume | Success | 711.06 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.18 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.70 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 158.66 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 267.19 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.64 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.11 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.67 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.06 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.92 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 126.20 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.13 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.26 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 60.49 | test_templates.py
test_08_list_system_templates | Success | 0.02 | test_templates.py
test_07_list_public_templates | Success | 0.02 | test_templates.py
test_05_template_permissions | Success | 0.04 | test_templates.py
test_04_extract_template | Success | 5.14 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.16 | test_templates.py
test_01_create_template | Success | 30.29 | test_templates.py
test_10_destroy_cpvm | Success | 131.42 | test_ssvm.py
test_09_destroy_ssvm | Success | 168.49 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.51 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.41 | test_ssvm.py
test_06_stop_cpvm | Success | 131.56 | test_ssvm.py
test_05_stop_ssvm | Success | 133.54 | test_ssvm.py
test_04_cpvm_internals | Success | 1.14 | test_ssvm.py
test_03_ssvm_internals | Success | 3.29 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.08 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.09 | test_ssvm.py
test_01_snapshot_root_disk | Success | 10.94 | test_snapshots.py
test_04_change_offering_small | Success | 239.55 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.04 | test_service_offerings.py
test_01_create_service_offering | Success | 0.07 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.12 | test_secondary_storage.py
test_01_sys_vm_start | Success | 

[GitHub] cloudstack pull request #1935: CLOUDSTACK-9764: Delete domain failure due to...

2017-02-21 Thread nvazquez
Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r102264841
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,79 +274,97 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+boolean rollBackState = false;
+boolean hasDedicatedResources = false;
+
+try {
+long ownerId = domain.getAccountId();
+if ((cleanup != null) && cleanup.booleanValue()) {
+if (!cleanupDomain(domain.getId(), ownerId)) {
 rollBackState = true;
 CloudRuntimeException e =
-new CloudRuntimeException("Delete failed on 
domain " + domain.getName() + " (id: " + domain.getId() +
-"); Please make sure all users and sub 
domains have been removed from the domain before deleting");
+new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
+domain.getId() + ").");
 e.addProxyObject(domain.getUuid(), "domainId");
 throw e;
 }
-_messageBus.publish(_name, 
MESSAGE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
 } else {
-rollBackState = true;
-String msg = null;
-if (!accountsForCleanup.isEmpty()) {
-msg = accountsForCleanup.size() + " accounts to 
cleanup";
-} else if (!networkIds.isEmpty()) {
-msg = networkIds.size() + " non-removed networks";
-} else if (hasDedicatedResources) {
-msg = "dedicated resources.";
+//don't delete the domain if there are accounts set 
for cleanup, or non-removed networks 

[GitHub] cloudstack pull request #1935: CLOUDSTACK-9764: Delete domain failure due to...

2017-02-21 Thread nvazquez
Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r102265306
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,79 +274,97 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+boolean rollBackState = false;
+boolean hasDedicatedResources = false;
+
+try {
+long ownerId = domain.getAccountId();
+if ((cleanup != null) && cleanup.booleanValue()) {
--- End diff --

Done, thanks @rafaelweingartner


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1935: CLOUDSTACK-9764: Delete domain failure due to...

2017-02-21 Thread nvazquez
Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r102265150
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,79 +274,97 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+boolean rollBackState = false;
+boolean hasDedicatedResources = false;
+
+try {
+long ownerId = domain.getAccountId();
+if ((cleanup != null) && cleanup.booleanValue()) {
+if (!cleanupDomain(domain.getId(), ownerId)) {
 rollBackState = true;
 CloudRuntimeException e =
-new CloudRuntimeException("Delete failed on 
domain " + domain.getName() + " (id: " + domain.getId() +
-"); Please make sure all users and sub 
domains have been removed from the domain before deleting");
+new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
+domain.getId() + ").");
 e.addProxyObject(domain.getUuid(), "domainId");
 throw e;
 }
-_messageBus.publish(_name, 
MESSAGE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
 } else {
-rollBackState = true;
-String msg = null;
-if (!accountsForCleanup.isEmpty()) {
-msg = accountsForCleanup.size() + " accounts to 
cleanup";
-} else if (!networkIds.isEmpty()) {
-msg = networkIds.size() + " non-removed networks";
-} else if (hasDedicatedResources) {
-msg = "dedicated resources.";
+//don't delete the domain if there are accounts set 
for cleanup, or non-removed networks 

[GitHub] cloudstack issue #1948: [CLOUDSTACK-9793] Faster IP in subnet check

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1948
  
@ProjectMoon may I ask you a question?
The "net" object is already an array/map, right?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1949: Automated Cloudstack bugs 9277 9276 9275 9274 9273 9...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1949
  
Trillian test result (tid-872)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 29754 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1949-t872-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_vpn.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 310.79 | 
test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.04 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 160.32 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 56.14 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 255.75 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 287.17 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 515.32 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 503.85 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1426.01 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 532.65 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 729.61 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1264.82 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 156.54 | test_volumes.py
test_08_resize_volume | Success | 151.48 | test_volumes.py
test_07_resize_fail | Success | 156.49 | test_volumes.py
test_06_download_detached_volume | Success | 151.31 | test_volumes.py
test_05_detach_volume | Success | 150.79 | test_volumes.py
test_04_delete_attached_volume | Success | 151.51 | test_volumes.py
test_03_download_attached_volume | Success | 156.34 | test_volumes.py
test_02_attach_volume | Success | 89.22 | test_volumes.py
test_01_create_volume | Success | 621.29 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.22 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 100.77 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 159.75 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 263.39 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.94 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.21 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 36.01 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.15 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.86 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.88 | test_vm_life_cycle.py
test_02_start_vm | Success | 5.14 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.33 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 40.45 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.06 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.15 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.15 | test_templates.py
test_01_create_template | Success | 50.48 | test_templates.py
test_10_destroy_cpvm | Success | 161.71 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.42 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.34 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.60 | test_ssvm.py
test_06_stop_cpvm | Success | 136.55 | test_ssvm.py
test_05_stop_ssvm | Success | 133.78 | test_ssvm.py
test_04_cpvm_internals | Success | 0.99 | test_ssvm.py
test_03_ssvm_internals | Success | 3.46 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.15 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.12 | test_snapshots.py
test_04_change_offering_small | Success | 239.66 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.15 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.18 | test_secondary_storage.py
test_09_reboot_router | Success | 35.33 | 

[GitHub] cloudstack issue #1948: [CLOUDSTACK-9793] Faster IP in subnet check

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1948
  
Trillian test result (tid-870)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 33543 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1948-t870-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 874.34 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 379.02 
| test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 330.77 | 
test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 160.11 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.25 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 250.97 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 298.25 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 526.31 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 506.37 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1402.83 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 566.05 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.61 | test_volumes.py
test_08_resize_volume | Success | 156.45 | test_volumes.py
test_07_resize_fail | Success | 161.63 | test_volumes.py
test_06_download_detached_volume | Success | 156.41 | test_volumes.py
test_05_detach_volume | Success | 156.70 | test_volumes.py
test_04_delete_attached_volume | Success | 151.31 | test_volumes.py
test_03_download_attached_volume | Success | 151.39 | test_volumes.py
test_02_attach_volume | Success | 96.19 | test_volumes.py
test_01_create_volume | Success | 717.60 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.60 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.63 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 163.89 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 248.34 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.72 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.27 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 30.86 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.98 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.88 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.40 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 40.55 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.08 | test_templates.py
test_04_extract_template | Success | 5.17 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 45.46 | test_templates.py
test_10_destroy_cpvm | Success | 191.69 | test_ssvm.py
test_09_destroy_ssvm | Success | 133.65 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.59 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.57 | test_ssvm.py
test_06_stop_cpvm | Success | 131.89 | test_ssvm.py
test_05_stop_ssvm | Success | 163.73 | test_ssvm.py
test_04_cpvm_internals | Success | 1.22 | test_ssvm.py
test_03_ssvm_internals | Success | 4.20 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.15 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.34 | test_snapshots.py
test_04_change_offering_small | Success | 210.37 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.06 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.07 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.20 | test_secondary_storage.py
test_09_reboot_router | Success | 35.30 | test_routers.py
test_08_start_router | Success | 30.29 | test_routers.py
test_07_stop_router | Success | 10.17 | test_routers.py

Re: Handling of DB migrations on forks

2017-02-21 Thread Rafael Weingärtner
Ah, great.

Thanks for clarifying that for me.

On Tue, Feb 21, 2017 at 11:44 AM, Daan Hoogland 
wrote:

> No Rafael the other way around,
>
> The first time you call it it might change things, but no matter how
> often you call it it will have changed in exactly the same way.
>
> On Tue, Feb 21, 2017 at 5:41 PM, Rafael Weingärtner
>  wrote:
> > I think this might be others doubt as well. Sorry if it seems a
> naïve/silly
> > question.
> >
> > By idempotent, I understand something that you can make as many
> operations
> > as possible and it does not change (broadly speaking). For example, (in
> > theory), when you do an HTTP Get requests, the response would be always
> the
> > same, and the state of the resource should not change.
> >
> > Now, regarding SQLs; this SQL for instance “CREATE TABLE XXX…..” is
> > idempotent for you (as far as I understood reading this thread), right?
> >
> > And something like “CREATE or REPLACE TABLE XXX…..” would be
> > non-idempotent. Did I understand it right?
> >
> > On Tue, Feb 21, 2017 at 11:28 AM, Daan Hoogland  >
> > wrote:
> >
> >> On Tue, Feb 21, 2017 at 3:19 PM, Marc-Aurèle Brothier <
> ma...@exoscale.ch>
> >> wrote:
> >> >
> >> > Daan, the project maintainers should enforce that. I also posted
> another
> >> > finding that the upgrade path are not identical due to the order in
> which
> >> > upgrade files are executed, see (https://github.com/apache/
> >> > cloudstack/pull/1768)
> >>
> >> If you mean refuse PRs containing non-idem-potent sql code yes, but as
> >> for real work it is all on a voluntary basis, that is someone must
> >> find it worth the time to encode it. I complete agree with a policy to
> >> refuse comntaining other creates and drop then as in
> >>
> >> > > "CREATE OR REPLACE VIEW..." "DROP IF EXISTS".
> >>
> >> So please feel free to speak up if you catch somebody trying to sneak
> >> in code like that. They have my -1
> >>
> >> --
> >> Daan
> >>
> >
> >
> >
> > --
> > Rafael Weingärtner
>
>
>
> --
> Daan
>



-- 
Rafael Weingärtner


Re: Handling of DB migrations on forks

2017-02-21 Thread Daan Hoogland
No Rafael, the other way around.

The first time you call it, it might change things, but no matter how
often you call it, it will have changed in exactly the same way.
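
As a concrete illustration (a minimal sketch with made-up table and view
names, not code from the tree):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class IdempotentUpgradeExample {
    // Running this step once or ten times leaves the schema in exactly the same state.
    static void upgrade(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE IF NOT EXISTS example_table ("
                    + "id BIGINT UNSIGNED NOT NULL PRIMARY KEY)");   // safe to re-run
            stmt.execute("CREATE OR REPLACE VIEW example_view AS "
                    + "SELECT id FROM example_table");               // safe to re-run
            // By contrast, a bare "ALTER TABLE example_table ADD COLUMN name VARCHAR(255)"
            // is NOT idempotent: the second run fails because the column already exists.
        }
    }
}
```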

On Tue, Feb 21, 2017 at 5:41 PM, Rafael Weingärtner
 wrote:
> I think this might be others doubt as well. Sorry if it seems a naïve/silly
> question.
>
> By idempotent, I understand something that you can make as many operations
> as possible and it does not change (broadly speaking). For example, (in
> theory), when you do an HTTP Get requests, the response would be always the
> same, and the state of the resource should not change.
>
> Now, regarding SQLs; this SQL for instance “CREATE TABLE XXX…..” is
> idempotent for you (as far as I understood reading this thread), right?
>
> And something like “CREATE or REPLACE TABLE XXX…..” would be
> non-idempotent. Did I understand it right?
>
> On Tue, Feb 21, 2017 at 11:28 AM, Daan Hoogland 
> wrote:
>
>> On Tue, Feb 21, 2017 at 3:19 PM, Marc-Aurèle Brothier 
>> wrote:
>> >
>> > Daan, the project maintainers should enforce that. I also posted another
>> > finding that the upgrade path are not identical due to the order in which
>> > upgrade files are executed, see (https://github.com/apache/
>> > cloudstack/pull/1768)
>>
>> If you mean refuse PRs containing non-idem-potent sql code yes, but as
>> for real work it is all on a voluntary basis, that is someone must
>> find it worth the time to encode it. I complete agree with a policy to
>> refuse comntaining other creates and drop then as in
>>
>> > > "CREATE OR REPLACE VIEW..." "DROP IF EXISTS".
>>
>> So please feel free to speak up if you catch somebody trying to sneak
>> in code like that. They have my -1
>>
>> --
>> Daan
>>
>
>
>
> --
> Rafael Weingärtner



-- 
Daan


Re: Handling of DB migrations on forks

2017-02-21 Thread Rafael Weingärtner
I think this might be others' doubt as well. Sorry if it seems a naïve/silly
question.

By idempotent, I understand something on which you can perform as many
operations as you like and it does not change (broadly speaking). For example,
(in theory) when you do an HTTP GET request, the response would always be the
same, and the state of the resource should not change.

Now, regarding SQL; this SQL, for instance “CREATE TABLE XXX…..”, is
idempotent for you (as far as I understood reading this thread), right?

And something like “CREATE or REPLACE TABLE XXX…..” would be
non-idempotent. Did I understand it right?

On Tue, Feb 21, 2017 at 11:28 AM, Daan Hoogland 
wrote:

> On Tue, Feb 21, 2017 at 3:19 PM, Marc-Aurèle Brothier 
> wrote:
> >
> > Daan, the project maintainers should enforce that. I also posted another
> > finding that the upgrade path are not identical due to the order in which
> > upgrade files are executed, see (https://github.com/apache/
> > cloudstack/pull/1768)
>
> If you mean refuse PRs containing non-idem-potent sql code yes, but as
> for real work it is all on a voluntary basis, that is someone must
> find it worth the time to encode it. I complete agree with a policy to
> refuse comntaining other creates and drop then as in
>
> > > "CREATE OR REPLACE VIEW..." "DROP IF EXISTS".
>
> So please feel free to speak up if you catch somebody trying to sneak
> in code like that. They have my -1
>
> --
> Daan
>



-- 
Rafael Weingärtner


Re: Handling of DB migrations on forks

2017-02-21 Thread Daan Hoogland
On Tue, Feb 21, 2017 at 3:19 PM, Marc-Aurèle Brothier  wrote:
>
> Daan, the project maintainers should enforce that. I also posted another
> finding that the upgrade path are not identical due to the order in which
> upgrade files are executed, see (https://github.com/apache/
> cloudstack/pull/1768)

If you mean refuse PRs containing non-idempotent SQL code, yes, but as
for real work it is all on a voluntary basis, that is, someone must
find it worth the time to encode it. I completely agree with a policy to
refuse PRs containing creates and drops other than in the form of

> > "CREATE OR REPLACE VIEW..." "DROP IF EXISTS".

So please feel free to speak up if you catch somebody trying to sneak
in code like that. They have my -1

-- 
Daan


[GitHub] cloudstack issue #1915: CLOUDSTACK-9746 system-vm: logrotate config causes c...

2017-02-21 Thread serbaut
Github user serbaut commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
Ok. I removed it from rsyslog since it should be safe there.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1915: CLOUDSTACK-9746 system-vm: logrotate config causes c...

2017-02-21 Thread leprechau
Github user leprechau commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
We always want `compress` ... but the only time you need or want
`delaycompress` is if you can't be sure that the program writing to the log can
be successfully told to stop appending to that log. In the case where you are
certain that the writing program is going to do the right thing, there is no
need to add `delaycompress`, as it just takes up extra space in already-rotated
logs until the next iteration.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1915: CLOUDSTACK-9746 system-vm: logrotate config causes c...

2017-02-21 Thread serbaut
Github user serbaut commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
Is it safe to remove delaycompress across the board? I assume it is there
for a reason.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1878: CLOUDSTACK-9717: [VMware] RVRs have mismatchi...

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1878#discussion_r102226833
  
--- Diff: 
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
 ---
@@ -2071,6 +2120,14 @@ protected StartAnswer execute(StartCommand cmd) {
 }
 }
 
+private void replaceNicsMacSequenceInBootArgs(String oldMacSequence, 
String newMacSequence, VirtualMachineTO vmSpec) {
+String bootArgs = vmSpec.getBootArgs();
+if (!StringUtils.isEmpty(bootArgs) && 
!StringUtils.isEmpty(oldMacSequence) && !StringUtils.isEmpty(newMacSequence)) {
+//Update boot args with the new nic mac addresses
--- End diff --

What about moving this comment to the method documentation?
Also, how do you feel about test cases? The method is pretty simple and it
will not be hard to write some unit tests for it.
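
For example, a rough JUnit/Mockito sketch of such a test (the MAC values, the
`vmwareResource` fixture and the assumption that the method writes the result
back via `setBootArgs(...)` are all illustrative, and the method would need to
be package-visible to be called like this):

```java
@Test
public void replaceNicsMacSequenceInBootArgsShouldSwapOldForNewSequence() {
    final String oldMacSequence = "02:00:11:11:11:11|02:00:22:22:22:22";
    final String newMacSequence = "02:00:33:33:33:33|02:00:44:44:44:44";

    final VirtualMachineTO vmSpec = Mockito.mock(VirtualMachineTO.class);
    Mockito.when(vmSpec.getBootArgs()).thenReturn("nic_macs=" + oldMacSequence + " other=args");

    vmwareResource.replaceNicsMacSequenceInBootArgs(oldMacSequence, newMacSequence, vmSpec);

    // assuming the method pushes the updated boot args back into the VM spec
    Mockito.verify(vmSpec).setBootArgs(Mockito.contains(newMacSequence));
}
```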



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Handling of DB migrations on forks

2017-02-21 Thread Frank Maximus
I'm also in favor of Liquibase.
Hibernate might be smart enough to add new columns,
but to change, for example, the type of a column,
a more complex data migration might be necessary.
Liquibase preconditions, combined with good identifiers,
ensure that upgrades will be more granular.


*Frank Maximus *
Senior Software Development Engineer
*nuage*networks.net 
p: (+32) 3 240 73 81


On Tue, Feb 21, 2017 at 10:31 AM Jeff Hair  wrote:

Something like Liquibase would be nice. Hibernate might be even better.
Even idempotent migrations would be a step in the right direction. Perhaps
reject any migration changes that aren't INSERT IGNORE, DROP IF EXISTS, etc?

I'm not sure why the DAO was originally hand-rolled. Perhaps the original
developers didn't think any ORM on the market met their needs. I would love
to leave DB migrations almost entirely behind. I believe Hibernate is smart
enough to construct dynamic migrations when fields are added and removed
from tables, right?

*Jeff Hair*
Technical Lead and Software Developer

Tel: (+354) 415 0200
j...@greenqloud.com
www.greenqloud.com

On Tue, Feb 21, 2017 at 9:27 AM, Daan Hoogland 
wrote:

> Marc-Aurele, you are totally right and people agree with you but no one
> seems to give this priority
>
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, Utrecht Utrecht 3531 VENetherlands
> @shapeblue
>
>
>
>
> -Original Message-
> From: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
> Sent: 21 February 2017 10:04
> To: dev@cloudstack.apache.org
> Subject: Re: Handling of DB migrations on forks
>
> IMO the database changes should be idempotent as much as possible with
> "CREATE OR REPLACE VIEW..." "DROP IF EXISTS". For other things like
> altering a table, it's more complicated to achieve that in pure SQL.
> A good call would be to integrate http://www.liquibase.org/ to manage the
> schema and changes in a more descriptive way which allows branches/merges.
>
> On Tue, Feb 21, 2017 at 9:46 AM, Daan Hoogland 
> wrote:
>
> > Good strategy and I would make that not a warning but a fatal, as the
> > resulting ACS version will probably not work.
> >
> > On Tue, Feb 14, 2017 at 12:16 PM, Wei ZHOU 
> wrote:
> > > Then you have to create your own branch forked from 4.10.0
> > >
> > > In our branch, I moved some table changes (eg ALTER TABLE, CREATE
> > > TABLE) from schema-.sql to
> > > engine/schema/src/com/cloud/upgrade/dao/UpgradeXXXtoYYY.java.
> > > If SQLException is throwed, then show a warning message instead
> > > upgrade interruption..
> > > By this way, the database will not be broken in the upgrade or fresh
> > > installation.
> > >
> > > -Wei
> > >
> > >
> > > 2017-02-14 11:52 GMT+01:00 Jeff Hair :
> > >
> > >> Hi all,
> > >>
> > >> Many people in the CS community maintain forks of CloudStack, and
> > >> might have implemented features or bug fixes long before they get
> > >> into
> > mainline.
> > >> I'm curious as to how people handle database migrations with their
> > forks.
> > >> To make a DB migration, the CS version must be updated. If a
> > >> developer
> > adds
> > >> a migration to their fork on say, version 4.8.5. Later, they decide
> > >> to upgrade to 4.10.0 which has their migration in the schema
> > >> upgrade to 4.10.0.
> > >>
> > >> How do people handle this? As far as I know, CS will crash on the
> > >> DB upgrade due to SQL errors. Do people just sanitize migrations
> > >> when they pull from downstream or somehting?
> > >>
> > >> Jeff
> > >>
> >
> >
> >
> > --
> > Daan
> >
>


Re: apidocs build failure

2017-02-21 Thread Will Stevens
Is there any chance we can fix the 'roles' issue with the API doc so we can
get the docs split into the 'Admin', 'Domain Admin' and 'User' again?  The
introduction of the dynamic roles broke the generation of the API docs with
the different roles and I was not able to figure out how to fix it.  Any
ideas for how to fix that?

*Will STEVENS*
Lead Developer



On Tue, Feb 21, 2017 at 3:01 AM, Daan Hoogland 
wrote:

> @Rajani @Rohit I missed this mail and fixed the apidoc on build.a.o
> yesterday. I can disable it or throw it away might we so wish
>
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, Utrecht Utrecht 3531 VENetherlands
> @shapeblue
>
>
>
>
> -Original Message-
> From: Rohit Yadav [mailto:rohit.ya...@shapeblue.com]
> Sent: 17 February 2017 10:27
> To: dev@cloudstack.apache.org
> Subject: Re: apidocs build failure
>
> Thanks Rajani, I've no objections.
>
>
> Regards.
>
> 
> From: Rajani Karuturi 
> Sent: 17 February 2017 14:07:34
> To: dev@cloudstack.apache.org
> Subject: Re: apidocs build failure
>
> since travis is already verifying this, I asked infra to disable this job.
>
> Infra ticket https://issues.apache.org/jira/browse/INFRA-13527
>
> Please comment on the ticket if you think otherwise.
>
> Thanks,
>
> ~ Rajani
>
> http://cloudplatform.accelerite.com/
>
> On February 13, 2017 at 12:29 PM, Rohit Yadav
> (rohit.ya...@shapeblue.com) wrote:
>
> Jenkins need to have jdk8 available, someone need to setup jenv on it as
> well.
>
> (The first job in Travis does apidocs/marvin/rat related checks to
> validate changes and apidocs build).
>
> Regards.
>
> 
> From: Rajani Karuturi 
> Sent: 09 February 2017 12:21:40
> To: dev@cloudstack.apache.org
> Subject: apidocs build failure
>
> Hi all,
>
> All the apidocs builds[1] are failing after the recent java 8 change. Can
> anyone having access fix it? Or should we talk to INFRA about it?
>
> Error message:
>
> [INFO]
> -
> [ERROR] COMPILATION ERROR : [INFO]
> -
> [ERROR] javac: invalid target release: 1.8 Usage: javac use -help for a
> list of possible options
>
> [1] https://builds.apache.org/job/cloudstack-apidocs-master/
>
> Thanks
>
> ~ Rajani
>
> http://cloudplatform.accelerite.com/
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com ( http://www.shapeblue.com )
> 53 Chandos Place, Covent Garden, London WC2N 4HSUK @shapeblue
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>
>
>
>


[GitHub] cloudstack issue #1955: CLOUDSTACK-8239 Add VirtIO SCSI support for KVM host...

2017-02-21 Thread dmabry
Github user dmabry commented on the issue:

https://github.com/apache/cloudstack/pull/1955
  
We are deploying this to our QA environment right now and hope to have it 
tested in a few days.  Great work @kiwiflyer and @nathanejohnson.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1915: CLOUDSTACK-9746 system-vm: logrotate config causes c...

2017-02-21 Thread dmabry
Github user dmabry commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
@serbaut I agree with @ustcweizhou.  Please remove delaycompress and up the
rotation count to 10.  I'd like to get this PR in as it is the second part of
the problem resolution for my issue.  After that, LGTM.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1957: CLOUDSTACK-9748:VPN Users search functionalit...

2017-02-21 Thread Ashadeepa
Github user Ashadeepa commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1957#discussion_r10411
  
--- Diff: server/src/com/cloud/network/vpn/RemoteAccessVpnManagerImpl.java 
---
@@ -621,6 +627,10 @@ public void 
doInTransactionWithoutResult(TransactionStatus status) {
 sc.setParameters("username", username);
 }
 
+if (keyword!= null) {
--- End diff --

@ustcweizhou: My bad. Amended the changes. Thanks.
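
For anyone following along, the usual CloudStack idiom for such a keyword
filter looks roughly like this (a sketch; the exact SearchBuilder wiring in
RemoteAccessVpnManagerImpl may differ):

```java
// In the SearchBuilder setup (sketch):
//   sb.and("keyword", sb.entity().getUsername(), SearchCriteria.Op.LIKE);
// In the search method, wrap the keyword with wildcards before setting the parameter:
if (keyword != null) {
    sc.setParameters("keyword", "%" + keyword + "%");
}
```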


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1957: CLOUDSTACK-9748:VPN Users search functionality broke...

2017-02-21 Thread Ashadeepa
Github user Ashadeepa commented on the issue:

https://github.com/apache/cloudstack/pull/1957
  
@ustcweizhou: My bad. Amended the changes. Thanks.




Re: Handling of DB migrations on forks

2017-02-21 Thread Marc-Aurèle Brothier
Jeff, I do wonder the same thing about the ORM... I have hit the ORM's
limitations in many places now, not being able to do joins on the same
table, specifically for the capacity check, and you can see many hand-made
SQL queries in the code for that part. I think the views came into the
picture for the same reason.

Daan, the project maintainers should enforce that. I also posted another
finding that the upgrade paths are not identical due to the order in which
upgrade files are executed, see https://github.com/apache/cloudstack/pull/1768

On Tue, Feb 21, 2017 at 10:31 AM, Jeff Hair  wrote:

> Something like Liquibase would be nice. Hibernate might be even better.
> Even idempotent migrations would be a step in the right direction. Perhaps
> reject any migration changes that aren't INSERT IGNORE, DROP IF EXISTS,
> etc?
>
> I'm not sure why the DAO was originally hand-rolled. Perhaps the original
> developers didn't think any ORM on the market met their needs. I would love
> to leave DB migrations almost entirely behind. I believe Hibernate is smart
> enough to construct dynamic migrations when fields are added and removed
> from tables, right?
>
> *Jeff Hair*
> Technical Lead and Software Developer
>
> Tel: (+354) 415 0200
> j...@greenqloud.com
> www.greenqloud.com
>
> On Tue, Feb 21, 2017 at 9:27 AM, Daan Hoogland <
> daan.hoogl...@shapeblue.com>
> wrote:
>
> > Marc-Aurele, you are totally right and people agree with you but no one
> > seems to give this priority
> >
> > daan.hoogl...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, Utrecht Utrecht 3531 VENetherlands
> > @shapeblue
> >
> >
> >
> >
> > -Original Message-
> > From: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
> > Sent: 21 February 2017 10:04
> > To: dev@cloudstack.apache.org
> > Subject: Re: Handling of DB migrations on forks
> >
> > IMO the database changes should be idempotent as much as possible with
> > "CREATE OR REPLACE VIEW..." "DROP IF EXISTS". For other things like
> > altering a table, it's more complicated to achieve that in pure SQL.
> > A good call would be to integrate http://www.liquibase.org/ to manage
> the
> > schema and changes in a more descriptive way which allows
> branches/merges.
> >
> > On Tue, Feb 21, 2017 at 9:46 AM, Daan Hoogland 
> > wrote:
> >
> > > Good strategy and I would make that not a warning but a fatal, as the
> > > resulting ACS version will probably not work.
> > >
> > > On Tue, Feb 14, 2017 at 12:16 PM, Wei ZHOU 
> > wrote:
> > > > Then you have to create your own branch forked from 4.10.0.
> > > >
> > > > In our branch, I moved some table changes (e.g. ALTER TABLE, CREATE
> > > > TABLE) from schema-.sql to
> > > > engine/schema/src/com/cloud/upgrade/dao/UpgradeXXXtoYYY.java.
> > > > If an SQLException is thrown, a warning message is shown instead of
> > > > interrupting the upgrade.
> > > > This way, the database will not be broken during an upgrade or fresh
> > > > installation.
> > > >
> > > > -Wei
> > > >
> > > >
> > > > 2017-02-14 11:52 GMT+01:00 Jeff Hair :
> > > >
> > > >> Hi all,
> > > >>
> > > >> Many people in the CS community maintain forks of CloudStack, and
> > > >> might have implemented features or bug fixes long before they get
> > > >> into mainline.
> > > >> I'm curious as to how people handle database migrations with their
> > > >> forks.
> > > >> To make a DB migration, the CS version must be updated. Say a
> > > >> developer adds a migration to their fork on version 4.8.5; later,
> > > >> they decide to upgrade to 4.10.0, which already includes their
> > > >> migration in the schema upgrade to 4.10.0.
> > > >>
> > > >> How do people handle this? As far as I know, CS will crash on the
> > > >> DB upgrade due to SQL errors. Do people just sanitize migrations
> > > >> when they pull from downstream or something?
> > > >>
> > > >> Jeff
> > > >>
> > >
> > >
> > >
> > > --
> > > Daan
> > >
> >
>
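
To make the ideas above concrete, here is a minimal sketch of the defensive
upgrade approach Wei and Marc-Aurèle describe: use idempotent SQL where the
dialect allows it, and downgrade failures of non-idempotent statements (such as
ALTER TABLE) to warnings instead of aborting the upgrade. The class name, SQL
statements, and logging are hypothetical, not actual CloudStack upgrade code:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.logging.Logger;

    // Hypothetical upgrade step illustrating the approach described above.
    public class Upgrade481to4850 {
        private static final Logger LOG = Logger.getLogger(Upgrade481to4850.class.getName());

        public void performDataMigration(Connection conn) {
            // Idempotent statements can simply be re-run on every upgrade.
            executeIgnoringErrors(conn, "DROP VIEW IF EXISTS my_custom_view");
            executeIgnoringErrors(conn, "CREATE OR REPLACE VIEW my_custom_view AS SELECT id, name FROM vm_instance");
            // ALTER TABLE is hard to make idempotent in pure SQL, so a
            // duplicate-column error is logged and skipped rather than fatal.
            executeIgnoringErrors(conn, "ALTER TABLE vpn_users ADD COLUMN notes VARCHAR(255)");
        }

        private void executeIgnoringErrors(Connection conn, String sql) {
            try (Statement stmt = conn.createStatement()) {
                stmt.execute(sql);
            } catch (SQLException e) {
                LOG.warning("Skipping statement (possibly already applied): " + sql + " - " + e.getMessage());
            }
        }
    }

Whether a skipped statement should merely warn or should abort the upgrade is
exactly the trade-off discussed above; the sketch only shows the warning variant.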


[GitHub] cloudstack issue #1768: CLOUDSTACK 9601: Upgrade: change logic for update pa...

2017-02-21 Thread marcaurele
Github user marcaurele commented on the issue:

https://github.com/apache/cloudstack/pull/1768
  
@rhtyd Changing the sequence is only idempotent if you have been upgrading 
to each new version step by step. If you skipped over versions, then the path 
is different for each original version. This fix aims to avoid such a 
difference.




Re: [QUESTION] Upgrade path to JDK8

2017-02-21 Thread Ron Wheeler


http://stackoverflow.com/questions/10895969/can-newer-jre-versions-run-java-programs-compiled-with-older-jdk-versions
You can run code compiled by the java 1.7 or 1.6 or earlier SDKs on a 
Java 8 JVM.


This gets you the improved speed of the Java 8 JVM even if you do not 
rebuild the code.


If this was not true, life would be chaos when you upgraded your Java on 
a production server.

All of the code that ran a few minutes ago would fail.

Think about how much java is running on a typical data centre. You would 
have heard the howls of pain if all that code suddenly stopped running.


It should be easy to test existing jars compiled with an earlier version of 
Java on a machine running the Java 8 JVM.

Just replace the Java runtime and restart the server.

A reasonable migration path is to replace the JVM and continue to run 
existing code.

Upgrade the code at your leisure.

An application can be constructed from jars built with different SDKs.
I had no trouble with the dozens of Apache and third-party libraries
that make up my application when I changed my compiler to Java 8.
One minute I was compiling and testing with Java 7 and the next minute I
was compiling with Java 8, and the code still worked with all of the same
third-party jars.


No source code changes were required in any code to upgrade.
Since then, I have incorporated Java 8 features into most of our code,
but that is not really part of this discussion.


I hope that this helps.

Ron
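
To check which -target a given jar was actually built for, a small standalone
utility (not part of CloudStack, shown here only as a quick way to inspect
bytecode) can read the class-file major version: 51 means Java 7 bytecode and
52 means Java 8 bytecode.

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Print the class-file version of a .class file given on the command line.
    public class ClassVersionCheck {
        public static void main(String[] args) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                int magic = in.readInt();              // class files start with 0xCAFEBABE
                if (magic != 0xCAFEBABE) {
                    System.err.println("Not a class file: " + args[0]);
                    return;
                }
                int minor = in.readUnsignedShort();
                int major = in.readUnsignedShort();    // 51 = Java 7, 52 = Java 8
                System.out.println(args[0] + ": major=" + major + " minor=" + minor);
            }
        }
    }

Running it against a class extracted from systemvm.iso or an agent jar would
show whether that artifact actually requires a 1.8 JVM.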

On 21/02/2017 3:03 AM, Marc-Aurèle Brothier wrote:

No, there isn't any issue except having the bugs & fixes of the JDK you're
using. You can compile it with JDK 1.8 as long as you keep the target
bytecode version at 1.7.

On Tue, Feb 21, 2017 at 8:15 AM, Wei ZHOU  wrote:


Marco,

Good point. Is there any issue if we compile code with jdk8 but run it on
jdk7 (systemvm) ?

-Wei

2017-02-21 7:43 GMT+01:00 Marc-Aurèle Brothier :


There's a list of compatibility issues between Java 7 & Java 8 here
http://www.oracle.com/technetwork/java/javase/8-compatibility-guide-2156366.html

The main problem I would see with two systems communicating while running
different Java versions is the way they handle serialization and
de-serialization of objects, which has been a problem in the past between
some Java versions. AFAIK we're using JSON for that now, so if the code
already compiles with Java 8, it should not be a problem.

On Mon, Feb 20, 2017 at 10:36 PM, Wei ZHOU 

wrote:

We tested 4.7.1+systemd patches as well, it also works fine.

-Wei

2017-02-20 22:34 GMT+01:00 Wei ZHOU :


@Will and @Syed, I built the packages of 4.9.2+systemd patches on ubuntu
16.04 (openjdk 8).
Then installed the packages on the management server and kvm hosts (all are
ubuntu 16.04 with openjdk8).
The systemvm template is 4.6 with openjdk7.

cpvm and ssvm work fine.

As there is no java process in the VR, I did not check it; the VR should not
be impacted.

-Wei

2017-02-20 22:16 GMT+01:00 Pierre-Luc Dion :


That's quite interesting Chiradeep!

so I could do something like this I guess:

mvn clean install

and then this one to build the systemvm.iso:
mvn -Psystemvm -source 1.7 -target 1.7 install


I'll give it a try! But for now, I'm worried about existing VRs: they must
continue to work while running on jdk7. Newer VPCs would be ok to run with
jdk8, but we need to make sure that while upgrading the management server we
are not obliged to upgrade VRs.

For sure it is required to upgrade the VR for strongswan + JDK8, but an
existing VR should remain usable for port forwarding, vm creation and
such...

I'll post my findings...

Thanks!



On Mon, Feb 20, 2017 at 3:59 PM, Chiradeep Vittal <chirade...@gmail.com> wrote:

You can build the system vm with -source 1.7 -target 1.7
Also unless you are using Java8 features (lambda) the classfiles produced
by javac 8 should work in a 1.7 JVM

Sent from my iPhone


On Feb 20, 2017, at 11:51 AM, Will Stevens <wstev...@cloudops.com> wrote:

yes, that is what I was expecting.  which is why I was asking about Wei's
setup because he seems to have worked around that problem.  Or he has a
custom SystemVM template running with both JDK7 and JDK8.

*Will STEVENS*
Lead Developer




On Mon, Feb 20, 2017 at 2:20 PM, Syed Ahmed <sah...@cloudops.com> wrote:

The problem is that systemvm.iso is built with java 8 whereas java on the
VR is java 7

On Mon, Feb 20, 2017 at 13:20 Will Stevens <wstev...@cloudops.com> wrote:

Did it work after resetting a VPC or when blowing away the SSVM or CPVM?  I
would not expect the SSVM or the CPVM to come up if the management server
was built with JDK8 and the system vm template is only using JDK7.  Can you
confirm?

*Will STEVENS*
Lead Developer

On Mon, Feb 20, 2017 at 1:15 PM, Wei ZHOU <ustcweiz...@gmail.com> wrote:

We've 

[GitHub] cloudstack issue #1773: CLOUDSTACK-9607: Preventing template deletion when t...

2017-02-21 Thread jburwell
Github user jburwell commented on the issue:

https://github.com/apache/cloudstack/pull/1773
  
@priyankparihar I agree with @ustcweizhou regarding the default value of 
`forced` in terms of backwards compatibility.

Also, why do we permit deletion of a template when it is associated with one 
or more active volumes?  It seems like we are giving the user the means to 
corrupt their system.



