[GitHub] cloudstack issue #1797: CLOUDSTACK-9630: Cannot use listNics API as advertis...

2017-03-28 Thread PranaliM
Github user PranaliM commented on the issue:

https://github.com/apache/cloudstack/pull/1797
  
Test LGTM based on manual testing of the fix:

**Before Fix:**

id: d82b2278-ca19-46da-b532-0a044e778bb8
networkid: 71424164-015c-4163-9dde-74019fb22ce2
netmask: 255.255.255.0
gateway: 10.1.1.1
ipaddress: 10.1.1.119
traffictype: Guest
isdefault: true
macaddress: 02:00:1c:fe:00:01
deviceid: 0
virtualmachineid: 71a0be44-69e4-4821-b8d9-e579ed04d52a


**After Fix:**

id: d82b2278-ca19-46da-b532-0a044e778bb8
networkid: 71424164-015c-4163-9dde-74019fb22ce2
netmask: 255.255.255.0
gateway: 10.1.1.1
ipaddress: 10.1.1.119
traffictype: Guest
type: **Isolated**
isdefault: true
macaddress: 02:00:1c:fe:00:01
deviceid: 0
virtualmachineid: 71a0be44-69e4-4821-b8d9-e579ed04d52a


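To make the regression concrete, here is a hedged Python sketch, using sample dicts mirroring the values above rather than a live listNics call (the helper name and dict keys are illustrative assumptions): the response only carries a network `type` field (`Isolated`) after the fix.

```python
# Hypothetical check against listNics response data; the dicts below are
# built from the values shown above, not fetched from a real API.

def nic_has_network_type(nic):
    """True when the nic response exposes a non-empty network 'type' field."""
    return bool(nic.get("type"))

before_fix = {
    "id": "d82b2278-ca19-46da-b532-0a044e778bb8",
    "traffictype": "Guest",
    "ipaddress": "10.1.1.119",
    # 'type' was missing from the listNics response before the fix
}
after_fix = dict(before_fix, type="Isolated")

assert not nic_has_network_type(before_fix)
assert nic_has_network_type(after_fix)
```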

---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Survey “Building an autonomic cloud for you”

2017-03-28 Thread Lucas Berri Cristofolini
Hey guys,

Once again I'd like to thank everyone who has answered the survey so far!

Now, just in case you are wondering what this Autonomiccs thing is all
about, we've prepared a small video demonstration of what we are working
on. It's a bit rough around the edges but it manages to showcase what is
currently available through our platform:
https://www.youtube.com/watch?v=dj5DO4Dcf98

More data is always welcome, and you can take our short survey right here:
https://goo.gl/forms/vvu0665FfKujn6gP2 ;)

Cheers,
Lucas

On Mon, Mar 20, 2017 at 7:35 PM, Lucas Berri Cristofolini <
lucascristofol...@gmail.com> wrote:

> Dear CloudStack community,
>
> I'd like to thank those of you who've answered the survey so far.
> However, we need more data; the questionnaire should take no more than 3
> minutes to fill out and is available here:
> https://goo.gl/forms/vvu0665FfKujn6gP2. All of the answers are
> confidential (no company will be identified in any of our analyses).
> These data are crucial for focusing our development efforts on features
> that meet your needs.
>
>
> To give you a taste of what our platform can do, our agents can
> autonomously perform virtual machine balancing or consolidation in your
> cloud. For instance, according to our experiments, it is possible to
> save up to 30% of energy when a consolidation management model is applied
> constantly in a cloud. Also, if you want to know more about our virtual
> machine balancing experiments, do not hesitate to contact us!
>
>
> Thanks for your time!
> Lucas
>
> On Fri, Mar 10, 2017 at 2:03 PM, Gabriel Beims Bräscher <
> gabrasc...@gmail.com> wrote:
>
>> Dear Apache CloudStack community,
>>
>>
>> I hope you don't mind me asking for a bit of your time to answer a
>> survey. This survey will help us provide an autonomic cloud for you.
>>
>> Autonomiccs is a Brazilian startup committed to building solutions that
>> autonomically manage and optimize cloud computing infrastructures created
>> with Apache CloudStack. Some of you may have seen a bit of our work at
>> ApacheCon Europe 2016. We hope to work alongside the Apache CloudStack
>> community, taking Apache CloudStack to the next level in cloud
>> computing orchestration.
>>
>> We are working hard to understand the needs of CloudStack users. This
>> survey can help us to provide Apache CloudStack users with the cloud they
>> need, delivering a competitive advantage for the Apache CloudStack
>> ecosystem.
>>
>> If all goes right, we will present the collected data at the CloudStack
>> Collaboration Conference in Miami, within the “*Building an Autonomic
>> CloudStack*” presentation.
>>
>>
>> Survey link:
>> https://docs.google.com/forms/d/e/1FAIpQLSfBhG1KNo_N_nwOBt_18GbHK56_DTch1aEMkv4wAJnTDpE2ow/viewform
>>
>>
>> Thanks in advance for your time,
>>
>> Gabriel.
>>
>
>


[GitHub] cloudstack pull request #1813: CLOUDSTACK-9604: Root disk resize support for...

2017-03-28 Thread serg38
Github user serg38 commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1813#discussion_r108580807
  
--- Diff: test/integration/smoke/test_deploy_vm_root_resize.py ---
@@ -114,36 +134,46 @@ def test_00_deploy_vm_root_resize(self):
 # 2. root disk has new size per listVolumes
 # 3. Rejects non-supported hypervisor types
 """
-if(self.hypervisor.lower() == 'kvm'):
-newrootsize = (self.template.size >> 30) + 2
-self.virtual_machine = VirtualMachine.create(
-self.apiclient,
-self.testdata["virtual_machine"],
-accountid=self.account.name,
-zoneid=self.zone.id,
-domainid=self.account.domainid,
-serviceofferingid=self.service_offering.id,
-templateid=self.template.id,
-rootdisksize=newrootsize
+
+
+newrootsize = (self.template.size >> 30) + 2
+if(self.hypervisor.lower() == 'kvm' or self.hypervisor.lower() ==
+'xenserver' or self.hypervisor.lower() == 'vmware'):
+
+if self.hypervisor=="vmware":
+self.virtual_machine = VirtualMachine.create(
+self.apiclient, self.services["virtual_machine"],
+zoneid=self.zone.id,
+accountid=self.account.name,
+domainid=self.domain.id,
+serviceofferingid=self.services_offering_vmware.id,
+templateid=self.template.id
+)
+
--- End diff --

B.O. tests are failing because for VMware you don't specify 
rootdisksize=newrootsize. You'd probably be better off removing the if-else 
entirely.
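A hedged sketch of what the suggestion amounts to (an illustrative helper, not the merged test code): compute `newrootsize` once and pass `rootdisksize` uniformly for every supported hypervisor, so no branch can silently drop it.

```python
# Illustrative only: build deployVirtualMachine kwargs with rootdisksize set
# for all supported hypervisors instead of branching per hypervisor.

SUPPORTED_HYPERVISORS = {"kvm", "xenserver", "vmware"}

def build_deploy_kwargs(hypervisor, template_size_bytes, base_kwargs):
    """Return deploy kwargs with a uniform rootdisksize (template GB + 2)."""
    if hypervisor.lower() not in SUPPORTED_HYPERVISORS:
        raise ValueError("root disk resize not supported on %s" % hypervisor)
    newrootsize = (template_size_bytes >> 30) + 2  # bytes -> GB, plus 2 GB
    kwargs = dict(base_kwargs)
    kwargs["rootdisksize"] = newrootsize
    return kwargs

# An 8 GB template yields a 10 GB root disk request, VMware included.
kwargs = build_deploy_kwargs("VMware", 8 << 30, {"zoneid": "z1"})
assert kwargs["rootdisksize"] == 10
```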




RE: [VOTE] Retirement of midonet plugin

2017-03-28 Thread Marty Godsey
+1 to retire.

Regards,
Marty Godsey
Principal Engineer
nSource Solutions, LLC

-Original Message-
From: Rafael Weingärtner [mailto:rafaelweingart...@gmail.com] 
Sent: Tuesday, March 28, 2017 4:46 PM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: [VOTE] Retirement of midonet plugin

Dear ACS fellows,
We have discussed the retirement of the Midonet plugin [*]. After quite some 
talk, we converged on a retirement process, and it seems that we all agree 
that the Midonet plugin should be retired. So, to formalize things, we should 
vote on the Midonet retirement.

All users and devs are welcome to vote here:
[+1] I *do want to retire* the Midonet plugin
[0] Whatever happens I am happy
[-1] I *do not want to retire* the Midonet plugin


[*] http://markmail.org/message/x6p3gnvqbbxcj6gs

--
Rafael Weingärtner


[GitHub] cloudstack issue #1813: CLOUDSTACK-9604: Root disk resize support for VMware...

2017-03-28 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
Trillian test result (tid-965)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 43960 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1813-t965-vmware-55u3.zip
Intermittent failure detected: /marvin/tests/smoke/test_deploy_vm_root_resize.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_vm_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_volumes.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_test_vm_volume_snapshot | `Failure` | 316.73 | test_vm_snapshots.py
test_04_rvpc_privategw_static_routes | `Failure` | 846.71 | test_privategw_acl.py
test_02_vpc_privategw_static_routes | `Failure` | 121.36 | test_privategw_acl.py
test_02_deploy_vm_root_resize | `Failure` | 65.56 | test_deploy_vm_root_resize.py
test_01_deploy_vm_root_resize | `Failure` | 40.42 | test_deploy_vm_root_resize.py
test_00_deploy_vm_root_resize | `Failure` | 211.19 | test_deploy_vm_root_resize.py
test_01_vpc_site2site_vpn | Success | 350.38 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 151.17 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 556.72 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 366.84 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 679.38 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 645.41 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1551.81 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 677.85 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 692.05 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1352.22 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 20.68 | test_volumes.py
test_06_download_detached_volume | Success | 60.41 | test_volumes.py
test_05_detach_volume | Success | 105.22 | test_volumes.py
test_04_delete_attached_volume | Success | 15.16 | test_volumes.py
test_03_download_attached_volume | Success | 15.19 | test_volumes.py
test_02_attach_volume | Success | 53.88 | test_volumes.py
test_01_create_volume | Success | 450.07 | test_volumes.py
test_change_service_offering_for_vm_with_snapshots | Success | 448.48 | test_vm_snapshots.py
test_03_delete_vm_snapshots | Success | 275.18 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 228.99 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 158.60 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 282.04 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.70 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 185.19 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 60.90 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.07 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.11 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.11 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.11 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 206.09 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 15.18 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.14 | test_templates.py
test_01_create_template | Success | 110.63 | test_templates.py
test_10_destroy_cpvm | Success | 266.59 | test_ssvm.py
test_09_destroy_ssvm | Success | 238.22 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.26 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.20 | test_ssvm.py
test_06_stop_cpvm | Success | 171.43 | test_ssvm.py
test_05_stop_ssvm | Success | 208.71 | test_ssvm.py
test_04_cpvm_internals | Success | 0.96 | test_ssvm.py
test_03_ssvm_internals | Success | 3.13 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.09 | test_ssvm.py
test_02_list_snapshots_with_removed_data_store | Success | 172.00 | test_snapshots.py

[GitHub] cloudstack issue #1813: CLOUDSTACK-9604: Root disk resize support for VMware...

2017-03-28 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
Trillian test result (tid-964)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 7
Total time taken: 42693 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1813-t964-xenserver-65sp1.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 500.62 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1346.02 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 532.74 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 719.05 | test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 316.10 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 136.58 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 542.83 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 324.82 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 668.07 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 873.93 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 1072.21 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.78 | test_volumes.py
test_08_resize_volume | Success | 100.92 | test_volumes.py
test_07_resize_fail | Success | 121.04 | test_volumes.py
test_06_download_detached_volume | Success | 25.33 | test_volumes.py
test_05_detach_volume | Success | 100.28 | test_volumes.py
test_04_delete_attached_volume | Success | 10.19 | test_volumes.py
test_03_download_attached_volume | Success | 15.30 | test_volumes.py
test_02_attach_volume | Success | 15.73 | test_volumes.py
test_01_create_volume | Success | 397.46 | test_volumes.py
test_change_service_offering_for_vm_with_snapshots | Success | 374.28 | test_vm_snapshots.py
test_03_delete_vm_snapshots | Success | 280.21 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 186.28 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 133.68 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 177.15 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.72 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.25 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 61.06 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.15 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 20.22 | test_vm_life_cycle.py
test_02_start_vm | Success | 25.26 | test_vm_life_cycle.py
test_01_stop_vm | Success | 30.27 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 80.69 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.16 | test_templates.py
test_03_delete_template | Success | 5.10 | test_templates.py
test_02_edit_template | Success | 90.08 | test_templates.py
test_01_create_template | Success | 55.51 | test_templates.py
test_10_destroy_cpvm | Success | 226.72 | test_ssvm.py
test_09_destroy_ssvm | Success | 208.94 | test_ssvm.py
test_08_reboot_cpvm | Success | 356.88 | test_ssvm.py
test_07_reboot_ssvm | Success | 178.89 | test_ssvm.py
test_06_stop_cpvm | Success | 166.71 | test_ssvm.py
test_05_stop_ssvm | Success | 168.94 | test_ssvm.py
test_04_cpvm_internals | Success | 1.14 | test_ssvm.py
test_03_ssvm_internals | Success | 3.36 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_02_list_snapshots_with_removed_data_store | Success | 105.15 | test_snapshots.py
test_01_snapshot_root_disk | Success | 26.38 | test_snapshots.py
test_04_change_offering_small | Success | 121.04 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.24 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.08 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | 

[VOTE] Retirement of midonet plugin

2017-03-28 Thread Rafael Weingärtner
Dear ACS fellows,
We have discussed the retirement of the Midonet plugin [*]. After quite some
talk, we converged on a retirement process, and it seems that we all agree
that the Midonet plugin should be retired. So, to formalize things, we
should vote on the Midonet retirement.

All users and devs are welcome to vote here:
[+1] I *do want to retire* the Midonet plugin
[0] Whatever happens I am happy
[-1] I *do not want to retire* the Midonet plugin


[*] http://markmail.org/message/x6p3gnvqbbxcj6gs

--
Rafael Weingärtner


Re: [DISCUSS] Retirement of midonet plugin

2017-03-28 Thread Rafael Weingärtner
You're right, it is tricky ;)

Well, it seems that we have discussed and formalized the retirement
process. We also discussed the retirement of the Midonet plugin, and it
seems that we have a consensus here. To proceed with the Midonet retirement
I will create a voting thread, so we follow our brand-new process.

See you all on the voting thread.

On Tue, Mar 28, 2017 at 10:30 AM, Daan Hoogland  wrote:

> Yes, well, you write a text to answer questions before they happen, right.
> Making a Q section is a poetic trick that can work well in these kinds of
> texts though ;) but as you said, let's improve as questions come in.
>
> This is only a literary comment, which is always up for argument; your
> content is right and to the point.
>
> On 28/03/17 16:20, "Rafael Weingärtner" 
> wrote:
>
> Thanks for the feedback Daan. I applied the changes as you suggested.
>
> I only created a Q section in advance because I was pretty sure
> someone
> might come with those questions. We can improve it on the fly, when
> other
> repetitive questions start to pop up.
>
>
>
> On Tue, Mar 28, 2017 at 10:10 AM, Daan Hoogland <
> daan.hoogl...@shapeblue.com
> > wrote:
>
> > Rafael, generally good but I don’t see why the question “why do we
> retire
> > … (process implied)” is part of the q section. I would make it
> part of an
> > introduction and put it directly after the first sentence, before
> the rest
> > of that paragraph.
> >
> > Arguably a q section only makes sense after a while when questions
> came
> > forward but I am fine with the other questions as is.
> >
> > On 28/03/17 16:00, "Rafael Weingärtner"  >
> > wrote:
> >
> > I created a page describing the retirement process (it is
> general, not
> > Midonet specific). Could I get some feedback from you guys?
> >
> > https://cwiki.apache.org/confluence/display/CLOUDSTACK/
> > Plugin+retirement+process
> >
> >
> > Then, I will create a proper voting thread and proceed with the
> steps
> > described in our wiki.
> >
> > On Tue, Mar 28, 2017 at 9:56 AM, Gabriel Beims Bräscher <
> > gabrasc...@gmail.com> wrote:
> >
> > > +1 on retiring midonet.
> > >
> > > 2017-03-28 10:50 GMT-03:00 Syed Ahmed :
> > >
> > > > +1 That plugin needs to go :)
> > > >
> > > > On Mon, Mar 27, 2017 at 4:36 PM, Erik Weber <
> terbol...@gmail.com>
> > wrote:
> > > > > Sounds good :-)
> > > > >
> > > > >
> > > > > Erik
> > > > >
> > > > > man. 27. mar. 2017 kl. 18.03 skrev Will Stevens <
> > wstev...@cloudops.com
> > > >:
> > > > >
> > > > >> I think we are planning to do something like "at least 6
> months"
> > > > because of
> > > > >> the irregularity of releases.  This gives us a date (from
> when
> > the
> > > > >> announcement was release becomes available) till the PR to
> > remove gets
> > > > >> merged.  That PR will then be included in the next release
> > whenever it
> > > > is.
> > > > >> So if the "time" is 6 months, it could actually be closer
> to 9
> > months
> > > > >> before it actually gets removes since the release may not
> be
> > ready to
> > > be
> > > > >> cut at 6 months.
> > > > >>
> > > > >> Does this make sense?  It gives us a way to have a date
> alert
> > when a
> > > PR
> > > > >> should be merged rather than trying to track which
> releases each
> > > > >> decommissioned item is targeted for, which could mess up
> timing
> > if
> > > > there is
> > > > >> some long release cycles as well as short ones...
> > > > >>
> > > > >> *Will STEVENS*
> > > > >> Lead Developer
> > > > >>
> > > > >> 
> > > > >>
> > > > >> On Mon, Mar 27, 2017 at 9:46 AM, Daan Hoogland <
> > > > >> daan.hoogl...@shapeblue.com>
> > > > >> wrote:
> > > > >>
> > > > >> > I am against that
> > > > >> > Strain on the community is not in releases but in time.
> We
> > already
> > > > >> > guarantee it is at least one minor
> > > > >> >
> > > > >> > On 27/03/17 15:24, "Erik Weber" 
> wrote:
> > > > >> >
> > > > >> > Personally I would be in favour of using releases
> rather
> > than
> > > > months
> > > > >> > as time unit.
> > > > >> > Our release schedule is very unpredictable, and
> it's hard
> > to
> > > > foresee
> > > > >> > how many releases we've rolled out in 6 months.
> > > > >> >
> > > > >> > 

[GitHub] cloudstack issue #1278: CLOUDSTACK-9198: Virtual router gets deployed in dis...

2017-03-28 Thread rafaelweingartner
Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1278
  
@anshul1886, this finger-pointing is not good.

I do not know why people did not do the work as it should have been done 
before; I was probably not around when that was done. I only asked you to 
remove those variables because you were touching the code in which they are 
found. It is not only with you: every time I review code and there is room 
for improvement, I suggest it. I also measure my suggestions and will never 
ask for something huge; normally I ask/suggest small and concise 
improvements, such as the removal of unused variables or blocks of code.

I was present in most of the PRs created by @nvazquez, and you can see 
how this type of discussion greatly improved all of the code he worked on.

If you do not want to remove something that is not being used, that is fine. 
However, I would like a clarification: if the variables you are changing are 
not used (as you finally admitted), then how can changing them solve the 
problem you reported in CLOUDSTACK-9198?




[GitHub] cloudstack issue #1859: CLOUDSTACK-8672 : NCC Integration with CloudStack

2017-03-28 Thread rafaelweingartner
Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1859
  
Folks, what about a middle ground here?

I was checking the commits. For instance, all of the commits titled 
"Added/implemented XXX ." could all be squashed by the same author. There are 
a bunch of commits in this style that introduce a single class. Also, 
subsequent commits that change the introduced classes by the same author can 
also be squashed. That way, no one loses credit and the history is maintained.

After the squashing process is done, we can evaluate and discuss the 
situation further. 





[GitHub] cloudstack pull request #837: CLOUDSTACK-8855 Improve Error Message for Host...

2017-03-28 Thread rafaelweingartner
Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/837#discussion_r108469206
  
--- Diff: server/src/com/cloud/alert/AlertManagerImpl.java ---
@@ -767,7 +767,9 @@ public void sendAlert(AlertType alertType, long 
dataCenterId, Long podId, Long c
 // set up a new alert
 AlertVO newAlert = new AlertVO();
 newAlert.setType(alertType.getType());
-newAlert.setSubject(subject);
+//do not have a separate column for content.
+//appending the message to the subject for now.
+newAlert.setSubject(subject+content);
--- End diff --

I agree with you regarding the contributor's time. I also find it great 
that you documented this and opened a Jira ticket. However, for this specific 
case, I am really not comfortable with the change as it is. As I said before, 
the code at line 772 opens the gates for unexpected runtime exceptions 
(a.k.a. bugs). If others are willing to take the risk of merging and later 
dealing with the consequences, I cannot do anything about it. I am only 
pointing at the problem and making it quite clear what I think.

I really do not see any trouble in doing things the right way here. It is 
only a matter of creating an ALTER TABLE statement that adds a column to the 
table. Then you create the corresponding field in `AlertVO` and use it; as 
simple as that.





[GitHub] cloudstack pull request #1918: Management Server UI (VM statistics page) CPU...

2017-03-28 Thread rafaelweingartner
Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1918#discussion_r108463719
  
--- Diff: server/src/com/cloud/api/query/dao/UserVmJoinDaoImpl.java ---
@@ -196,6 +196,7 @@ public UserVmResponse newUserVmResponse(ResponseView 
view, String objectName, Us
 // stats calculation
 VmStats vmStats = ApiDBUtils.getVmStatistics(userVm.getId());
 if (vmStats != null) {
+
--- End diff --

I think you can remove this extra line here




[GitHub] cloudstack pull request #2021: CLOUDSTACK-9854: Fix test_primary_storage tes...

2017-03-28 Thread nvazquez
GitHub user nvazquez opened a pull request:

https://github.com/apache/cloudstack/pull/2021

CLOUDSTACK-9854: Fix test_primary_storage test failure due to live migration

Fix for the test_primary_storage integration tests on the simulator.

When finding storage pool migration options for a volume on a running VM, the 
API returns None, as the hypervisor doesn't support live migration.


2017-03-28 06:07:55,958 - DEBUG - Sending GET Cmd : 
findStoragePoolsForMigration===
2017-03-28 06:07:55,977 - DEBUG - Response : None
2017-03-28 06:07:55,983 - CRITICAL - EXCEPTION: 
test_03_migration_options_storage_tags: ['Traceback (most recent call 
last):\n', '  File "/opt/python/2.7.12/lib/python2.7/unittest/case.py", line 
329, in run\ntestMethod()\n', '  File 
"/home/travis/.local/lib/python2.7/site-packages/marvin/lib/decoratorGenerators.py",
 line 30, in test_wrapper\nreturn test(self, *args, **kwargs)\n', '  File 
"/home/travis/build/apache/cloudstack/test/integration/smoke/test_primary_storage.py",
 line 547, in test_03_migration_options_storage_tags\npools_suitable = 
filter(lambda p : p.suitableformigration, pools_response)\n', "TypeError: 
'NoneType' object is not iterable\n"]


So we simply stop the VM before sending the findStoragePoolsForMigration command.
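Independent of stopping the VM first, the traceback also points at the None-handling itself; a hedged sketch (a hypothetical helper, not the actual Marvin code) of tolerating a None response instead of iterating over it:

```python
# Hypothetical guard for a findStoragePoolsForMigration response that can be
# None when the hypervisor does not support live migration.

def suitable_pools(pools_response):
    """Return pools flagged suitableformigration, tolerating None."""
    if pools_response is None:
        return []  # no candidates, rather than a TypeError on iteration
    return [p for p in pools_response if p.get("suitableformigration")]

assert suitable_pools(None) == []
assert suitable_pools([
    {"id": "p1", "suitableformigration": True},
    {"id": "p2", "suitableformigration": False},
]) == [{"id": "p1", "suitableformigration": True}]
```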

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/nvazquez/cloudstack CLOUDSTACK-9854

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/2021.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2021


commit e313dafea46cf281bf09cc66cfcaf6a38d53ca90
Author: nvazquez 
Date:   2017-03-28T14:35:55Z

CLOUDSTACK-9854: Fix test_primary_storage test failure due to live migration






Re: [DISCUSS] Retirement of midonet plugin

2017-03-28 Thread Daan Hoogland
Yes, well, you write a text to answer questions before they happen, right. 
Making a Q section is a poetic trick that can work well in these kinds of 
texts though ;) but as you said, let's improve as questions come in.

This is only a literary comment, which is always up for argument; your 
content is right and to the point.
 
On 28/03/17 16:20, "Rafael Weingärtner"  wrote:

Thanks for the feedback Daan. I applied the changes as you suggested.

I only created a Q section in advance because I was pretty sure someone
might come with those questions. We can improve it on the fly, when other
repetitive questions start to pop up.



On Tue, Mar 28, 2017 at 10:10 AM, Daan Hoogland  wrote:

> Rafael, generally good but I don’t see why the question “why do we retire
> … (process implied)” is part of the q section. I would make it part of 
an
> introduction and put it directly after the first sentence, before the rest
> of that paragraph.
>
> Arguably a q section only makes sense after a while when questions came
> forward but I am fine with the other questions as is.
>
> On 28/03/17 16:00, "Rafael Weingärtner" 
> wrote:
>
Re: [DISCUSS] Retirement of midonet plugin

2017-03-28 Thread Sergey Levitskiy
Looks good to me. 



Re: [DISCUSS] Retirement of midonet plugin

2017-03-28 Thread Rafael Weingärtner
Thanks for the feedback Daan. I applied the changes as you suggested.

I only created a Q section in advance because I was pretty sure someone
might come with those questions. We can improve it on the fly, when other
repetitive questions start to pop up.



On Tue, Mar 28, 2017 at 10:10 AM, Daan Hoogland  wrote:

> Rafael, generally good but I don’t see why the question “why do we retire
> … (process implied)” is part of the q section. I would make it part of an
> introduction and put it directly after the first sentence, before the rest
> of that paragraph.
>
> Arguably a q section only makes sense after a while when questions came
> forward but I am fine with the other questions as is.

Re: [DISCUSS] Retirement of midonet plugin

2017-03-28 Thread Daan Hoogland
Rafael, generally good but I don’t see why the question “why do we retire … 
(process implied)” is part of the q section. I would make it part of an 
introduction and put it directly after the first sentence, before the rest of 
that paragraph.

Arguably a q section only makes sense after a while when questions came 
forward but I am fine with the other questions as is.

On 28/03/17 16:00, "Rafael Weingärtner"  wrote:

I created a page describing the retirement process (it is general, not
Midonet specific). Could I get some feedback from you guys?


https://cwiki.apache.org/confluence/display/CLOUDSTACK/Plugin+retirement+process


Then, I will create a proper voting thread and proceed with the steps
described in our wiki.


Re: [DISCUSS] Retirement of midonet plugin

2017-03-28 Thread Rafael Weingärtner
I created a page describing the retirement process (it is general, not
Midonet specific). Could I get some feedback from you guys?

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Plugin+retirement+process


Then, I will create a proper voting thread and proceed with the steps
described in our wiki.

On Tue, Mar 28, 2017 at 9:56 AM, Gabriel Beims Bräscher <
gabrasc...@gmail.com> wrote:

> +1 on retiring midonet.
>

Re: [DISCUSS] Retirement of midonet plugin

2017-03-28 Thread Gabriel Beims Bräscher
+1 on retiring midonet.

2017-03-28 10:50 GMT-03:00 Syed Ahmed :

> +1 That plugin needs to go :)
>

Re: [DISCUSS] Retirement of midonet plugin

2017-03-28 Thread Syed Ahmed
+1 That plugin needs to go :)

On Mon, Mar 27, 2017 at 4:36 PM, Erik Weber  wrote:
> Sounds good :-)
>
>
> Erik
>
> man. 27. mar. 2017 kl. 18.03 skrev Will Stevens :
>
>> I think we are planning to do something like "at least 6 months" because of
>> the irregularity of releases.  This gives us a date (from when the
>> announcement is released) till the PR to remove gets
>> merged.  That PR will then be included in the next release whenever it is.
>> So if the "time" is 6 months, it could actually be closer to 9 months
>> before it actually gets removed since the release may not be ready to be
>> cut at 6 months.
>>
>> Does this make sense?  It gives us a way to have a date alert when a PR
>> should be merged rather than trying to track which releases each
>> decommissioned item is targeted for, which could mess up timing if there are
>> some long release cycles as well as short ones...
>>
>> *Will STEVENS*
>> Lead Developer
>>
>> 
>>
>> On Mon, Mar 27, 2017 at 9:46 AM, Daan Hoogland <
>> daan.hoogl...@shapeblue.com>
>> wrote:
>>
>> > I am against that
>> > Strain on the community is not in releases but in time. We already
>> > guarantee it is at least one minor
>> >
>> > On 27/03/17 15:24, "Erik Weber"  wrote:
>> >
>> > Personally I would be in favour of using releases rather than months
>> > as time unit.
>> > Our release schedule is very unpredictable, and it's hard to foresee
>> > how many releases we've rolled out in 6 months.
>> >
>> > Deprecate in the next (4.11?), remove a few releases later (4.13?).
>> >
>> > --
>> > Erik
>> >
>> > On Sat, Mar 18, 2017 at 11:23 PM, Rafael Weingärtner
>> >  wrote:
>> > > Sorry for the delay guys, I have been swamped these last days.
>> > >
>> > > In summary, everybody that spoke is in favor of the plugin
>> > retirement. I am
>> > > assuming that people who did not present their opinion agree with
>> > the ones
>> > > presented here.
>> > >
>> > > The process to retire this plugin would be the following:
>> > >
>> > >1. Announce in our mailing lists the road map of retirement, a
>> > > date for
>> > >the final removal should be defined and presented in this road
>> > map;
>> > >2. Create a Jira ticket to execute the plugin disabling (is this
>> > >expression right?!), and of course, a PR to disable the build
>> > until final
>> > >deletion;
>> > >3. Create a Jira ticket to execute the final removal of the
>> > plugin. The
>> > >removal should only happen when the defined date comes by;
>> > >4. Wait patiently while time goes by….
>> > >5. When the time comes, create the PR and execute the plugin
>> > removal.
>> > >
>> > >
>> > > What date would you guys prefer to execute the plugin removal? 3,
>> 6,
>> > or 12
>> > > months from now?
>> > > What do you think of this process? Am I missing something else?
>> > >
>> > >
>> > >
>> > > On Wed, Mar 15, 2017 at 9:13 AM, Jeff Hair 
>> > wrote:
>> > >
>> > >> Complete removal of the plugin was my solution to the problem of
>> > the jar
>> > >> file's dependencies. If it's not used or maintained, then it
>> should
>> > be
>> > >> removed, in my opinion. Disabling it in the build is a good first
>> > step.
>> > >>
>> > >> *Jeff Hair*
>> > >> Technical Lead and Software Developer
>> > >>
>> > >> Tel: (+354) 415 0200
>> > >> j...@greenqloud.com
>> > >> www.greenqloud.com
>> > >>
>> > >> On Wed, Mar 15, 2017 at 8:18 AM, Rohit Yadav <
>> > rohit.ya...@shapeblue.com>
>> > >> wrote:
>> > >>
>> > >> > +1 as others have noted
>> > >> >
>> > >> >
>> > >> > Disable the plugin from the default build for next few releases
>> > and
>> > >> > eventually deprecate/remove the plugin from the codebase. The
>> > roadmap can
>> > >> > look something like:
>> > >> >
>> > >> > - Announce on the MLs that we're planning to do this, send a PR
>> > and get
>> > >> it
>> > >> > accepted
>> > >> >
>> > >> > - During the release process RM should make this information
>> > available to
>> > >> > everyone (including voting thread, would be nice to have a
>> > shortlog of
>> > >> > major changes in the voting email?)
>> > >> >
>> > >> > - In the release notes and release announcement, note that
>> > midonet is no
>> > >> > longer included in the default build and is planned to be
>> > deprecated
>> > >> >
>> > >> > - By end of the year, if we've no communication received
>> > deprecate and
>> > >> > remove the plugin with an announcement
>> > >> >
>> > >> >
>> > >> > I think this should be done with Midonet and any other plugins
>> > that are
>> > >> > causing issues and are no 

[GitHub] cloudstack issue #2001: CLOUDSTACK-9830 Fix DST bug in QuotaAlertManagerTest

2017-03-28 Thread nathanejohnson
Github user nathanejohnson commented on the issue:

https://github.com/apache/cloudstack/pull/2001
  
@rhtyd I'm not sure if this was a joda-time bug or (more likely) a misuse of 
joda-time.  I'm not even sure how best to verify that.  All I know is that when 
using the Java date methods the issue went away.
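For context on why a test like this can be DST-sensitive, here is a minimal, hypothetical sketch using only java.time (not the actual QuotaAlertManagerTest code or this PR's fix; the zone, date, and class name are illustrative). It shows how calendar-based day arithmetic and fixed 24-hour arithmetic silently diverge across a spring-forward transition, which is the general class of bug being discussed:

```java
import java.time.Duration;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DstArithmeticDemo {
    // Noon on the day before the US 2017 spring-forward transition (2017-03-12).
    static final ZonedDateTime START = ZonedDateTime.of(2017, 3, 11, 12, 0, 0, 0,
            ZoneId.of("America/Chicago"));

    // Calendar arithmetic: "same wall-clock time tomorrow" spans only 23 real hours,
    // because one hour is skipped at the DST gap.
    static long elapsedHoursAcrossDst() {
        return Duration.between(START, START.plusDays(1)).toHours();
    }

    // Fixed-duration arithmetic: exactly 24 elapsed hours, so the wall clock
    // lands one hour later than the calendar-based result.
    static int wallClockHourAfter24h() {
        return START.plus(Duration.ofHours(24)).getHour();
    }

    public static void main(String[] args) {
        System.out.println(elapsedHoursAcrossDst()); // 23
        System.out.println(wallClockHourAfter24h()); // 13
    }
}
```

A test that assumes one "day" always equals 24 hours (or vice versa) will pass most of the year and fail only on transition dates, which matches the intermittent failures reported here.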


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1813: CLOUDSTACK-9604: Root disk resize support for VMware...

2017-03-28 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + vmware-55u3) has 
been kicked to run smoke tests




[GitHub] cloudstack issue #1813: CLOUDSTACK-9604: Root disk resize support for VMware...

2017-03-28 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@serg38 are you reading my mind somehow? :)
@blueorangutan test centos7 vmware-55u3




[GitHub] cloudstack issue #1813: CLOUDSTACK-9604: Root disk resize support for VMware...

2017-03-28 Thread serg38
Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@borisstoyanov Can you also kick off vmware test in parallel?




[GitHub] cloudstack issue #1813: CLOUDSTACK-9604: Root disk resize support for VMware...

2017-03-28 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@blueorangutan test centos7 xenserver-65sp1




[GitHub] cloudstack issue #1813: CLOUDSTACK-9604: Root disk resize support for VMware...

2017-03-28 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + xenserver-65sp1) 
has been kicked to run smoke tests




[GitHub] cloudstack issue #1813: CLOUDSTACK-9604: Root disk resize support for VMware...

2017-03-28 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-602




[GitHub] cloudstack issue #1813: CLOUDSTACK-9604: Root disk resize support for VMware...

2017-03-28 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.




[GitHub] cloudstack issue #1813: CLOUDSTACK-9604: Root disk resize support for VMware...

2017-03-28 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@blueorangutan package




[GitHub] cloudstack issue #2020: IR25: WIP

2017-03-28 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/2020
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-601




[GitHub] cloudstack issue #2019: CLOUDSTACK-9851 travis CI build failure after merge ...

2017-03-28 Thread SudharmaJain
Github user SudharmaJain commented on the issue:

https://github.com/apache/cloudstack/pull/2019
  
@koushik-das As suggested, I have added changes to correct the 
maxDataVolume limits.




[GitHub] cloudstack issue #2020: IR25: WIP

2017-03-28 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/2020
  
@abhinandanprateek a Jenkins job has been kicked to build packages. I'll 
keep you posted as I make progress.




[GitHub] cloudstack issue #2020: IR25: WIP

2017-03-28 Thread abhinandanprateek
Github user abhinandanprateek commented on the issue:

https://github.com/apache/cloudstack/pull/2020
  
@blueorangutan package




[GitHub] cloudstack pull request #2020: IR25: WIP

2017-03-28 Thread abhinandanprateek
GitHub user abhinandanprateek opened a pull request:

https://github.com/apache/cloudstack/pull/2020

IR25: WIP



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack ir25

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/2020.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2020


commit ae959c110f3237b8823d227c390f74cc9d692653
Author: Abhinandan Prateek 
Date:   2017-03-28T12:07:59Z

IR25: WIP




4.9.2 Issue with RvR and redundant state

2017-03-28 Thread Patrick W .
Hi All,

Following a migration from 4.5.2 to 4.9.2 on XenServer 6.5, all RvRs are unable to 
properly handle their redundant state. The issues are well described here:
https://issues.apache.org/jira/browse/CLOUDSTACK-9385
https://issues.apache.org/jira/browse/CLOUDSTACK-9692

And addressed in this PR, which will most probably ship with the next release (can 
someone confirm?):
https://github.com/apache/cloudstack/pull/1871

We are heavy users of RvR and missed this bug; it could be worth adding to the 
known issues section of the release notes.

cheers


[GitHub] cloudstack issue #2001: CLOUDSTACK-9830 Fix DST bug in QuotaAlertManagerTest

2017-03-28 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/2001
  
@nathanejohnson can we continue to use Joda-Time and still fix your issue?
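For context, the DST pitfall that such test fixes usually address has the following shape. This is a hypothetical sketch, not the actual test code from the PR; it is illustrated with java.time (standard library), but Joda-Time's plusDays has the same calendar-day semantics, so the point applies to either library:

```java
import java.time.Duration;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DstSketch {
    public static void main(String[] args) {
        ZoneId ny = ZoneId.of("America/New_York");
        // Noon on the day before the 2017 US spring-forward
        // (March 12, 02:00 local jumps to 03:00).
        ZonedDateTime start = ZonedDateTime.of(2017, 3, 11, 12, 0, 0, 0, ny);

        // Calendar arithmetic: same wall-clock time on the next day...
        ZonedDateTime nextDay = start.plusDays(1);

        // ...but only 23 real hours elapsed, because one hour was skipped.
        Duration elapsed = Duration.between(start.toInstant(), nextDay.toInstant());
        System.out.println(elapsed.toHours()); // prints 23, not 24
    }
}
```

A test that assumes "plus one day" always means "plus 24 hours" will pass most of the year and fail only on DST transition days, which is the classic symptom of this bug class.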


[GitHub] cloudstack issue #2001: CLOUDSTACK-9830 Fix DST bug in QuotaAlertManagerTest

2017-03-28 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/2001
  
LGTM. @abhinandanprateek ?


[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-03-28 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@karuturi I'm working on the changes.


[GitHub] cloudstack issue #1996: CLOUDSTACK-9099: SecretKey is returned from the APIs

2017-03-28 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1996
  
@jayapalu this is a useful security fix for 4.9 as well, can you please 
rebase against the 4.9 branch and edit the base branch of the PR to 4.9?


[GitHub] cloudstack issue #2019: CLOUDSTACK-9851 travis CI build failure after merge ...

2017-03-28 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/2019
  
LGTM


[GitHub] cloudstack issue #2019: CLOUDSTACK-9851 travis CI build failure after merge ...

2017-03-28 Thread SudharmaJain
Github user SudharmaJain commented on the issue:

https://github.com/apache/cloudstack/pull/2019
  
Thanks @koushik-das for reviewing this. Let me close this, then; the root 
volume detach should also be fixed as part of PR #1953.


[GitHub] cloudstack issue #1844: CLOUDSTACK-9668 : disksizeallocated of PrimaryStorag...

2017-03-28 Thread jayakarteek
Github user jayakarteek commented on the issue:

https://github.com/apache/cloudstack/pull/1844
  
Tested the scenario. Results are below.

select * from storage_pool;
# id, name, uuid, pool_type, port, data_center_id, pod_id, cluster_id, used_bytes, capacity_bytes, host_address, user_info, path, created, removed, update_time, status, storage_provider_name, scope, hypervisor, managed, capacity_iops
'11', 'pr_2', '80a753bb-6fad-3938-b657-ae62f6119804', 'NetworkFilesystem', '2049', '1', NULL, NULL, '4140196372480', '590228480', '10.147.28.7', NULL, '/export/home/jayakarteek/PR', '2017-03-28 05:54:28', NULL, NULL, 'Up', 'DefaultPrimary', 'ZONE', 'VMware', '0', NULL

select * from op_host_capacity;   **After attaching disk**
# id, host_id, data_center_id, pod_id, cluster_id, used_capacity, reserved_capacity, total_capacity, capacity_type, capacity_state, update_time, created
'11', '11', '1', NULL, NULL, '18253611008', '0', '1180456960', '3', 'Enabled', '2017-03-28 05:57:58', '2017-03-28 05:57:58'

select * from op_host_capacity;   **After detaching and deleting the disk**
# id, host_id, data_center_id, pod_id, cluster_id, used_capacity, reserved_capacity, total_capacity, capacity_type, capacity_state, update_time, created
'11', '11', '1', NULL, NULL, '0', '0', '1180456960', '3', 'Enabled', '2017-03-28 06:22:58', '2017-03-28 05:57:58'




[GitHub] cloudstack issue #1960: [4.11/Future] CLOUDSTACK-9782: Host HA and KVM HA pr...

2017-03-28 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1960
  
@rhtyd there seems to be a merge conflict for this PR; I'm currently running 
tests and will keep you posted


[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-03-28 Thread koushik-das
Github user koushik-das commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@karuturi This PR is not correct and is resulting in Travis CI failures. It 
needs to be properly fixed.


[GitHub] cloudstack issue #2019: CLOUDSTACK-9851 travis CI build failure after merge ...

2017-03-28 Thread koushik-das
Github user koushik-das commented on the issue:

https://github.com/apache/cloudstack/pull/2019
  
@SudharmaJain Although the PR fixes the tests, it is not correct. I looked at 
PR #1953 and the changes there are incorrect.

getMaxDataVolumes() returns only the data volume count, already excluding the 
root and CD-ROM devices. That PR subtracts them again, which results in an 
incorrect limit.
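To make the double-subtraction concrete, here is a minimal sketch of the bug pattern described above. The method and class names are illustrative only, not the actual CloudStack code; the assumption (taken from this review comment) is that getMaxDataVolumes() already excludes the root disk and the CD-ROM device:

```java
// Hypothetical sketch of the device-limit check under discussion.
public class DeviceLimitSketch {
    // Assumed to already EXCLUDE the root disk and CD-ROM device,
    // per the review comment (e.g. 16 devices total minus those 2).
    static int getMaxDataVolumes() {
        return 14;
    }

    // Buggy variant from the reviewed PR: subtracts root + CD-ROM a
    // second time, so the effective limit is too strict.
    static int buggyLimit() {
        return getMaxDataVolumes() - 2; // 12
    }

    // Correct variant: the returned count IS the data-volume limit.
    static int correctLimit() {
        return getMaxDataVolumes(); // 14
    }

    public static void main(String[] args) {
        System.out.println(buggyLimit());   // 12
        System.out.println(correctLimit()); // 14
    }
}
```

With the buggy variant, attaching the 13th and 14th data volumes would be rejected even though the hypervisor supports them, which matches the "cannot attach more than 14 devices" symptom only when the subtraction is removed.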

