Re: [DISCUSS] running sVM and VR as HVM on XenServer

2018-01-12 Thread Tim Mackey
> We found that we can use xenstore-read / xenstore-write to send data from
> dom0 to domU which are in our case  VRs or SVMs. Any reason not using this
> approach ?

xenstore has had some issues in the past, the most notable of which were
limitations on the number of event channels in use, followed by overall
performance impact. IIRC, the event channel issues were fully resolved in
XenServer 6.5, but they do point to a need to test whether there are any
changes to the maximum number of VMs which can be reliably supported. It also
limits legacy support (in case that matters).

Architecturally I think this is a reasonable approach to the problem. One
other thing to note is that xapi replicates xenstore information to all
members of a pool. That might impact RVRs.

-tim

[1] "xenstore is not a high-performance facility and should beused only for
small amounts of control plane data."
https://xenbits.xen.org/docs/4.6-testing/misc/xenstore.txt
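
For reference, a minimal sketch of what that hand-off could look like, assuming
the standard xenstore CLI tools are available (xenstore-write in dom0,
xenstore-read inside a guest with xentools). The key name and helper class are
illustrative only, not existing CloudStack code:

// Hypothetical sketch only: push a small config value from dom0 into a guest's
// xenstore "data" subtree and read it back inside the VR. Assumes the standard
// xenstore CLI tools (dom0 ships them; the guest needs xentools installed).
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class XenStoreSketch {

    // dom0 side: write under /local/domain/<domid>/data, which the guest may read.
    static void writeFromDom0(int domId, String key, String value) throws Exception {
        new ProcessBuilder("xenstore-write",
                "/local/domain/" + domId + "/data/" + key, value)
                .inheritIO().start().waitFor();
    }

    // guest (VR) side: a relative path resolves against the guest's own domain.
    static String readInGuest(String key) throws Exception {
        Process p = new ProcessBuilder("xenstore-read", "data/" + key).start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line = r.readLine();
            p.waitFor();
            return line;
        }
    }

    public static void main(String[] args) throws Exception {
        // The two methods run on different machines (dom0 vs. the VR); this main
        // only demonstrates the calls. Key name and value are illustrative.
        writeFromDom0(12, "cloudstack/vr-config", "eth0=169.254.3.25/16");
        System.out.println(readInGuest("cloudstack/vr-config"));
    }
}

Per the note above, xenstore should only carry small control-plane payloads, so
anything bigger than a few key/value pairs would be better fetched out of band.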

On Fri, Jan 12, 2018 at 4:56 PM, Pierre-Luc Dion  wrote:

> After some verification with Syed and Khosrow,
>
> We found that we can use xenstore-read / xenstore-write to send data from
> dom0 to domU which are in our case  VRs or SVMs. Any reason not using this
> approach ?  that way we would not need a architectural change for XenServer
> pods, and this would support HVM and PV virtual-router. more test required,
> for sure, VR would need to have xentools pre-installed.
>
>
> *Pierre-Luc DION*
> Architecte de Solution Cloud | Cloud Solutions Architect
> t 855.652.5683
>
> *CloudOps* Votre partenaire infonuagique* | *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
>
> On Fri, Jan 12, 2018 at 4:07 PM, Syed Ahmed  wrote:
>
> > KVM uses a VirtIO channel to send information about the IP address and
> > other params to the SystemVMs. We could use a similar strategy in
> XenServer
> > using XenStore. This would involve minimal changes to the code while
> > keeping backward compatibility.
> >
> >
> >
> > On Fri, Jan 12, 2018 at 3:07 PM, Simon Weller 
> > wrote:
> >
> > > They do not. They receive a link-local ip address that is used for host
> > > agent to VR communication. All VR commands are proxied through the host
> > > agent. Host agent to VR communication is over SSH.
> > >
> > >
> > > 
> > > From: Rafael Weingärtner 
> > > Sent: Friday, January 12, 2018 1:42 PM
> > > To: dev
> > > Subject: Re: [DISCUSS] running sVM and VR as HVM on XenServer
> > >
> > > but we are already using this design in vmware deployments (not sure
> > about
> > > KVM). The management network is already an isolated network only used
> by
> > > system vms and ACS. Unless we are attacked by some internal agent, we
> are
> > > safe from customer attack through management networks. Also, we can (if
> > we
> > > don't do yet) restrict access only via these management interfaces in
> > > system VMs(VRs, SSVM, console proxy and others to come).
> > >
> > >
> > >
> > > Can someone confirm if VRs receive management IPs in KVM deployments?
> > >
> > > On Fri, Jan 12, 2018 at 5:36 PM, Syed Ahmed 
> wrote:
> > >
> > > > The reason why we used link local in the first place was to isolate
> the
> > > VR
> > > > from directly accessing the management network. This provides another
> > > layer
> > > > of security in case of a VR exploit. This will also have a side
> effect
> > of
> > > > making all VRs visible to each other. Are we okay accepting this?
> > > >
> > > > Thanks,
> > > > -Syed
> > > >
> > > > On Fri, Jan 12, 2018 at 11:37 AM, Tim Mackey 
> > wrote:
> > > >
> > > > > dom0 already has a DHCP server listening for requests on internal
> > > > > management networks. I'd be wary trying to manage it from an
> external
> > > > > service like cloudstack lest it get reset upon XenServer patch.
> This
> > > > alone
> > > > > makes me favor option #2. I also think option #2 simplifies network
> > > > design
> > > > > for users.
> > > > >
> > > > > Agreed on making this as consistent across flows as possible.
> > > > >
> > > > >
> > > > >
> > > > > On Fri, Jan 12, 2018 at 9:44 AM, Rafael Weingärtner <
> > > > > rafaelweingart...@gmail.com> wrote:
> > > > >
> > > > > > It looks reasonable to manage VRs via management IP network. We
> > > should
> > > > > > focus on using the same work flow for different deployment
> > scenarios.
> > > > > >
> > > > > >
> > > > > > On Fri, Jan 12, 2018 at 12:13 PM, Pierre-Luc Dion <
> > > pd...@cloudops.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > We need to start a architecture discussion about running
> SystemVM
> > > and
> > > > > > > Virtual-Router as HVM instances in XenServer. With recent
> > > > > > Meltdown-Spectre,
> > > > > > > one of the mitigation step is currently to run VMs as HVM on
> > > > XenServer
> > > > > to
> 

Re: [DISCUSS] running sVM and VR as HVM on XenServer

2018-01-12 Thread Pierre-Luc Dion
After some verification with Syed and Khosrow,

we found that we can use xenstore-read / xenstore-write to send data from
dom0 to domU, which in our case are VRs or SVMs. Any reason not to use this
approach? That way we would not need an architectural change for XenServer
pods, and this would support both HVM and PV virtual routers. More testing is
required, for sure, and the VR would need to have xentools pre-installed.


*Pierre-Luc DION*
Architecte de Solution Cloud | Cloud Solutions Architect
t 855.652.5683

*CloudOps* Votre partenaire infonuagique* | *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Fri, Jan 12, 2018 at 4:07 PM, Syed Ahmed  wrote:

> KVM uses a VirtIO channel to send information about the IP address and
> other params to the SystemVMs. We could use a similar strategy in XenServer
> using XenStore. This would involve minimal changes to the code while
> keeping backward compatibility.
>
>
>
> On Fri, Jan 12, 2018 at 3:07 PM, Simon Weller 
> wrote:
>
> > They do not. They receive a link-local ip address that is used for host
> > agent to VR communication. All VR commands are proxied through the host
> > agent. Host agent to VR communication is over SSH.
> >
> >
> > 
> > From: Rafael Weingärtner 
> > Sent: Friday, January 12, 2018 1:42 PM
> > To: dev
> > Subject: Re: [DISCUSS] running sVM and VR as HVM on XenServer
> >
> > but we are already using this design in vmware deployments (not sure
> about
> > KVM). The management network is already an isolated network only used by
> > system vms and ACS. Unless we are attacked by some internal agent, we are
> > safe from customer attack through management networks. Also, we can (if
> we
> > don't do yet) restrict access only via these management interfaces in
> > system VMs(VRs, SSVM, console proxy and others to come).
> >
> >
> >
> > Can someone confirm if VRs receive management IPs in KVM deployments?
> >
> > On Fri, Jan 12, 2018 at 5:36 PM, Syed Ahmed  wrote:
> >
> > > The reason why we used link local in the first place was to isolate the
> > VR
> > > from directly accessing the management network. This provides another
> > layer
> > > of security in case of a VR exploit. This will also have a side effect
> of
> > > making all VRs visible to each other. Are we okay accepting this?
> > >
> > > Thanks,
> > > -Syed
> > >
> > > On Fri, Jan 12, 2018 at 11:37 AM, Tim Mackey 
> wrote:
> > >
> > > > dom0 already has a DHCP server listening for requests on internal
> > > > management networks. I'd be wary trying to manage it from an external
> > > > service like cloudstack lest it get reset upon XenServer patch. This
> > > alone
> > > > makes me favor option #2. I also think option #2 simplifies network
> > > design
> > > > for users.
> > > >
> > > > Agreed on making this as consistent across flows as possible.
> > > >
> > > >
> > > >
> > > > On Fri, Jan 12, 2018 at 9:44 AM, Rafael Weingärtner <
> > > > rafaelweingart...@gmail.com> wrote:
> > > >
> > > > > It looks reasonable to manage VRs via management IP network. We
> > should
> > > > > focus on using the same work flow for different deployment
> scenarios.
> > > > >
> > > > >
> > > > > On Fri, Jan 12, 2018 at 12:13 PM, Pierre-Luc Dion <
> > pd...@cloudops.com>
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > We need to start a architecture discussion about running SystemVM
> > and
> > > > > > Virtual-Router as HVM instances in XenServer. With recent
> > > > > Meltdown-Spectre,
> > > > > > one of the mitigation step is currently to run VMs as HVM on
> > > XenServer
> > > > to
> > > > > > self contain a user space attack from a guest OS.
> > > > > >
> > > > > > Recent hotfix from Citrix XenServer (XS71ECU1009) enforce VMs to
> > > start
> > > > > has
> > > > > > HVM. This is currently problematic for Virtual Routers and
> SystemVM
> > > > > because
> > > > > > CloudStack use PV "OS boot Options" to preconfigure the VR eth0:
> > > > > > cloud_link_local. While using HVM the "OS boot Options" is not
> > > > accessible
> > > > > > to the VM so the VR fail to be properly configured.
> > > > > >
> > > > > > I currently see 2 potential approaches for this:
> > > > > > 1. Run a dhcpserver in dom0 managed by cloudstack so VR eth0
> would
> > > > > receive
> > > > > > is network configuration at boot.
> > > > > > 2. Change the current way of managing VR, SVMs on XenServer,
> > > potentiall
> > > > > do
> > > > > > same has with VMware: use pod management networks and assign a
> POD
> > IP
> > > > to
> > > > > > each VR.
> > > > > >
> > > > > > I don't know how it's implemented in KVM, maybe cloning KVM
> > approach
> > > > > would
> > > > > > work too, could someone explain how it work on this thread?
> > > > > >
> > > > > > I'd a bit fan of a potential #2 aproach because it could
> facilitate
> > > VR
> 

Re: [DISCUSS] Freezing master for 4.11

2018-01-12 Thread Tutkowski, Mike
I’m investigating these now. I have found and fixed two of them so far.

> On Jan 12, 2018, at 2:49 PM, Rohit Yadav  wrote:
> 
> Thanks Rafael and Daan.
> 
> 
>> From: Rafael Weingärtner 
>> 
>> I believe there is no problem in merging Wido’s and Mike’s PRs, they have
>> been extensively discussed and improved (specially Mike’s one).
> 
> Thanks, Mike's PR has several regression smoketest failures and can be 
> accepted only when those failures are fixed.
> 
> We'll cut 4.11 branch start rc1 on Monday that would be a hard freeze. If 
> Mike wants, he can help fix them over the weekend, I can help run smoketests.
> 
>> Having said that; I would be ok with it (no need to revert it), but we need
>> to be more careful with these things. If one wants to merge something,
>> there is no harm in waiting and calling for reviewers via Github, Slack, or
>> even email them directly.
> 
> Additional review was requested, but mea culpa - thanks for your support, 
> noted.
> 
> - Rohit
> 
> On Fri, Jan 12, 2018 at 3:57 PM, Rohit Yadav 
> wrote:
> 
>> All,
>> 
>> 
>> We're down to one feature PR towards 4.11 milestone now:
>> 
>> https://github.com/apache/cloudstack/pull/2298
>> 
>> 
>> The config drive PR from Frank (Nuage) has been accepted today after no
>> regression test failures seen from yesterday's smoketest run. We've also
>> tested, reviewed and merge Wido's (blocker fix) PR.
>> 
>> 
>> I've asked Mike to stabilize the branch; based on the smoketest results
>> from today we can see some failures caused by the PR. I'm willing to work
>> with Mike and others to get this PR tested, and merged over the weekends if
>> we can demonstrate that no regression is caused by it, i.e. no new
>> smoketest regressions. I'll also try to fix regression and test failures
>> over the weekend.
>> 
>> 
>> Lastly, I would like to discuss a mistake I made today with merging the
>> following PR which per our guideline lacks one code review lgtm/approval:
>> 
>> https://github.com/apache/cloudstack/pull/2152
>> 
>> 
>> The changes in above (merged) PR are all localized to a xenserver-swift
>> file, that is not tested by Travis or Trillian, since no new regression
>> failures were seen I accepted and merge it on that discretion. The PR was
>> originally on the 4.11 milestone, however, due to it lacking a JIRA id and
>> no response from the author it was only recently removed from the milestone.
>> 
>> 
>> Please advise if I need to revert this, or we can review/lgtm it
>> post-merge? I'll also ping on the above PR.
>> 
>> 
>> - Rohit
>> 
>> 
> 
> 
> 
>> 
>> 
>> 
>> 
>> From: Wido den Hollander 
>> Sent: Thursday, January 11, 2018 9:17:26 PM
>> To: dev@cloudstack.apache.org
>> Subject: Re: [DISCUSS] Freezing master for 4.11
>> 
>> 
>> 
>>> On 01/10/2018 07:26 PM, Daan Hoogland wrote:
>>> I hope we understand each other correctly: No-one running an earlier
>>> version then 4.11 should miss out on any functionality they are using
>> now.
>>> 
>>> So if you use ipv6 and multiple cidrs now it must continue to work with
>> no
>>> loss of functionality. see my question below.
>>> 
>>> On Wed, Jan 10, 2018 at 7:06 PM, Ivan Kudryavtsev <
>> kudryavtsev...@bw-sw.com>
>>> wrote:
>>> 
 Daan, yes this sounds reasonable, I suppose who would like to fix, could
 do custom build for himself...
 
 But still it should be aknowledged somehow, if you use several cidrs for
 network, don't use v6, or don't upgrade to 4.11 because things will stop
 running well.
 
>>> Does this mean that several cidrs in ipv6 works in 4.9 and not in 4.11?
>>> 
>> 
>> No, it doesn't. IPv6 was introduced in 4.10 and this broke in 4.10.
>> 
>> You can't run with 4.10 with multiple IPv4 CIDRs as well when you have
>> IPv6 enabled.
>> 
>> So this is broken in 4.10 and 4.11 in that case.
>> 
>> Wido
>> 
>>> 
>>> if yes; it is a blocker
>>> 
>>> if no; you might as well upgrade for other features as it doesn't work
>> now
>>> either.
>>> 
>> 
>> rohit.ya...@shapeblue.com
>> www.shapeblue.com
> 
> 
> 
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> @shapeblue
>> 
>> 
>> 
>> 
> 
> 
> --
> Rafael Weingärtner
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, 

Re: [DISCUSS] running sVM and VR as HVM on XenServer

2018-01-12 Thread Syed Ahmed
KVM uses a VirtIO channel to send information about the IP address and
other params to the SystemVMs. We could use a similar strategy in XenServer
using XenStore. This would involve minimal changes to the code while
keeping backward compatibility.
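
As a rough illustration of the guest side of such a channel (a sketch only; the
actual channel name and payload format are whatever the agent defines in the
libvirt domain XML, so the names below are assumptions), a virtio-serial port
simply shows up in the guest as a character device:

// Hypothetical guest-side sketch: read a parameter blob from a virtio-serial
// channel. The port name "org.apache.cloudstack.guest" is illustrative only.
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class VirtioChannelSketch {
    public static void main(String[] args) throws Exception {
        Path port = Paths.get("/dev/virtio-ports/org.apache.cloudstack.guest");
        if (!Files.exists(port)) {
            System.err.println("virtio-serial port not present in this guest");
            return;
        }
        // The host writes the SystemVM parameters into the channel; the guest
        // reads them at boot and applies them (IPs, VM type, keys, ...).
        byte[] payload = Files.readAllBytes(port);
        System.out.println(new String(payload));
    }
}

A XenStore-based equivalent would replace the device read with a xenstore-read,
leaving the rest of the SystemVM bootstrap unchanged.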



On Fri, Jan 12, 2018 at 3:07 PM, Simon Weller 
wrote:

> They do not. They receive a link-local ip address that is used for host
> agent to VR communication. All VR commands are proxied through the host
> agent. Host agent to VR communication is over SSH.
>
>
> 
> From: Rafael Weingärtner 
> Sent: Friday, January 12, 2018 1:42 PM
> To: dev
> Subject: Re: [DISCUSS] running sVM and VR as HVM on XenServer
>
> but we are already using this design in vmware deployments (not sure about
> KVM). The management network is already an isolated network only used by
> system vms and ACS. Unless we are attacked by some internal agent, we are
> safe from customer attack through management networks. Also, we can (if we
> don't do yet) restrict access only via these management interfaces in
> system VMs(VRs, SSVM, console proxy and others to come).
>
>
>
> Can someone confirm if VRs receive management IPs in KVM deployments?
>
> On Fri, Jan 12, 2018 at 5:36 PM, Syed Ahmed  wrote:
>
> > The reason why we used link local in the first place was to isolate the
> VR
> > from directly accessing the management network. This provides another
> layer
> > of security in case of a VR exploit. This will also have a side effect of
> > making all VRs visible to each other. Are we okay accepting this?
> >
> > Thanks,
> > -Syed
> >
> > On Fri, Jan 12, 2018 at 11:37 AM, Tim Mackey  wrote:
> >
> > > dom0 already has a DHCP server listening for requests on internal
> > > management networks. I'd be wary trying to manage it from an external
> > > service like cloudstack lest it get reset upon XenServer patch. This
> > alone
> > > makes me favor option #2. I also think option #2 simplifies network
> > design
> > > for users.
> > >
> > > Agreed on making this as consistent across flows as possible.
> > >
> > >
> > >
> > > On Fri, Jan 12, 2018 at 9:44 AM, Rafael Weingärtner <
> > > rafaelweingart...@gmail.com> wrote:
> > >
> > > > It looks reasonable to manage VRs via management IP network. We
> should
> > > > focus on using the same work flow for different deployment scenarios.
> > > >
> > > >
> > > > On Fri, Jan 12, 2018 at 12:13 PM, Pierre-Luc Dion <
> pd...@cloudops.com>
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > We need to start a architecture discussion about running SystemVM
> and
> > > > > Virtual-Router as HVM instances in XenServer. With recent
> > > > Meltdown-Spectre,
> > > > > one of the mitigation step is currently to run VMs as HVM on
> > XenServer
> > > to
> > > > > self contain a user space attack from a guest OS.
> > > > >
> > > > > Recent hotfix from Citrix XenServer (XS71ECU1009) enforce VMs to
> > start
> > > > has
> > > > > HVM. This is currently problematic for Virtual Routers and SystemVM
> > > > because
> > > > > CloudStack use PV "OS boot Options" to preconfigure the VR eth0:
> > > > > cloud_link_local. While using HVM the "OS boot Options" is not
> > > accessible
> > > > > to the VM so the VR fail to be properly configured.
> > > > >
> > > > > I currently see 2 potential approaches for this:
> > > > > 1. Run a dhcpserver in dom0 managed by cloudstack so VR eth0 would
> > > > receive
> > > > > is network configuration at boot.
> > > > > 2. Change the current way of managing VR, SVMs on XenServer,
> > potentiall
> > > > do
> > > > > same has with VMware: use pod management networks and assign a POD
> IP
> > > to
> > > > > each VR.
> > > > >
> > > > > I don't know how it's implemented in KVM, maybe cloning KVM
> approach
> > > > would
> > > > > work too, could someone explain how it work on this thread?
> > > > >
> > > > > I'd a bit fan of a potential #2 aproach because it could facilitate
> > VR
> > > > > monitoring and logging, although a migration path for an existing
> > cloud
> > > > > could be complex.
> > > > >
> > > > > Cheers,
> > > > >
> > > > >
> > > > > Pierre-Luc
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Rafael Weingärtner
> > > >
> > >
> >
>
>
>
> --
> Rafael Weingärtner
>


Re: [DISCUSS] running sVM and VR as HVM on XenServer

2018-01-12 Thread Simon Weller
They do not. They receive a link-local IP address that is used for host agent
to VR communication. All VR commands are proxied through the host agent. Host
agent to VR communication is over SSH.
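
For anyone new to that path, it is roughly the following (a sketch only; the
port, key path, and script name are illustrative and not taken from this
thread):

// Rough sketch of the host-agent-to-VR hop: reach the VR over its link-local
// address and run a command via SSH. Port, key path, and script name are
// illustrative placeholders.
public class VrSshSketch {
    public static void main(String[] args) throws Exception {
        String linkLocalIp = "169.254.3.25"; // assigned from 169.254.0.0/16
        ProcessBuilder pb = new ProcessBuilder(
                "ssh", "-p", "3922",
                "-i", "/root/.ssh/id_rsa.cloud",
                "-o", "StrictHostKeyChecking=no",
                "root@" + linkLocalIp,
                "/opt/cloud/bin/update_config.py router.json"); // illustrative
        pb.inheritIO();
        int exit = pb.start().waitFor();
        System.out.println("VR command exit code: " + exit);
    }
}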



From: Rafael Weingärtner 
Sent: Friday, January 12, 2018 1:42 PM
To: dev
Subject: Re: [DISCUSS] running sVM and VR as HVM on XenServer

but we are already using this design in vmware deployments (not sure about
KVM). The management network is already an isolated network only used by
system vms and ACS. Unless we are attacked by some internal agent, we are
safe from customer attack through management networks. Also, we can (if we
don't do yet) restrict access only via these management interfaces in
system VMs(VRs, SSVM, console proxy and others to come).



Can someone confirm if VRs receive management IPs in KVM deployments?

On Fri, Jan 12, 2018 at 5:36 PM, Syed Ahmed  wrote:

> The reason why we used link local in the first place was to isolate the VR
> from directly accessing the management network. This provides another layer
> of security in case of a VR exploit. This will also have a side effect of
> making all VRs visible to each other. Are we okay accepting this?
>
> Thanks,
> -Syed
>
> On Fri, Jan 12, 2018 at 11:37 AM, Tim Mackey  wrote:
>
> > dom0 already has a DHCP server listening for requests on internal
> > management networks. I'd be wary trying to manage it from an external
> > service like cloudstack lest it get reset upon XenServer patch. This
> alone
> > makes me favor option #2. I also think option #2 simplifies network
> design
> > for users.
> >
> > Agreed on making this as consistent across flows as possible.
> >
> >
> >
> > On Fri, Jan 12, 2018 at 9:44 AM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > It looks reasonable to manage VRs via management IP network. We should
> > > focus on using the same work flow for different deployment scenarios.
> > >
> > >
> > > On Fri, Jan 12, 2018 at 12:13 PM, Pierre-Luc Dion 
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > We need to start a architecture discussion about running SystemVM and
> > > > Virtual-Router as HVM instances in XenServer. With recent
> > > Meltdown-Spectre,
> > > > one of the mitigation step is currently to run VMs as HVM on
> XenServer
> > to
> > > > self contain a user space attack from a guest OS.
> > > >
> > > > Recent hotfix from Citrix XenServer (XS71ECU1009) enforce VMs to
> start
> > > has
> > > > HVM. This is currently problematic for Virtual Routers and SystemVM
> > > because
> > > > CloudStack use PV "OS boot Options" to preconfigure the VR eth0:
> > > > cloud_link_local. While using HVM the "OS boot Options" is not
> > accessible
> > > > to the VM so the VR fail to be properly configured.
> > > >
> > > > I currently see 2 potential approaches for this:
> > > > 1. Run a dhcpserver in dom0 managed by cloudstack so VR eth0 would
> > > receive
> > > > is network configuration at boot.
> > > > 2. Change the current way of managing VR, SVMs on XenServer,
> potentiall
> > > do
> > > > same has with VMware: use pod management networks and assign a POD IP
> > to
> > > > each VR.
> > > >
> > > > I don't know how it's implemented in KVM, maybe cloning KVM approach
> > > would
> > > > work too, could someone explain how it work on this thread?
> > > >
> > > > I'd a bit fan of a potential #2 aproach because it could facilitate
> VR
> > > > monitoring and logging, although a migration path for an existing
> cloud
> > > > could be complex.
> > > >
> > > > Cheers,
> > > >
> > > >
> > > > Pierre-Luc
> > > >
> > >
> > >
> > >
> > > --
> > > Rafael Weingärtner
> > >
> >
>



--
Rafael Weingärtner


Re: [DISCUSS] Freezing master for 4.11

2018-01-12 Thread Rohit Yadav
Thanks Rafael and Daan.


> From: Rafael Weingärtner 
>
>I believe there is no problem in merging Wido’s and Mike’s PRs, they have
>been extensively discussed and improved (specially Mike’s one).

Thanks, Mike's PR has several regression smoketest failures and can be accepted 
only when those failures are fixed.

We'll cut the 4.11 branch and start RC1 on Monday; that will be a hard freeze.
If Mike wants, he can help fix them over the weekend, and I can help run
smoketests.

>Having said that; I would be ok with it (no need to revert it), but we need
>to be more careful with these things. If one wants to merge something,
>there is no harm in waiting and calling for reviewers via Github, Slack, or
>even email them directly.

Additional review was requested, but mea culpa - thanks for your support, noted.

- Rohit

On Fri, Jan 12, 2018 at 3:57 PM, Rohit Yadav 
wrote:

> All,
>
>
> We're down to one feature PR towards 4.11 milestone now:
>
> https://github.com/apache/cloudstack/pull/2298
>
>
> The config drive PR from Frank (Nuage) has been accepted today after no
> regression test failures seen from yesterday's smoketest run. We've also
> tested, reviewed and merge Wido's (blocker fix) PR.
>
>
> I've asked Mike to stabilize the branch; based on the smoketest results
> from today we can see some failures caused by the PR. I'm willing to work
> with Mike and others to get this PR tested, and merged over the weekends if
> we can demonstrate that no regression is caused by it, i.e. no new
> smoketest regressions. I'll also try to fix regression and test failures
> over the weekend.
>
>
> Lastly, I would like to discuss a mistake I made today with merging the
> following PR which per our guideline lacks one code review lgtm/approval:
>
> https://github.com/apache/cloudstack/pull/2152
>
>
> The changes in above (merged) PR are all localized to a xenserver-swift
> file, that is not tested by Travis or Trillian, since no new regression
> failures were seen I accepted and merge it on that discretion. The PR was
> originally on the 4.11 milestone, however, due to it lacking a JIRA id and
> no response from the author it was only recently removed from the milestone.
>
>
> Please advise if I need to revert this, or we can review/lgtm it
> post-merge? I'll also ping on the above PR.
>
>
> - Rohit
>
> 



>
>
>
> 
> From: Wido den Hollander 
> Sent: Thursday, January 11, 2018 9:17:26 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS] Freezing master for 4.11
>
>
>
> On 01/10/2018 07:26 PM, Daan Hoogland wrote:
> > I hope we understand each other correctly: No-one running an earlier
> > version then 4.11 should miss out on any functionality they are using
> now.
> >
> > So if you use ipv6 and multiple cidrs now it must continue to work with
> no
> > loss of functionality. see my question below.
> >
> > On Wed, Jan 10, 2018 at 7:06 PM, Ivan Kudryavtsev <
> kudryavtsev...@bw-sw.com>
> > wrote:
> >
> >> Daan, yes this sounds reasonable, I suppose who would like to fix, could
> >> do custom build for himself...
> >>
> >> But still it should be aknowledged somehow, if you use several cidrs for
> >> network, don't use v6, or don't upgrade to 4.11 because things will stop
> >> running well.
> >>
> > Does this mean that several cidrs in ipv6 works in 4.9 and not in 4.11?
> >
>
> No, it doesn't. IPv6 was introduced in 4.10 and this broke in 4.10.
>
> You can't run with 4.10 with multiple IPv4 CIDRs as well when you have
> IPv6 enabled.
>
> So this is broken in 4.10 and 4.11 in that case.
>
> Wido
>
> >
> > if yes; it is a blocker
> >
> > if no; you might as well upgrade for other features as it doesn't work
> now
> > either.
> >
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com



> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


--
Rafael Weingärtner

rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 



Re: [DISCUSS] running sVM and VR as HVM on XenServer

2018-01-12 Thread Rafael Weingärtner
But we are already using this design in VMware deployments (not sure about
KVM). The management network is already an isolated network used only by
system VMs and ACS. Unless we are attacked by some internal agent, we are
safe from customer attacks through the management network. Also, we can (if we
don't do so yet) restrict access to system VMs (VRs, SSVM, console proxy, and
others to come) to only these management interfaces.



Can someone confirm if VRs receive management IPs in KVM deployments?

On Fri, Jan 12, 2018 at 5:36 PM, Syed Ahmed  wrote:

> The reason why we used link local in the first place was to isolate the VR
> from directly accessing the management network. This provides another layer
> of security in case of a VR exploit. This will also have a side effect of
> making all VRs visible to each other. Are we okay accepting this?
>
> Thanks,
> -Syed
>
> On Fri, Jan 12, 2018 at 11:37 AM, Tim Mackey  wrote:
>
> > dom0 already has a DHCP server listening for requests on internal
> > management networks. I'd be wary trying to manage it from an external
> > service like cloudstack lest it get reset upon XenServer patch. This
> alone
> > makes me favor option #2. I also think option #2 simplifies network
> design
> > for users.
> >
> > Agreed on making this as consistent across flows as possible.
> >
> >
> >
> > On Fri, Jan 12, 2018 at 9:44 AM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > It looks reasonable to manage VRs via management IP network. We should
> > > focus on using the same work flow for different deployment scenarios.
> > >
> > >
> > > On Fri, Jan 12, 2018 at 12:13 PM, Pierre-Luc Dion 
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > We need to start a architecture discussion about running SystemVM and
> > > > Virtual-Router as HVM instances in XenServer. With recent
> > > Meltdown-Spectre,
> > > > one of the mitigation step is currently to run VMs as HVM on
> XenServer
> > to
> > > > self contain a user space attack from a guest OS.
> > > >
> > > > Recent hotfix from Citrix XenServer (XS71ECU1009) enforce VMs to
> start
> > > has
> > > > HVM. This is currently problematic for Virtual Routers and SystemVM
> > > because
> > > > CloudStack use PV "OS boot Options" to preconfigure the VR eth0:
> > > > cloud_link_local. While using HVM the "OS boot Options" is not
> > accessible
> > > > to the VM so the VR fail to be properly configured.
> > > >
> > > > I currently see 2 potential approaches for this:
> > > > 1. Run a dhcpserver in dom0 managed by cloudstack so VR eth0 would
> > > receive
> > > > is network configuration at boot.
> > > > 2. Change the current way of managing VR, SVMs on XenServer,
> potentiall
> > > do
> > > > same has with VMware: use pod management networks and assign a POD IP
> > to
> > > > each VR.
> > > >
> > > > I don't know how it's implemented in KVM, maybe cloning KVM approach
> > > would
> > > > work too, could someone explain how it work on this thread?
> > > >
> > > > I'd a bit fan of a potential #2 aproach because it could facilitate
> VR
> > > > monitoring and logging, although a migration path for an existing
> cloud
> > > > could be complex.
> > > >
> > > > Cheers,
> > > >
> > > >
> > > > Pierre-Luc
> > > >
> > >
> > >
> > >
> > > --
> > > Rafael Weingärtner
> > >
> >
>



-- 
Rafael Weingärtner


Re: [DISCUSS] running sVM and VR as HVM on XenServer

2018-01-12 Thread Syed Ahmed
The reason why we used link local in the first place was to isolate the VR
from directly accessing the management network. This provides another layer
of security in case of a VR exploit. This will also have a side effect of
making all VRs visible to each other. Are we okay accepting this?

Thanks,
-Syed

On Fri, Jan 12, 2018 at 11:37 AM, Tim Mackey  wrote:

> dom0 already has a DHCP server listening for requests on internal
> management networks. I'd be wary trying to manage it from an external
> service like cloudstack lest it get reset upon XenServer patch. This alone
> makes me favor option #2. I also think option #2 simplifies network design
> for users.
>
> Agreed on making this as consistent across flows as possible.
>
>
>
> On Fri, Jan 12, 2018 at 9:44 AM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
>
> > It looks reasonable to manage VRs via management IP network. We should
> > focus on using the same work flow for different deployment scenarios.
> >
> >
> > On Fri, Jan 12, 2018 at 12:13 PM, Pierre-Luc Dion 
> > wrote:
> >
> > > Hi,
> > >
> > > We need to start a architecture discussion about running SystemVM and
> > > Virtual-Router as HVM instances in XenServer. With recent
> > Meltdown-Spectre,
> > > one of the mitigation step is currently to run VMs as HVM on XenServer
> to
> > > self contain a user space attack from a guest OS.
> > >
> > > Recent hotfix from Citrix XenServer (XS71ECU1009) enforce VMs to start
> > has
> > > HVM. This is currently problematic for Virtual Routers and SystemVM
> > because
> > > CloudStack use PV "OS boot Options" to preconfigure the VR eth0:
> > > cloud_link_local. While using HVM the "OS boot Options" is not
> accessible
> > > to the VM so the VR fail to be properly configured.
> > >
> > > I currently see 2 potential approaches for this:
> > > 1. Run a dhcpserver in dom0 managed by cloudstack so VR eth0 would
> > receive
> > > is network configuration at boot.
> > > 2. Change the current way of managing VR, SVMs on XenServer, potentiall
> > do
> > > same has with VMware: use pod management networks and assign a POD IP
> to
> > > each VR.
> > >
> > > I don't know how it's implemented in KVM, maybe cloning KVM approach
> > would
> > > work too, could someone explain how it work on this thread?
> > >
> > > I'd a bit fan of a potential #2 aproach because it could facilitate VR
> > > monitoring and logging, although a migration path for an existing cloud
> > > could be complex.
> > >
> > > Cheers,
> > >
> > >
> > > Pierre-Luc
> > >
> >
> >
> >
> > --
> > Rafael Weingärtner
> >
>


Re: [DISCUSS] Freezing master for 4.11

2018-01-12 Thread Daan Hoogland
I answered something similar to Rafael's answer here, on the PR itself.

On Fri, Jan 12, 2018 at 7:21 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> I believe there is no problem in merging Wido’s and Mike’s PRs, they have
> been extensively discussed and improved (specially Mike’s one).
>
> I noticed the merge of #2152 today morning, but crying over spilled milk
> does not help anything… The code seems to be ok, maybe those variable names
> `cmd` and `cmd2` could benefit from something more descriptive; also, the
> magic number `1024`.
>
> Having said that; I would be ok with it (no need to revert it), but we need
> to be more careful with these things. If one wants to merge something,
> there is no harm in waiting and calling for reviewers via Github, Slack, or
> even email them directly.
>
> On Fri, Jan 12, 2018 at 3:57 PM, Rohit Yadav 
> wrote:
>
> > All,
> >
> >
> > We're down to one feature PR towards 4.11 milestone now:
> >
> > https://github.com/apache/cloudstack/pull/2298
> >
> >
> > The config drive PR from Frank (Nuage) has been accepted today after no
> > regression test failures seen from yesterday's smoketest run. We've also
> > tested, reviewed and merge Wido's (blocker fix) PR.
> >
> >
> > I've asked Mike to stabilize the branch; based on the smoketest results
> > from today we can see some failures caused by the PR. I'm willing to work
> > with Mike and others to get this PR tested, and merged over the weekends
> if
> > we can demonstrate that no regression is caused by it, i.e. no new
> > smoketest regressions. I'll also try to fix regression and test failures
> > over the weekend.
> >
> >
> > Lastly, I would like to discuss a mistake I made today with merging the
> > following PR which per our guideline lacks one code review lgtm/approval:
> >
> > https://github.com/apache/cloudstack/pull/2152
> >
> >
> > The changes in above (merged) PR are all localized to a xenserver-swift
> > file, that is not tested by Travis or Trillian, since no new regression
> > failures were seen I accepted and merge it on that discretion. The PR was
> > originally on the 4.11 milestone, however, due to it lacking a JIRA id
> and
> > no response from the author it was only recently removed from the
> milestone.
> >
> >
> > Please advise if I need to revert this, or we can review/lgtm it
> > post-merge? I'll also ping on the above PR.
> >
> >
> > - Rohit
> >
> > 
> >
> >
> >
> > 
> > From: Wido den Hollander 
> > Sent: Thursday, January 11, 2018 9:17:26 PM
> > To: dev@cloudstack.apache.org
> > Subject: Re: [DISCUSS] Freezing master for 4.11
> >
> >
> >
> > On 01/10/2018 07:26 PM, Daan Hoogland wrote:
> > > I hope we understand each other correctly: No-one running an earlier
> > > version then 4.11 should miss out on any functionality they are using
> > now.
> > >
> > > So if you use ipv6 and multiple cidrs now it must continue to work with
> > no
> > > loss of functionality. see my question below.
> > >
> > > On Wed, Jan 10, 2018 at 7:06 PM, Ivan Kudryavtsev <
> > kudryavtsev...@bw-sw.com>
> > > wrote:
> > >
> > >> Daan, yes this sounds reasonable, I suppose who would like to fix,
> could
> > >> do custom build for himself...
> > >>
> > >> But still it should be aknowledged somehow, if you use several cidrs
> for
> > >> network, don't use v6, or don't upgrade to 4.11 because things will
> stop
> > >> running well.
> > >>
> > > ​Does this mean that several cidrs in ipv6 works in 4.9 and not in
> 4.11?
> > >
> >
> > No, it doesn't. IPv6 was introduced in 4.10 and this broke in 4.10.
> >
> > You can't run with 4.10 with multiple IPv4 CIDRs as well when you have
> > IPv6 enabled.
> >
> > So this is broken in 4.10 and 4.11 in that case.
> >
> > Wido
> >
> > >
> > > if yes; it is a blocker
> > >
> > > if no; you might as well upgrade for other features as it doesn't work
> > now
> > > either.
> > >
> >
> > rohit.ya...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> >
>
>
> --
> Rafael Weingärtner
>



-- 
Daan


Re: [DISCUSS] Freezing master for 4.11

2018-01-12 Thread Rafael Weingärtner
I believe there is no problem in merging Wido’s and Mike’s PRs; they have
been extensively discussed and improved (especially Mike’s).

I noticed the merge of #2152 this morning, but crying over spilled milk
does not help anything… The code seems to be OK; maybe those variable names
`cmd` and `cmd2` could benefit from something more descriptive, and so could
the magic number `1024`.

Having said that, I would be OK with it (no need to revert it), but we need
to be more careful with these things. If one wants to merge something,
there is no harm in waiting and calling for reviewers via GitHub or Slack, or
even emailing them directly.

On Fri, Jan 12, 2018 at 3:57 PM, Rohit Yadav 
wrote:

> All,
>
>
> We're down to one feature PR towards 4.11 milestone now:
>
> https://github.com/apache/cloudstack/pull/2298
>
>
> The config drive PR from Frank (Nuage) has been accepted today after no
> regression test failures seen from yesterday's smoketest run. We've also
> tested, reviewed and merge Wido's (blocker fix) PR.
>
>
> I've asked Mike to stabilize the branch; based on the smoketest results
> from today we can see some failures caused by the PR. I'm willing to work
> with Mike and others to get this PR tested, and merged over the weekends if
> we can demonstrate that no regression is caused by it, i.e. no new
> smoketest regressions. I'll also try to fix regression and test failures
> over the weekend.
>
>
> Lastly, I would like to discuss a mistake I made today with merging the
> following PR which per our guideline lacks one code review lgtm/approval:
>
> https://github.com/apache/cloudstack/pull/2152
>
>
> The changes in above (merged) PR are all localized to a xenserver-swift
> file, that is not tested by Travis or Trillian, since no new regression
> failures were seen I accepted and merge it on that discretion. The PR was
> originally on the 4.11 milestone, however, due to it lacking a JIRA id and
> no response from the author it was only recently removed from the milestone.
>
>
> Please advise if I need to revert this, or we can review/lgtm it
> post-merge? I'll also ping on the above PR.
>
>
> - Rohit
>
> 
>
>
>
> 
> From: Wido den Hollander 
> Sent: Thursday, January 11, 2018 9:17:26 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS] Freezing master for 4.11
>
>
>
> On 01/10/2018 07:26 PM, Daan Hoogland wrote:
> > I hope we understand each other correctly: No-one running an earlier
> > version then 4.11 should miss out on any functionality they are using
> now.
> >
> > So if you use ipv6 and multiple cidrs now it must continue to work with
> no
> > loss of functionality. see my question below.
> >
> > On Wed, Jan 10, 2018 at 7:06 PM, Ivan Kudryavtsev <
> kudryavtsev...@bw-sw.com>
> > wrote:
> >
> >> Daan, yes this sounds reasonable, I suppose who would like to fix, could
> >> do custom build for himself...
> >>
> >> But still it should be aknowledged somehow, if you use several cidrs for
> >> network, don't use v6, or don't upgrade to 4.11 because things will stop
> >> running well.
> >>
> > ​Does this mean that several cidrs in ipv6 works in 4.9 and not in 4.11?
> >
>
> No, it doesn't. IPv6 was introduced in 4.10 and this broke in 4.10.
>
> You can't run with 4.10 with multiple IPv4 CIDRs as well when you have
> IPv6 enabled.
>
> So this is broken in 4.10 and 4.11 in that case.
>
> Wido
>
> >
> > if yes; it is a blocker
> >
> > if no; you might as well upgrade for other features as it doesn't work
> now
> > either.
> >
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


-- 
Rafael Weingärtner


Re: [DISCUSS] Freezing master for 4.11

2018-01-12 Thread Rohit Yadav
All,


We're down to one feature PR towards 4.11 milestone now:

https://github.com/apache/cloudstack/pull/2298


The config drive PR from Frank (Nuage) has been accepted today after no
regression test failures were seen in yesterday's smoketest run. We've also
tested, reviewed, and merged Wido's (blocker fix) PR.


I've asked Mike to stabilize the branch; based on the smoketest results from
today we can see some failures caused by the PR. I'm willing to work with Mike
and others to get this PR tested and merged over the weekend if we can
demonstrate that no regression is caused by it, i.e. no new smoketest
regressions. I'll also try to fix regression and test failures over the weekend.


Lastly, I would like to discuss a mistake I made today with merging the
following PR, which per our guidelines lacks one code review LGTM/approval:

https://github.com/apache/cloudstack/pull/2152


The changes in the above (merged) PR are all localized to a xenserver-swift
file that is not tested by Travis or Trillian; since no new regression failures
were seen, I accepted and merged it at my discretion. The PR was originally on
the 4.11 milestone; however, due to it lacking a JIRA ID and no response from
the author, it was only recently removed from the milestone.


Please advise whether I need to revert this, or whether we can review/LGTM it
post-merge. I'll also ping on the above PR.


- Rohit






From: Wido den Hollander 
Sent: Thursday, January 11, 2018 9:17:26 PM
To: dev@cloudstack.apache.org
Subject: Re: [DISCUSS] Freezing master for 4.11



On 01/10/2018 07:26 PM, Daan Hoogland wrote:
> I hope we understand each other correctly: No-one running an earlier
> version then 4.11 should miss out on any functionality they are using now.
>
> So if you use ipv6 and multiple cidrs now it must continue to work with no
> loss of functionality. see my question below.
>
> On Wed, Jan 10, 2018 at 7:06 PM, Ivan Kudryavtsev 
> wrote:
>
>> Daan, yes this sounds reasonable, I suppose who would like to fix, could
>> do custom build for himself...
>>
>> But still it should be aknowledged somehow, if you use several cidrs for
>> network, don't use v6, or don't upgrade to 4.11 because things will stop
>> running well.
>>
> ​Does this mean that several cidrs in ipv6 works in 4.9 and not in 4.11?
>

No, it doesn't. IPv6 was introduced in 4.10 and this broke in 4.10.

You can't run with 4.10 with multiple IPv4 CIDRs as well when you have
IPv6 enabled.

So this is broken in 4.10 and 4.11 in that case.

Wido

>
> if yes; it is a blocker
>
> if no; you might as well upgrade for other features as it doesn't work now
> either.
>

rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 



Re: [DISCUSS] running sVM and VR as HVM on XenServer

2018-01-12 Thread Tim Mackey
dom0 already has a DHCP server listening for requests on internal
management networks. I'd be wary of trying to manage it from an external
service like CloudStack lest it get reset by a XenServer patch. This alone
makes me favor option #2. I also think option #2 simplifies network design
for users.

Agreed on making this as consistent across flows as possible.



On Fri, Jan 12, 2018 at 9:44 AM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> It looks reasonable to manage VRs via management IP network. We should
> focus on using the same work flow for different deployment scenarios.
>
>
> On Fri, Jan 12, 2018 at 12:13 PM, Pierre-Luc Dion 
> wrote:
>
> > Hi,
> >
> > We need to start a architecture discussion about running SystemVM and
> > Virtual-Router as HVM instances in XenServer. With recent
> Meltdown-Spectre,
> > one of the mitigation step is currently to run VMs as HVM on XenServer to
> > self contain a user space attack from a guest OS.
> >
> > Recent hotfix from Citrix XenServer (XS71ECU1009) enforce VMs to start
> has
> > HVM. This is currently problematic for Virtual Routers and SystemVM
> because
> > CloudStack use PV "OS boot Options" to preconfigure the VR eth0:
> > cloud_link_local. While using HVM the "OS boot Options" is not accessible
> > to the VM so the VR fail to be properly configured.
> >
> > I currently see 2 potential approaches for this:
> > 1. Run a dhcpserver in dom0 managed by cloudstack so VR eth0 would
> receive
> > is network configuration at boot.
> > 2. Change the current way of managing VR, SVMs on XenServer, potentiall
> do
> > same has with VMware: use pod management networks and assign a POD IP to
> > each VR.
> >
> > I don't know how it's implemented in KVM, maybe cloning KVM approach
> would
> > work too, could someone explain how it work on this thread?
> >
> > I'd a bit fan of a potential #2 aproach because it could facilitate VR
> > monitoring and logging, although a migration path for an existing cloud
> > could be complex.
> >
> > Cheers,
> >
> >
> > Pierre-Luc
> >
>
>
>
> --
> Rafael Weingärtner
>


Re: [DISCUSS] running sVM and VR as HVM on XenServer

2018-01-12 Thread Rafael Weingärtner
It looks reasonable to manage VRs via the management IP network. We should
focus on using the same workflow for different deployment scenarios.


On Fri, Jan 12, 2018 at 12:13 PM, Pierre-Luc Dion 
wrote:

> Hi,
>
> We need to start a architecture discussion about running SystemVM and
> Virtual-Router as HVM instances in XenServer. With recent Meltdown-Spectre,
> one of the mitigation step is currently to run VMs as HVM on XenServer to
> self contain a user space attack from a guest OS.
>
> Recent hotfix from Citrix XenServer (XS71ECU1009) enforce VMs to start has
> HVM. This is currently problematic for Virtual Routers and SystemVM because
> CloudStack use PV "OS boot Options" to preconfigure the VR eth0:
> cloud_link_local. While using HVM the "OS boot Options" is not accessible
> to the VM so the VR fail to be properly configured.
>
> I currently see 2 potential approaches for this:
> 1. Run a dhcpserver in dom0 managed by cloudstack so VR eth0 would receive
> is network configuration at boot.
> 2. Change the current way of managing VR, SVMs on XenServer, potentiall do
> same has with VMware: use pod management networks and assign a POD IP to
> each VR.
>
> I don't know how it's implemented in KVM, maybe cloning KVM approach would
> work too, could someone explain how it work on this thread?
>
> I'd a bit fan of a potential #2 aproach because it could facilitate VR
> monitoring and logging, although a migration path for an existing cloud
> could be complex.
>
> Cheers,
>
>
> Pierre-Luc
>



-- 
Rafael Weingärtner


[DISCUSS] running sVM and VR as HVM on XenServer

2018-01-12 Thread Pierre-Luc Dion
Hi,

We need to start an architecture discussion about running SystemVMs and
Virtual Routers as HVM instances on XenServer. With the recent Meltdown-Spectre
disclosures, one of the mitigation steps is currently to run VMs as HVM on
XenServer to contain a user-space attack from a guest OS.

A recent hotfix from Citrix XenServer (XS71ECU1009) enforces that VMs start as
HVM. This is currently problematic for Virtual Routers and SystemVMs because
CloudStack uses the PV "OS boot Options" to preconfigure the VR's eth0:
cloud_link_local. When using HVM, the "OS boot Options" are not accessible to
the VM, so the VR fails to be properly configured.
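
To make the failure mode concrete: under PV that boot-options string ends up on
the guest kernel command line, so the systemvm scripts can parse key=value
pairs out of /proc/cmdline; under HVM there is no such injection point. A
minimal parsing sketch (the parameter names are illustrative, not necessarily
the exact keys CloudStack passes):

// Sketch: parse key=value boot parameters the way a PV systemvm can, by reading
// /proc/cmdline. Under HVM this file only contains what the template's own
// bootloader put there, so these parameters never arrive.
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class BootArgsSketch {
    static Map<String, String> parse(String cmdline) {
        Map<String, String> params = new HashMap<>();
        for (String token : cmdline.trim().split("\\s+")) {
            int eq = token.indexOf('=');
            if (eq > 0) {
                params.put(token.substring(0, eq), token.substring(eq + 1));
            }
        }
        return params;
    }

    public static void main(String[] args) throws Exception {
        String cmdline = new String(Files.readAllBytes(Paths.get("/proc/cmdline")));
        Map<String, String> params = parse(cmdline);
        // Illustrative parameter names only.
        System.out.println("eth0ip = " + params.get("eth0ip"));
        System.out.println("type   = " + params.get("type"));
    }
}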

I currently see 2 potential approaches for this:
1. Run a DHCP server in dom0, managed by CloudStack, so the VR's eth0 would
receive its network configuration at boot.
2. Change the current way of managing VRs and SVMs on XenServer, potentially
doing the same as with VMware: use pod management networks and assign a pod IP
to each VR.

I don't know how it's implemented in KVM; maybe cloning the KVM approach would
work too. Could someone explain on this thread how it works?

I'm a bit of a fan of approach #2 because it could facilitate VR monitoring and
logging, although a migration path for an existing cloud could be complex.

Cheers,


Pierre-Luc


Re: [PROPOSE] EOL for supported OSes & Hypervisors

2018-01-12 Thread Rohit Yadav
+1. I've updated the page with the upcoming Ubuntu 18.04 LTS.


After 4.11, I think 4.12 (assuming it releases by mid-2018) should remove
"declared" support for the following (they might still work with 4.12+, but in
the docs and as a project we would no longer officially support them):


a. Hypervisor:

XenServer - 6.2, 6.5,

KVM - CentOS6, RHEL6, Ubuntu12.04 (I think this is already removed, packages 
don't work I think?)

vSphere/Vmware - 4.x, 5.0, 5.1, 5.5


b. Remove packaging for CentOS6.x, RHEL 6.x (the el6 packages), and Ubuntu 
12.04 (any non-systemd debian distro).


Thoughts, comments?


- Rohit






From: Paul Angus 
Sent: Thursday, January 11, 2018 10:41:08 PM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: [PROPOSE] EOL for supported OSes & Hypervisors

I've cross-posted this as it ultimately affects users more than developers.

I've created a wiki page with the EOL dates from the respective 'vendors' of 
our main supported hypervisors and mgmt. server OSes.
I've taken End Of Life to be the end of 'mainstream' support i.e. the point at 
which updates to packages will no longer be available.  And part of the 
discussion should be whether this EOL date should be moved out to consider end 
of security patching instead.

https://cwiki.apache.org/confluence/display/CLOUDSTACK/WIP+-+UNOFFICIAL+-+PROPOSAL+-+EOL+Dates

I would like to propose, that as part of the release notes for the forthcoming 
4.11 release and as a general announcement, that we declare that:


  *   For any OSes/Hypervisors that are already EOL - we will no longer 
test/support them from the first release after June 2018 (6 months from now). 
And they will be removed from codebase (mainly the database) in the first 
release after Sept 2018 (9 months from now).
  *   We set End Of Support dates and Removal from Code dates for the remaining 
OSes/Hypervisors.  I propose that End Of Support should be the first release 
after EOL from the vendor, with code removal taking place in the first release 
which occurs after 6 months from 'vendor' EOL date.

Thoughts please


Kind regards,

Paul Angus


paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue




rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 



Re: [PROPOSE] EOL for supported OSes & Hypervisors

2018-01-12 Thread Pierre-Luc Dion
+1!

Do you think it would be the right page to also list the Debian version used by
the SSVM?

For the management-server section, would the CloudStack column list the last
ACS version tested on that OS?


On Jan 11, 2018, at 12:53, "Will Stevens"  wrote:

> I like this initiative.  I think this would be valuable to set an
> expectation around supportability.
>
> Cheers,
>
> *Will Stevens*
> CTO
>
> 
>
> On Thu, Jan 11, 2018 at 12:11 PM, Paul Angus 
> wrote:
>
> > I've cross-posted this as it ultimately effects users more than
> developers.
> >
> > I've created a wiki page with the EOL dates from the respective 'vendors'
> > of our main supported hypervisors and mgmt. server OSes.
> > I've taken End Of Life to be the end of 'mainstream' support i.e. the
> > point at which updates to packages will no longer be available.  And part
> > of the discussion should be whether this EOL date should be moved out to
> > consider end of security patching instead.
> >
> > https://cwiki.apache.org/confluence/display/CLOUDSTACK/
> > WIP+-+UNOFFICIAL+-+PROPOSAL+-+EOL+Dates
> >
> > I would like to propose, that as part of the release notes for the
> > forthcoming 4.11 release and as a general announcement, that we declare
> > that:
> >
> >
> >   *   For any OSes/Hypervisors that are already EOL - we will no longer
> > test/support them from the first release after June 2018 (6 months from
> > now). And they will be removed from codebase (mainly the database) in the
> > first release after Sept 2018 (9 months from now).
> >   *   We set End Of Support dates and Removal from Code dates for the
> > remaining OSes/Hypervisors.  I propose that End Of Support should be the
> > first release after EOL from the vendor, with code removal taking place
> in
> > the first release which occurs after 6 months from 'vendor' EOL date.
> >
> > Thoughts please
> >
> >
> > Kind regards,
> >
> > Paul Angus
> >
> >
> > paul.an...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> >
>


Re: Which StringUtils to use?

2018-01-12 Thread Rafael Weingärtner
Well, there are always other approaches... If we did not use those static
loggers, this number could be greatly reduced. Most of those objects are
singletons, and we could use a protected attribute in the first element of
the hierarchy.

I do not mind a PR with this number of changed files as long as you stick
to a single change; what I mind is the combination of a high number of files
and commits. Then, at least for me, it becomes pretty hard to track things
down.
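
For illustration, the logger idea is roughly this (a sketch only, assuming
log4j 1.x as the backing framework; the class names are made up): a protected
instance logger at the top of a hierarchy replaces a private static logger per
class, so a later framework switch is confined to one place.

// Sketch of the "protected logger in the base class" idea. Subclasses inherit
// one logger field instead of each declaring a private static logger.
// Class names are illustrative; assumes log4j 1.x.
import org.apache.log4j.Logger;

public abstract class BaseManager {
    // The only place that knows about the concrete logging framework.
    protected final Logger logger = Logger.getLogger(getClass());
}

class SomeResourceManager extends BaseManager {
    void doWork() {
        // getClass() in the base class keeps the per-subclass logger name.
        logger.debug("doing work in " + getClass().getSimpleName());
    }
}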

On Fri, Jan 12, 2018 at 6:19 AM, Daan Hoogland 
wrote:

> if we don't use a wrapper we get PRs like
> https://github.com/apache/cloudstack/pull/2276 in the future, trying to
> update logging touches 1710 files. I think we should go for the wrapper
> model on these kind of utilities.
>
> On Thu, Jan 11, 2018 at 9:59 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
>
> > Wrapping would still hold code on our side. We have to get rid of code…
> >
> > If we want to start removing CloudStack’s StringUtils in favor of
> > StringUtils from Apache, we could start creating PRs by components (java
> > project in Eclipse). That is manageable to do and to review. There are
> > about 119 classes that use CloudStack’s StringUtils.
> >
> >
> > We will not be able to remove CloudStack's StringUtils though. There are
> > very specific things there such as "applyPagination" that should not even
> > be there... I guess the programmer was running out of places to write
> code
> >
> > On Thu, Jan 11, 2018 at 6:25 PM, Daan Hoogland 
> > wrote:
> >
> > > All, I am having second thoughts. I think we should maintain a wrapper
> > for
> > > string utils and pass through as much as possible to commons string
> > utils.
> > > A similar thing is applicable to logging. It was started at one time
> and
> > a
> > > second attempt was started to use slf4j.
> > > I think we should encapsulate these kind of utilities to facilitate
> > > migration.
> > > There is also json and xml formatting and maybe handling sockets and
> (big
> > > one) data access objects :/
> > >
> > > @Ron, all string utils are static methods.
> > >
> > > On Thu, Jan 11, 2018 at 12:11 AM, Ron Wheeler
> >  > > com> wrote:
> > >
> > > > Certainly better to find the references and remove them if you can
> get
> > > > that done in a single effort.
> > > >
> > > > Just a technical question: Could one not just add the Warning to the
> > > > constructor?
> > > > Might have to create a null (log warning only) constructor.
> > > >
> > > > Ron
> > > >
> > > >
> > > > On 10/01/2018 3:58 PM, Daan Hoogland wrote:
> > > >
> > > >> We can add log messages to each of the methods in StringUtils but I
> do
> > > not
> > > >> think that is a good way to go. Any method you touch you can reform
> or
> > > >> remove anyhow.
> > > >>
> > > >> On Wed, Jan 10, 2018 at 9:51 PM, Ron Wheeler <
> > > >> rwhee...@artifact-software.com
> > > >>
> > > >>> wrote:
> > > >>> Agreed about deprecation.
> > > >>> A logged WARNing would be detected during testing as well as at
> > > run-time.
> > > >>>
> > > >>> Ron
> > > >>>
> > > >>> On 10/01/2018 3:34 PM, Daan Hoogland wrote:
> > > >>>
> > > >>> Ron, we could but that would only log during compile-time, not on
> > > >>> runtime.
> > > >>> I am doing some analysis and commenting in Wido's ticket.
> > > >>>
> > > >>> On Wed, Jan 10, 2018 at 9:23 PM, Ron Wheeler
> > >  > > >>> com> wrote:
> > > >>>
> > > >>> Is it possible to mark it as deprecated and have it log a warning
> > when
> > >  used?
> > > 
> > >  Ron
> > > 
> > > 
> > >  On 10/01/2018 2:26 PM, Daan Hoogland wrote:
> > > 
> > >  I think we could start with giving it an explicit non standard
> name
> > > like
> > > > CloudStackLocalStringUtils or something a little shorter. Making
> > sure
> > > > that
> > > > we prefer for these types of utils to be imported from other
> > > projects.
> > > >
> > > > On Wed, Jan 10, 2018 at 4:26 PM, Wido den Hollander <
> > w...@widodh.nl>
> > > > wrote:
> > > >
> > > >
> > > > On 01/10/2018 01:09 PM, Rafael Weingärtner wrote:
> > > >>
> > > >> Instead of creating a PR for that, we could do the bit by bit
> job
> > > >>
> > > >>> (hopefully one day we finish the job).
> > > >>> Every time we see a code using ACS's StringUtils, we check if
> it
> > > can
> > > >>> be
> > > >>> replaced by Apache's one.
> > > >>>
> > > >>>
> > > >>> Yes, but that will slip from peoples attention and we will
> > probably
> > > >>> see
> > > >>>
> > > >> cases where people still use the old one by accident.
> > > >>
> > > >> I've created a issue: https://issues.apache.org/jira
> > > >> /browse/CLOUDSTACK-10225
> > > >>
> > > >> I also started on some low hanging fruit as some methods in
> > > >> StringUtils
> > > >> are not used or are very easy to replace.
> 

Re: Which StringUtils to use?

2018-01-12 Thread Daan Hoogland
If we don't use a wrapper, we get PRs like
https://github.com/apache/cloudstack/pull/2276 in the future; trying to
update logging touches 1710 files. I think we should go for the wrapper
model for these kinds of utilities.
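
A minimal sketch of the wrapper idea (assuming commons-lang3 on the classpath;
the class name and methods are illustrative, not an existing CloudStack class):
generic string handling delegates to Apache Commons, CloudStack-specific
helpers stay local, and a later switch of the backing library touches one file
instead of ~1700.

// Sketch of the wrapper/facade approach. Callers depend on this class only,
// so swapping the backing implementation later is a one-file change.
public final class CloudStackStringUtils {

    private CloudStackStringUtils() {
    }

    // Pure pass-throughs to Apache Commons Lang.
    public static boolean isBlank(String s) {
        return org.apache.commons.lang3.StringUtils.isBlank(s);
    }

    public static String join(Iterable<?> parts, String separator) {
        return org.apache.commons.lang3.StringUtils.join(parts, separator);
    }

    // CloudStack-specific logic with no Commons equivalent stays here.
    public static String cleanSecretsForLogging(String input) {
        // Illustrative placeholder only.
        return input == null ? null
                : input.replaceAll("password=[^&\\s]*", "password=***");
    }
}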

On Thu, Jan 11, 2018 at 9:59 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Wrapping would still hold code on our side. We have to get rid of code…
>
> If we want to start removing CloudStack’s StringUtils in favor of
> StringUtils from Apache, we could start creating PRs by components (java
> project in Eclipse). That is manageable to do and to review. There are
> about 119 classes that use CloudStack’s StringUtils.
>
>
> We will not be able to remove CloudStack's StringUtils though. There are
> very specific things there such as "applyPagination" that should not even
> be there... I guess the programmer was running out of places to write code
>
> On Thu, Jan 11, 2018 at 6:25 PM, Daan Hoogland 
> wrote:
>
> > All, I am having second thoughts. I think we should maintain a wrapper
> for
> > string utils and pass through as much as possible to commons string
> utils.
> > A similar thing is applicable to logging. It was started at one time and
> a
> > second attempt was started to use slf4j.
> > I think we should encapsulate these kind of utilities to facilitate
> > migration.
> > There is also json and xml formatting and maybe handling sockets and (big
> > one) data access objects :/
> >
> > @Ron, all string utils are static methods.
> >
> > On Thu, Jan 11, 2018 at 12:11 AM, Ron Wheeler
>  > com> wrote:
> >
> > > Certainly better to find the references and remove them if you can get
> > > that done in a single effort.
> > >
> > > Just a technical question: Could one not just add the Warning to the
> > > constructor?
> > > Might have to create a null (log warning only) constructor.
> > >
> > > Ron
> > >
> > >
> > > On 10/01/2018 3:58 PM, Daan Hoogland wrote:
> > >
> > >> We can add log messages to each of the methods in StringUtils but I do
> > not
> > >> think that is a good way to go. Any method you touch you can reform or
> > >> remove anyhow.
> > >>
> > >> On Wed, Jan 10, 2018 at 9:51 PM, Ron Wheeler <
> > >> rwhee...@artifact-software.com
> > >>
> > >>> wrote:
> > >>> Agreed about deprecation.
> > >>> A logged WARNing would be detected during testing as well as at
> > run-time.
> > >>>
> > >>> Ron
> > >>>
> > >>> On 10/01/2018 3:34 PM, Daan Hoogland wrote:
> > >>>
> > >>> Ron, we could but that would only log during compile-time, not on
> > >>> runtime.
> > >>> I am doing some analysis and commenting in Wido's ticket.
> > >>>
> > >>> On Wed, Jan 10, 2018 at 9:23 PM, Ron Wheeler
> >  > >>> com> wrote:
> > >>>
> > >>> Is it possible to mark it as deprecated and have it log a warning
> when
> >  used?
> > 
> >  Ron
> > 
> > 
> >  On 10/01/2018 2:26 PM, Daan Hoogland wrote:
> > 
> >  I think we could start with giving it an explicit non standard name
> > like
> > > CloudStackLocalStringUtils or something a little shorter. Making
> sure
> > > that
> > > we prefer for these types of utils to be imported from other
> > projects.
> > >
> > > On Wed, Jan 10, 2018 at 4:26 PM, Wido den Hollander <
> w...@widodh.nl>
> > > wrote:
> > >
> > >
> > > On 01/10/2018 01:09 PM, Rafael Weingärtner wrote:
> > >>
> > >> Instead of creating a PR for that, we could do the bit by bit job
> > >>
> > >>> (hopefully one day we finish the job).
> > >>> Every time we see a code using ACS's StringUtils, we check if it
> > can
> > >>> be
> > >>> replaced by Apache's one.
> > >>>
> > >>>
> > >>> Yes, but that will slip from peoples attention and we will
> probably
> > >>> see
> > >>>
> > >> cases where people still use the old one by accident.
> > >>
> > >> I've created a issue: https://issues.apache.org/jira
> > >> /browse/CLOUDSTACK-10225
> > >>
> > >> I also started on some low hanging fruit as some methods in
> > >> StringUtils
> > >> are not used or are very easy to replace.
> > >>
> > >>
> > >> Wido
> > >>
> > >> On Wed, Jan 10, 2018 at 10:01 AM, Wido den Hollander <
> > w...@widodh.nl>
> > >>
> > >> wrote:
> > >>>
> > >>>
> > >>> On 01/10/2018 12:01 PM, Daan Hoogland wrote:
> > >>>
> >  I'd say remove as much functionality as we can from 'our'
> >  StringUtils
> >  and
> > 
> >  phase them out asap.
> > >
> > >
> > > Yes, but such a PR would be invasive and would be difficult to
> > > merge
> > > and
> > >
> > > also break a lot of other code.
> > 
> >  It's not easy since it will touch a lot, but I mean, a lot of
> > files.
> > 
> >  Our StringUtils was a very good solution, but the Apache one is
>