Re: [openstack-dev] [Heat][TripleO] - Getting attributes of openstack resources not created by the stack for TripleO NetworkConfig.

2018-04-23 Thread Dan Sneddon
> >
> > I don't think adding them directly to the Server resource makes a ton
> > of sense though.
> >
> In tripleo, the ctlplane interface is an implicit port created by the
> server resource. :( (Attempts were made to change this, but upgrades
> wouldn't work.) So the server resource is where I would find it most
> useful. (Adding attributes to the port resource, and then using an
> external resource for the implicit server ports, may be a compromise.
> Nested dependencies for external_resources might be hard?)
>

Yes, the port is currently created as part of the Ironic server resource.
We would have more flexibility if this were a separate Neutron port, but we
need to be able to support upgrades. This would require the ability in Heat
to detach the implicit port from the Ironic resource, and attach a Neutron
port resource with the same IP to a node without rebuilding the entire
node. This isn't currently possible.
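
For reference, a minimal sketch of what the detached arrangement could look like, using Heat's external resource support. This is hypothetical only: the port UUID is a placeholder, and the missing piece today is exactly Heat's ability to perform this detach/adopt without rebuilding the node.

```yaml
# Hypothetical sketch: adopt an existing ctlplane port as an external
# resource, then attach it to the server by port rather than by network.
resources:
  ControlPlanePort:
    type: OS::Neutron::Port
    external_id: 0b0b0b0b-0000-0000-0000-000000000000  # pre-existing port (placeholder)

  Controller:
    type: OS::TripleO::Server
    properties:
      networks:
        - port: {get_resource: ControlPlanePort}
```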


>
> > > c) Create a new heat function to read properties of any
> > >openstack resource, without having to make use of the
> > >external_resource in heat.
> >
> > It's an interesting idea, but I think it would look a lot like what
> > external resources are supposed to be.
> >
> > I see a few changes:
> >  * Allow external resource to depend on other resources
> >  * Expose more port attributes
> >  * Expose more subnet attributes
> >
> > If you can list the attributes you care about that'd be great.
> >
>
> I guess what I envision is a client_config attribute, a map with data
> useful for configuring a network interface on the client. (I put * on
> the ones I believe could be useful for TripleO.)
>
> * /v2.0/networks/{network_id}/mtu
> /v2.0/networks/{network_id}/dns_domain
> * /v2.0/subnets/{subnet_id}/dns_nameservers
> * /v2.0/subnets/{subnet_id}/host_routes
> /v2.0/subnets/{subnet_id}/ip_version
> * /v2.0/subnets/{subnet_id}/gateway_ip
> * /v2.0/subnets/{subnet_id}/cidr
> * /v2.0/subnets/{subnet_id}/ipv6_address_mode
> * /v2.0/subnets/{subnet_id}/ipv6_ra_mode
> /v2.0/ports/{port_id}/description - Why not?
> /v2.0/ports/{port_id}/dns_assignment
> /v2.0/ports/{port_id}/dns_domain
> /v2.0/ports/{port_id}/dns_name
> * /v2.0/ports/{port_id}/fixed_ips - We have this already
> /v2.0/ports/{port_id}/name - Why not?
>
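To make the shape concrete, here is one possible rendering of such a client_config map for a single port, assembled from the starred attributes above. The grouping and key names are only a sketch, not an agreed format, and the values are placeholders:

```yaml
client_config:
  mtu: 1500                          # networks/{network_id}/mtu
  dns_nameservers: [192.168.24.1]    # subnets/{subnet_id}/dns_nameservers
  host_routes:                       # subnets/{subnet_id}/host_routes
    - destination: 172.17.0.0/24
      nexthop: 192.168.24.254
  gateway_ip: 192.168.24.254         # subnets/{subnet_id}/gateway_ip
  cidr: 192.168.24.0/24              # subnets/{subnet_id}/cidr
  ipv6_address_mode: null            # subnets/{subnet_id}/ipv6_address_mode
  ipv6_ra_mode: null                 # subnets/{subnet_id}/ipv6_ra_mode
  fixed_ips:                         # ports/{port_id}/fixed_ips
    - ip_address: 192.168.24.10
```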
>
> I've added Dan Sneddon on CC as well. I guess there is the question of
> whether TripleO will want to continue using heat, neutron, nova etc.
>
>
>
> //
> Harald
>

I can't speak to the roadmap of Heat/Neutron/Nova on the undercloud, but
for the immediate future I don't see us moving away from Heat entirely,
due to upgrade requirements.

I can see another use case for this Heat functionality, which is that I
would like to be able to generate a report using Heat that lists all the
ports in use in the entire deployment. This would be generated
post-deployment, and could be used to populate an external DNS server, or
simply to report on which IPs belong to which nodes.
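As a sketch of that report idea: the openstacksdk query is shown only as a comment, and the report formatting is a pure function (format_port_report is a hypothetical name) so it can run without a deployment, using sample data shaped like Neutron port dicts.

```python
def format_port_report(ports):
    """Render one 'name ip device' line per fixed IP, sorted by IP."""
    rows = []
    for port in ports:
        for fixed in port.get("fixed_ips", []):
            rows.append((port.get("name") or port["id"],
                         fixed["ip_address"],
                         port.get("device_id", "")))
    rows.sort(key=lambda row: row[1])
    return ["%-24s %-16s %s" % row for row in rows]

# On a real undercloud the ports could come from openstacksdk, e.g.:
#   import openstack
#   ports = [p.to_dict() for p in openstack.connect().network.ports()]
sample = [
    {"id": "p2", "name": "compute-0-ctlplane",
     "fixed_ips": [{"ip_address": "192.168.24.11"}],
     "device_id": "compute-0"},
    {"id": "p1", "name": "controller-0-ctlplane",
     "fixed_ips": [{"ip_address": "192.168.24.10"}],
     "device_id": "controller-0"},
]
for line in format_port_report(sample):
    print(line)
```

The output could then feed an external DNS zone file or a simple IP inventory.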

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][ui] Network Configuration wizard

2018-02-22 Thread Dan Sneddon
config templates in deployment plan and reference them from
>>> network-environment.yaml
>>>
>>> Problems to solve:
>>> The biggest problem to solve, as I see it, is defining logic which would
>>> automatically handle assigning parameters to elements in network_config
>>> based on the network which the user assigns to the element. For example:
>>> using the GUI, a user creates a network_config for the compute role based
>>> on network/config/multiple-nics/compute.yaml, adds an interface, and
>>> assigns the interface to the Tenant network. The resulting template should
>>> then automatically populate addresses/ip_netmask: get_param: TenantIpSubnet.
>>> The question is whether all this logic should live in the GUI, or whether
>>> the GUI should pass a simplified format to a Mistral workflow which will
>>> convert it to the proper network_config format and populate the template with it.
>>>
>>
>> I guess the fact that I separated the UI and config generation code in my
>> tool is my answer to this question.  I don't remember all of my reasons for
>> that design, but I think the main thing was to keep the input and
>> generation cleanly separated.  Otherwise there was a danger of making a UI
>> change and having it break the generation process because they were tightly
>> coupled.  Having a JSON interface between the two avoids a lot of those
>> problems.  It also made it fairly easy to unit test the generation code,
>> whereas trying to mock out all of the UI elements would have been a fragile
>> nightmare.
>>
>> It does require a bunch of translation code[1], but a lot of it is fairly
>> boilerplate (just map UI inputs to JSON keys).
>>
>> 1: https://github.com/cybertron/tripleo-scripts/blob/171aedabfe
>> ad1f27f4dc0fce41a8b82da28923ed/net-iso-gen.py#L515
>>
>> Hope this helps.
>
>
> Ben, thanks a lot for your input. I think this makes the direction with
> NIC configs clearer:
>
> 1. The generated template will include all possible parameter definitions
> unless we find a suitable way of populating the parameters section as part
> of the template generation process. Note that the current jinja templates
> for NIC config (e.g. network/config/multiple-nics/role.role.j2.yaml:127)
> create these definitions conditionally by specific role name, which is not
> very elegant in terms of custom roles.
>

This patch, which recently landed, generates all the needed parameters in
the sample NIC configs based on the composable networks defined in
network_data.yaml:
https://review.openstack.org/#/c/523638

Furthermore, this patch removes all the role-specific hard-coded templates,
and generates templates based on the role-to-network association in
roles_data.yaml.

I think we could use this method to generate the needed parameters for the
templates generated in the UI. I would personally like to see a workflow
where the user chooses one of the built-in NIC config designs to generate
samples, which could then be further edited. Presenting a blank slate to
the user, and requiring them to build up the hierarchy is very confusing
unless the installer is very familiar with the desired architecture (first
add a bridge, then add a bond to the bridge, then add interfaces to the
bond, then add VLANs to the bridge). It's better to start with a basic
example (VLANs on a single NIC, one NIC per network, DPDK, etc.), and allow
the user to customize from there.
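To illustrate the idea (not the actual patch), parameter generation from the role-to-network association might be sketched like this. The naming convention ({Name}IpSubnet, {Name}NetworkVlanID) follows what is visible in the thread; the input dicts are simplified stand-ins for network_data.yaml and roles_data.yaml entries, and the function name is hypothetical.

```python
def nic_config_parameters(role, networks):
    """Return the Heat parameter names a role's NIC config template needs."""
    params = ["ControlPlaneIp", "ControlPlaneSubnetCidr"]
    by_lower = {net["name_lower"]: net for net in networks}
    for net_name in role.get("networks", []):
        net = by_lower[net_name]
        params.append("%sIpSubnet" % net["name"])
        if net.get("vlan") is not None:
            params.append("%sNetworkVlanID" % net["name"])
    return params

networks = [
    {"name": "Tenant", "name_lower": "tenant", "vlan": 50},
    {"name": "Storage", "name_lower": "storage", "vlan": 30},
    {"name": "External", "name_lower": "external", "vlan": 10},
]
compute_role = {"name": "Compute", "networks": ["tenant", "storage"]}
print(nic_config_parameters(compute_role, networks))
```

The same derivation could seed a UI-generated template with only the parameters its role actually uses.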


>
> 2. The GUI is going to define forms to add/configure network elements
> (interface, bridge, bond, vlan, ...) and provide a user-friendly way to
> combine these together. The whole data construct (per role) is going to be
> sent to a tripleo-common workflow as JSON. The workflow consumes the JSON
> input and produces the final template yaml. I think we should be able to
> reuse a bunch of the logic which Ben already created.
>
> Example:
> json input from GUI:
> ..., {
>   type: 'interface',
>   name: 'nic1',
>   network_name_lower: 'external'
> },...
> transformed by tripleo-common:
> ...
> - type: interface
>   name: nic{{loop.index + 1}}
>   use_dhcp: false
>   addresses:
>   - ip_netmask:
>       get_param: {{network.name}}IpSubnet
> ...
>
> With this approach, we'll create a common API, provided by Mistral, to
> generate NIC config templates which can be reused by the CLI and other
> clients, not just the TripleO UI. Note that we will also need a 'reverse'
> Mistral workflow which is going to convert the template yaml network_config
> into the input json format, so the GUI can display the current
> configuration to the user and let them change it.
>
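A minimal sketch of the tripleo-common side of that workflow: expand the simplified JSON element the GUI would send into a network_config entry. The transform rules (use_dhcp off, IpSubnet parameter derived from the network name) mirror the example above; expand_element is a hypothetical name, and only the 'interface' type is handled here.

```python
def expand_element(element):
    """Expand one GUI element dict into a network_config entry."""
    if element["type"] != "interface":
        raise ValueError("only 'interface' is handled in this sketch")
    # 'external' -> parameter name 'ExternalIpSubnet'
    param = "%sIpSubnet" % element["network_name_lower"].capitalize()
    return {
        "type": "interface",
        "name": element["name"],
        "use_dhcp": False,
        "addresses": [{"ip_netmask": {"get_param": param}}],
    }

gui_input = {"type": "interface", "name": "nic1",
             "network_name_lower": "external"}
print(expand_element(gui_input))
```

The 'reverse' workflow would be the inverse mapping, recovering the simplified JSON from an existing network_config.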
> Liz has updated network configuration

Re: [openstack-dev] [tripleo] Nominate akrivoka for tripleo-validations core

2017-11-07 Thread Dan Sneddon
On Mon, Nov 6, 2017 at 6:32 AM, Honza Pokorny  wrote:

> Hello people,
>
> I would like to nominate Ana Krivokapić (akrivoka) for the core team for
> tripleo-validations.  She has really stepped up her game on that project
> in terms of helpful reviews, and great patches.
>
> With Ana's help as a core, we can get more done, and innovate faster.
>
> If there are no objections within a week, we'll proceed with adding Ana
> to the team.
>
> Thanks
>
> Honza Pokorny
>
>

+1, glad to hear it.

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter


Re: [openstack-dev] [TripleO][Heat] Conditionally passing properties in Heat

2017-04-20 Thread Dan Sneddon
On 04/20/2017 12:37 AM, Steven Hardy wrote:
> On Wed, Apr 19, 2017 at 02:51:28PM -0700, Dan Sneddon wrote:
>> On 04/13/2017 12:01 AM, Rabi Mishra wrote:
>>> On Thu, Apr 13, 2017 at 2:14 AM, Dan Sneddon  wrote:
>>>
>>> On 04/12/2017 01:22 PM, Thomas Herve wrote:
>>> > On Wed, Apr 12, 2017 at 9:00 PM, Dan Sneddon  wrote:
>>> >> I'm implementing predictable control plane IPs for spine/leaf,
>>> and I'm
>>> >> running into a problem implementing this in the TripleO Heat
>>> templates.
>>> >>
>>> >> I have a review in progress [1] that works, but fails on upgrade,
>>> so I'm
>>> >> looking for an alternative approach. I'm trying to influence the IP
>>> >> address that is selected for overcloud nodes' Control Plane IP.
>>> Here is
>>> >> the current construct:
>>> >>
>>> >>   Controller:
>>> >> type: OS::TripleO::Server
>>> >> metadata:
>>> >>   os-collect-config:
>>> >> command: {get_param: ConfigCommand}
>>> >> properties:
>>> >>   image: {get_param: controllerImage}
>>> >>   image_update_policy: {get_param: ImageUpdatePolicy}
>>> >>   flavor: {get_param: OvercloudControlFlavor}
>>> >>   key_name: {get_param: KeyName}
>>> >>   networks:
>>> >> - network: ctlplane  # <- Here's where the port is created
>>> >>
>>> >> If I add fixed_ip: to the networks element at the end of the above, I
>>> >> can select an IP address from the 'ctlplane' network, like this:
>>> >>
>>> >>   networks:
>>> >> - network: ctlplane
>>> >>   fixed_ip: {get_attr: [ControlPlanePort, ip_address]}
>>> >>
>>> >> But the problem is that if I pass a blank string to fixed_ip, I
>>> get an
>>> >> error on deployment. This means that the old behavior of
>>> automatically
>>> >> selecting an IP doesn't work.
>>> >>
>>> >> I thought I had solved this by passing an external Neutron port,
>>> like this:
>>> >>
>>> >>   networks:
>>> >> - network: ctlplane
>>> >>   port: {get_attr: [ControlPlanePort, port_id]}
>>> >>
>>> >> Which works for deployments, but that fails on upgrades, since the
>>> >> original port was created as part of the Nova::Server resource,
>>> instead
>>> >> of being an external resource.
>>> >
>>> > Can you detail how it fails? I was under the impression we never
>>> > replaced servers no matter what (or we try to do that, at least). Is
>>> > the issue that your new port is not the correct one?
>>> >
>>> >> I'm now looking for a way to use Heat conditionals to apply the
>>> fixed_ip
>>> >> only if the value is not unset. Looking at the intrinsic
>>> functions [2],
>>> >> I don't see a way to do this. Is what I'm trying to do with Heat
>>> possible?
>>> >
>>> > You should be able to write something like that (not tested):
>>> >
>>> > networks:
>>> >   if:
>>> > - 
>>> > - network: ctlplane
>>> >   fixed_ip: {get_attr: [ControlPlanePort, ip_address]}
>>> > - network: ctlplane
>>> >
>>> > The question is how to define your condition. Maybe:
>>> >
>>> > conditions:
>>> >   fixed_ip_condition:
>>> >  not:
>>> > equals:
>>> >   - {get_attr: [ControlPlanePort, ip_address]}
>>> >   - ''
>>> >
>>> > To get back to the problem you stated first.
>>> >
>>> >
>>> >> Another option I'm exploring is conditionally applying resources. It
>>> >> appears that would r

Re: [openstack-dev] [TripleO][Heat] Conditionally passing properties in Heat

2017-04-19 Thread Dan Sneddon
On 04/13/2017 12:01 AM, Rabi Mishra wrote:
> On Thu, Apr 13, 2017 at 2:14 AM, Dan Sneddon  wrote:
> 
> On 04/12/2017 01:22 PM, Thomas Herve wrote:
> > On Wed, Apr 12, 2017 at 9:00 PM, Dan Sneddon  wrote:
> >> I'm implementing predictable control plane IPs for spine/leaf,
> and I'm
> >> running into a problem implementing this in the TripleO Heat
> templates.
> >>
> >> I have a review in progress [1] that works, but fails on upgrade,
> so I'm
> >> looking for an alternative approach. I'm trying to influence the IP
> >> address that is selected for overcloud nodes' Control Plane IP.
> Here is
> >> the current construct:
> >>
> >>   Controller:
> >> type: OS::TripleO::Server
> >> metadata:
> >>   os-collect-config:
> >> command: {get_param: ConfigCommand}
> >> properties:
> >>   image: {get_param: controllerImage}
> >>   image_update_policy: {get_param: ImageUpdatePolicy}
> >>   flavor: {get_param: OvercloudControlFlavor}
> >>   key_name: {get_param: KeyName}
> >>   networks:
> >> - network: ctlplane  # <- Here's where the port is created
> >>
> >> If I add fixed_ip: to the networks element at the end of the above, I
> >> can select an IP address from the 'ctlplane' network, like this:
> >>
> >>   networks:
> >> - network: ctlplane
> >>   fixed_ip: {get_attr: [ControlPlanePort, ip_address]}
> >>
> >> But the problem is that if I pass a blank string to fixed_ip, I
> get an
> >> error on deployment. This means that the old behavior of
> automatically
> >> selecting an IP doesn't work.
> >>
> >> I thought I had solved this by passing an external Neutron port,
> like this:
> >>
> >>   networks:
> >> - network: ctlplane
> >>   port: {get_attr: [ControlPlanePort, port_id]}
> >>
> >> Which works for deployments, but that fails on upgrades, since the
> >> original port was created as part of the Nova::Server resource,
> instead
> >> of being an external resource.
> >
> > Can you detail how it fails? I was under the impression we never
> > replaced servers no matter what (or we try to do that, at least). Is
> > the issue that your new port is not the correct one?
> >
> >> I'm now looking for a way to use Heat conditionals to apply the
> fixed_ip
> >> only if the value is not unset. Looking at the intrinsic
> functions [2],
> >> I don't see a way to do this. Is what I'm trying to do with Heat
> possible?
> >
> > You should be able to write something like that (not tested):
> >
> > networks:
> >   if:
> > - 
> > - network: ctlplane
> >   fixed_ip: {get_attr: [ControlPlanePort, ip_address]}
> > - network: ctlplane
> >
> > The question is how to define your condition. Maybe:
> >
> > conditions:
> >   fixed_ip_condition:
> >  not:
> > equals:
> >   - {get_attr: [ControlPlanePort, ip_address]}
> >   - ''
> >
> > To get back to the problem you stated first.
> >
> >
> >> Another option I'm exploring is conditionally applying resources. It
> >> appears that would require duplicating the entire TripleO::Server
> stanza
> >> in *-role.yaml so that there is one that uses fixed_ip and one
> that does
> >> not. Which one is applied would be based on a condition that tested
> >> whether fixed_ip was blank or not. The downside of that is that
> it would
> >> make the role definition confusing because there would be a large
> >> resource that was implemented twice, with only one line difference
> >> between them.
> >
> > You can define properties with conditions, so you shouldn't need to
> > rewrite everything.
> >
> 
> Thomas,
> 
> Thanks, I will try your suggestions and that should get me closer.
> 
> The full error log

Re: [openstack-dev] [TripleO][Heat] Conditionally passing properties in Heat

2017-04-12 Thread Dan Sneddon
On 04/12/2017 01:22 PM, Thomas Herve wrote:
> On Wed, Apr 12, 2017 at 9:00 PM, Dan Sneddon  wrote:
>> I'm implementing predictable control plane IPs for spine/leaf, and I'm
>> running into a problem implementing this in the TripleO Heat templates.
>>
>> I have a review in progress [1] that works, but fails on upgrade, so I'm
>> looking for an alternative approach. I'm trying to influence the IP
>> address that is selected for overcloud nodes' Control Plane IP. Here is
>> the current construct:
>>
>>   Controller:
>> type: OS::TripleO::Server
>> metadata:
>>   os-collect-config:
>> command: {get_param: ConfigCommand}
>> properties:
>>   image: {get_param: controllerImage}
>>   image_update_policy: {get_param: ImageUpdatePolicy}
>>   flavor: {get_param: OvercloudControlFlavor}
>>   key_name: {get_param: KeyName}
>>   networks:
>> - network: ctlplane  # <- Here's where the port is created
>>
>> If I add fixed_ip: to the networks element at the end of the above, I
>> can select an IP address from the 'ctlplane' network, like this:
>>
>>   networks:
>> - network: ctlplane
>>   fixed_ip: {get_attr: [ControlPlanePort, ip_address]}
>>
>> But the problem is that if I pass a blank string to fixed_ip, I get an
>> error on deployment. This means that the old behavior of automatically
>> selecting an IP doesn't work.
>>
>> I thought I had solved this by passing an external Neutron port, like this:
>>
>>   networks:
>> - network: ctlplane
>>   port: {get_attr: [ControlPlanePort, port_id]}
>>
>> Which works for deployments, but that fails on upgrades, since the
>> original port was created as part of the Nova::Server resource, instead
>> of being an external resource.
> 
> Can you detail how it fails? I was under the impression we never
> replaced servers no matter what (or we try to do that, at least). Is
> the issue that your new port is not the correct one?
> 
>> I'm now looking for a way to use Heat conditionals to apply the fixed_ip
>> only if the value is not unset. Looking at the intrinsic functions [2],
>> I don't see a way to do this. Is what I'm trying to do with Heat possible?
> 
> You should be able to write something like that (not tested):
> 
> networks:
>   if:
> - 
> - network: ctlplane
>   fixed_ip: {get_attr: [ControlPlanePort, ip_address]}
> - network: ctlplane
> 
> The question is how to define your condition. Maybe:
> 
> conditions:
>   fixed_ip_condition:
>  not:
> equals:
>   - {get_attr: [ControlPlanePort, ip_address]}
>   - ''
> 
> To get back to the problem you stated first.
> 
> 
>> Another option I'm exploring is conditionally applying resources. It
>> appears that would require duplicating the entire TripleO::Server stanza
>> in *-role.yaml so that there is one that uses fixed_ip and one that does
>> not. Which one is applied would be based on a condition that tested
>> whether fixed_ip was blank or not. The downside of that is that it would
>> make the role definition confusing because there would be a large
>> resource that was implemented twice, with only one line difference
>> between them.
> 
> You can define properties with conditions, so you shouldn't need to
> rewrite everything.
> 
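Putting Thomas's two fragments together, an untested sketch of the whole arrangement might read as follows. One caveat worth checking: Heat evaluates conditions before creating resources, so the condition likely has to be driven by a parameter (get_param) rather than by get_attr on ControlPlanePort; ControllerFixedIp below is a hypothetical parameter introduced for that purpose.

```yaml
parameters:
  ControllerFixedIp:
    type: string
    default: ''

conditions:
  fixed_ip_condition:
    not:
      equals:
        - {get_param: ControllerFixedIp}
        - ''

resources:
  Controller:
    type: OS::TripleO::Server
    properties:
      networks:
        if:
          - fixed_ip_condition
          - - network: ctlplane
              fixed_ip: {get_param: ControllerFixedIp}
          - - network: ctlplane
```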

Thomas,

Thanks, I will try your suggestions and that should get me closer.

The full error log is available here:
http://logs.openstack.org/78/413278/11/check-tripleo/gate-tripleo-ci-centos-7-ovb-updates/8d91762/console.html

Here are the errors I am getting:

2017-04-12 00:26:34.436655 | 2017-04-12 00:26:29Z
[overcloud-CephStorage-bkucn6ign34i-0-2yq2jbtwuu7k.CephStorage]:
UPDATE_FAILED  RetryError: resources.CephStorage: RetryError[]
2017-04-12 00:26:34.436808 | 2017-04-12 00:26:29Z
[overcloud-CephStorage-bkucn6ign34i-0-2yq2jbtwuu7k]: UPDATE_FAILED
RetryError: resources.CephStorage: RetryError[]
2017-04-12 00:26:34.436903 | 2017-04-12 00:26:29Z
[overcloud-CephStorage-bkucn6ign34i.0]: UPDATE_FAILED  resources[0]:
RetryError: resources.CephStorage: RetryError[]
2017-04-12 00:26:34.436989 | 2017-04-12 00:26:29Z
[overcloud-CephStorage-bkucn6ign34i]: UPDATE_FAILED  resources[0]:
RetryError: resources.CephStorage: RetryError[]
2017-04-12 00:26:34.437078 | 2017-04-12 00:26:30Z
[overcloud-Controller-3lf3jauv4cbc-0-ydowkb3nwsso.Controller]:
UPDATE_FAILED  RetryError: resources.Controller: RetryError[]
2017-04-12 00:26:34.437173 | 2017-04-12 00:26:30Z
[overcloud-Controller-3lf3jauv4cbc-0-ydowkb3n

[openstack-dev] [TripleO][Heat] Conditionally passing properties in Heat

2017-04-12 Thread Dan Sneddon
I'm implementing predictable control plane IPs for spine/leaf, and I'm
running into a problem implementing this in the TripleO Heat templates.

I have a review in progress [1] that works, but fails on upgrade, so I'm
looking for an alternative approach. I'm trying to influence the IP
address that is selected for overcloud nodes' Control Plane IP. Here is
the current construct:

  Controller:
type: OS::TripleO::Server
metadata:
  os-collect-config:
command: {get_param: ConfigCommand}
properties:
  image: {get_param: controllerImage}
  image_update_policy: {get_param: ImageUpdatePolicy}
  flavor: {get_param: OvercloudControlFlavor}
  key_name: {get_param: KeyName}
  networks:
- network: ctlplane  # <- Here's where the port is created

If I add fixed_ip: to the networks element at the end of the above, I
can select an IP address from the 'ctlplane' network, like this:

  networks:
- network: ctlplane
  fixed_ip: {get_attr: [ControlPlanePort, ip_address]}

But the problem is that if I pass a blank string to fixed_ip, I get an
error on deployment. This means that the old behavior of automatically
selecting an IP doesn't work.

I thought I had solved this by passing an external Neutron port, like this:

  networks:
- network: ctlplane
  port: {get_attr: [ControlPlanePort, port_id]}

Which works for deployments, but that fails on upgrades, since the
original port was created as part of the Nova::Server resource, instead
of being an external resource.

I'm now looking for a way to use Heat conditionals to apply the fixed_ip
only if the value is not unset. Looking at the intrinsic functions [2],
I don't see a way to do this. Is what I'm trying to do with Heat possible?

Another option I'm exploring is conditionally applying resources. It
appears that would require duplicating the entire TripleO::Server stanza
in *-role.yaml so that there is one that uses fixed_ip and one that does
not. Which one is applied would be based on a condition that tested
whether fixed_ip was blank or not. The downside of that is that it would
make the role definition confusing because there would be a large
resource that was implemented twice, with only one line difference
between them.

Does anyone have any ideas how to go about this?

[1] - https://review.openstack.org/#/c/413278/
[2] -
https://docs.openstack.org/developer/heat/template_guide/hot_spec.html#intrinsic-functions

-- 
Dan Sneddon |  Senior Principal Software Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter



[openstack-dev] [TripleO] How to Preview the Overcloud Stack?

2017-03-27 Thread Dan Sneddon
I've been trying to figure out a workflow for previewing the results of
importing custom templates in an overcloud deployment (without actually
deploying). For instance, I am overriding some parameters using custom
templates, and I want to make sure those parameters will be expressed
correctly when I deploy.

I know about "heat stack-preview", but between the complexity of the
overcloud stack and the jinja2 template processing, I can't figure out a
way to preview the entire overcloud stack.

Is this possible? If not, any hints on what it would take to write a
script that would accomplish this?

-- 
Dan Sneddon |  Senior Principal Software Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter



Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-10 Thread Dan Sneddon
On 03/10/2017 08:26 AM, Heidi Joy Tretheway wrote:
> Hi TripleO team, 
> 
> Here’s an update on your project logo. Our illustrator tried to be as
> true as possible to your original, while ensuring it matched the line
> weight, color palette and style of the rest. We also worked to make sure
> that three Os in the logo are preserved. Thanks for your patience as we
> worked on this! Feel free to direct feedback to me.
> 
> 
> 

This is a huge improvement! Some of the previous drafts looked more like
a generic bird and less like an owl.

I have a suggestion on how this might be more owl-like. If you look at
real owl faces [1], you will see that their eyes are typically yellow,
and they often have a white circle around the eyes (black pupil, yellow
eye, black/white circle of feathers). I think that we could add a yellow
ring around the black pupil, and possibly accentuate the ears (since
owls often have white tufts on their ears).

I whipped up a quick example of what I'm talking about; it's attached
(hopefully it will survive the mailing list).

[1] - https://www.google.com/search?q=owl+face&tbm=isch&tbo=u&source=univ

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter


Re: [openstack-dev] [tripleo][kolla][openstack-helm][kuryr] OpenStack on containers leveraging kuryr

2017-02-09 Thread Dan Sneddon
Pete, thanks for mentioning network isolation and segmentation. That's
my area of interest, since I'm focused on underlay networking for
TripleO and bare-metal networking in Ironic.

Network isolation is going to be important for several reasons:

1) Separation of control and data plane in deployments
2) Tenant isolation in multi-tenant Ironic BMaaS
3) Network Function Virtualization (NFV) use cases

The intention of the isolated networking model for TripleO was to
separate control and data plane, as well as tenant from administrative
traffic. A secondary goal was to make this highly configurable and
customizable. This has been well received by many operators who have
rigid security isolation requirements (such as PCI-DSS for financial
transactions), or those who customize their underlay network to
integrate into an existing networking topology. I'm thinking about how
to do something similar in Kubernetes, perhaps with Kuryr.

The Harbor project looks very interesting. Do you have any more
information about how Harbor uses Raven to achieve isolation? Also, are
you saying that Harbor uses an older (prototype) version of Raven, or
are you referring to Raven itself as a prototype?

I'll be at the PTG Tuesday through Friday morning. I'm looking forward
to having some conversations about this topic.

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

On 02/09/2017 09:56 AM, Pete Birley wrote:
> Hi Flavio,
> 
> I've been doing some work on packaging Kuryr for use with K8s as an
> underlay for OpenStack on Kubernetes. When we met up in Brno the Harbor
> project I showed you used Tony's old Raven Prototype to provide the
> network isolation and segmentation in K8s. I've since begun to lay the
> groundwork for OpenStack-Helm to support similar modes of operation,
> allowing both service isolation and also combined networking between
> OpenStack and K8s, where pods and VMs can co-exist on the same Neutron
> Networks.
> 
> I'm not sure I will have things fully functional within OpenStack-Helm
> by the PTG, but it would be great to sit down and work out how we can
> ensure that not only do we not end up replicating work needlessly, but
> also find further opportunities to collaborate. I'll be in Atlanta all
> week, though I think some of the OS-Helm and Kolla-K8s developers will
> be leaving on Wed. Would a particular day/time work best for you?
> 
> 
> Cheers
> 
> Pete (portdirect)
> 
> 
> On Thu, Feb 9, 2017 at 8:57 AM, Flavio Percoco  wrote:
> 
> Greetings,
> 
> I was talking with Tony and he mentioned that he's recording a new
> demo for
> kuryr and, well, it'd be great to also use the containerized version
> of TripleO
> for the demo.
> 
> His plan is to have this demo out by next week and that may be too
> tight for the
> containerized version of TripleO (it may be not, let's try). That
> said, I think
> it's still a good opportunity for us to sit down at the PTG and play
> with this a
> bit further.
> 
> So, before we set a date and time for this, I wanted to extend the
> invite to
> other folks and see if there's some interest. It'd be great to also
> have folks
> from Kolla and openstack-helm joining.
> 
> Looking forward to hearing ideas and hacking with y'all,
> Flavio
> 
> -- 
> @flaper87
> Flavio Percoco
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> <http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> 
> 
> 
> 
> -- 
> 
> Pete Birley / Director 
> pete@port.direct / +447446862551
> 
> *PORT.*DIRECT 
> United Kingdom 
> https://port.direct
> 

[openstack-dev] [TripleO][Networking] Routed Networking Deployment Specs

2017-02-01 Thread Dan Sneddon
I have published specs [1] [2] [3] [4] and blueprints [5] [6] [7] [8] for
review for Pike that cover the changes required to deploy onto routed (AKA
spine-and-leaf or Clos) networks with TripleO. These need some review, and
there are still some technical decisions that need to be nailed down. I
also need some volunteers to help me implement this, in particular with the
changes required to the TripleO Heat Templates to allow for multiple
subnets per network.

I tried to focus on the problem rather than the solution, but where I
thought there was a clear solution, I provided as much detail as I could.
There are some instances where I presented more than one possible solution.
These specs are quite lengthy at the moment, but I expect that we can trim
them down as we narrow the focus and make final technical decisions as a group.

Please review these specs when you have some time, all feedback is welcome.
This is one of our highest priority features for Pike. Please let me know
if you are interested in helping with this effort.

[1] - https://review.openstack.org/421009  (master blueprint)
[2] - https://review.openstack.org/421010  (routed ctlplane IP)
[3] - https://review.openstack.org/421011  (Ironic Inspector)
[4] - https://review.openstack.org/425464  (THT changes)

[5] -
https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks-deployment
[6] -
https://blueprints.launchpad.net/tripleo/+spec/tripleo-predictable-ctlplane-ips
[7] -
https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks-ironic-inspector
[8] -
https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks-templates
--
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Update TripleO core members

2017-01-25 Thread Dan Sneddon
On 01/23/2017 11:03 AM, Emilien Macchi wrote:
> Greeting folks,
> 
> I would like to propose some changes in our core members:
> 
> - Remove Jay Dobies who has not been active in TripleO for a while
> (thanks Jay for your hard work!).
> - Add Flavio Percoco core on tripleo-common and tripleo-heat-templates
> docker bits.
> - Add Steve Baker on os-collect-config and also docker bits in
> tripleo-common and tripleo-heat-templates.
> 
> Indeed, both Flavio and Steve have been involved in deploying TripleO
> in containers, and their contributions are very valuable. I would like to
> encourage them to keep doing more reviews in and outside of the container bits.
> 
> As usual, core members are welcome to vote on the changes.
> 
> Thanks,
> 

+1, thanks for all the work you did in the past, Jay!

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Defining Custom Deployment Networks

2017-01-19 Thread Dan Sneddon
I would like to call attention to two patches which Steven Hardy
proposed for Ocata to add the ability to create custom deployment
networks [1] [2]. This would allow the use of networks other than the
built-in 6 networks. These have gotten a little attention, and a couple
of alternative methods were proposed.

I would like to get this hashed out in time for the custom networks to
land in Ocata. This is going to be a dependency for much of the network
development that is planned for Pike, and I think it would be a huge
benefit to users of TripleO who plan to deploy Ocata.

So far there has been a concern raised about where to store the network
data (Mistral, Heat, Swift, ???), and we need some clarification and
discussion on that point. Another concern was raised about using j2 for
the template format. If people could take a moment to look at these
short reviews and chime in, that will help us move toward a consensus
approach.

[1] - https://review.openstack.org/#/c/409920

[2] - https://review.openstack.org/#/c/409921
-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Network Configuration in TripleO UI

2016-12-13 Thread Dan Sneddon
On 12/08/2016 08:10 PM, Jason Rist wrote:
> On 12/08/2016 05:28 PM, Dan Sneddon wrote:
>> On 12/08/2016 06:05 AM, Jiri Tomasek wrote:
>>> Hi all,
>>>
>>> I've been investigating how to implement TripleO network configuration
>>> in TripleO UI. Based on my findings I'd like to propose a solution.
>>>
>>> tl;dr proposal: Slightly refactor Network environment files to match
>>> GUI usage, Use Jinja Templating to generate dynamic parts of the
>>> templates/environments
>>>
>>>
>>> # Overview
>>>
>>> I've used Ben Nemec's amazing Network template generator as a reference
>>> to help me understand how the network configuration works [1]. In
>>> general the process of configuring the network in TripleO is:
>>>
>>> Define which Networks we intend to use -> Assign Roles to the Networks
>>> (+ Assign Role Services to the Network) -> Generate NIC config
>>> templates based on previous information
>>>
>>>
>>> # Deeper dive into templates
>>>
>>> We currently have 2 environment files in THT [2] which define network
>>> configuration:
>>>
>>> network-environment.yaml [3] - holds the information on NIC
>>> configuration for each Role using
>>> OS::TripleONet::SoftwareConfig resource + related
>>> parameter configuration
>>>
>>> network-isolation.yaml [4]
>>> - defines the list of networks using
>>> OS::TripleO::Network:: resource
>>> - defines ports configuration for each network using
>>> OS::TripleO::Network::Ports::VipPort (note that both
>>> resources point to the static templates - those templates don't require
>>> any manual modification)
>>> - holds  Roles - Networks assignment using
>>> OS::TripleOPorts::Port for each role and
>>> storage (again, templates referenced by those resources don't require
>>> any modification)
>>>
>>> The user is expected to modify those environments and provide NIC
>>> config templates to achieve a network configuration that matches
>>> their needs.
>>>
>>>
>>> # How GUI works
>>>
>>> Before proceeding to proposed changes I need to describe briefly how
>>> TripleO UI works. TripleO UI is using THT as a source of truth, which
>>> means that it is trying not to add any additional business logic or
>>> manipulate templates. Rather it uses environment files as a 'features'
>>> which user can enable or disable depending on the needs of the
>>> deployment. The information about inter-environment relationships is
>>> tracked in capabilities-map.yaml which is also part of the THT. Based
>>> on these choices, UI allows user to configure parameters for those
>>> features. The parameter values and information about which environments
>>> are selected is stored in mistral environment. This approach leaves the
>>> plan templates intact. Huge benefit of this approach is that UI (or
>>> tripleo-common) does not need to hold explicit business logic related
>>> to certain deployment features as it is purely driven by THT. Also
>>> Adding a new feature involves only providing the templates/environments
>>> and it automatically appears as an option in UI.
>>>
>>> To achieve best user experience while using this approach, the
>>> environment files need to be defined in a granular manner, so they
>>> don't require user to modify them and each describe an isolated 'feature'.
>>>
>>> Roles and Network Configuration are exceptions to this concept as they
>>> require modification/generation of the templates/environments and
>>> therefore they use Jinja templating to achieve that.
>>>
>>>
>>> # The proposal
>>>
>>> So having described previous, here is the approach I think we should
>>> use to achieve network configuration using TripleO UI:
>>>
>>> 1. Put networks definitions into separate environment for each network:
>>> - this way GUI can provide a list of networks available to use and let
>>> user select which of them he wants to use. These environments are not
>>> dynamic and if user wants to add a new network, he does so by creating
>>> new templates and environment for it. UI also provides means to
>>> configure parameters for each network at this point (if needed).
>>>
>>> For example the environment for a Storage Network looks like

Re: [openstack-dev] [TripleO] Re-defining network templates/isolation

2016-12-12 Thread Dan Sneddon
Responses inline...

- Original Message -
> On Mon, Dec 12, 2016 at 12:12:30PM -0500, Tim Rozet wrote:
> > Hello,
> > I wanted to get thoughts about re-thinking how users configure and create
> > new networks with OOO.  The current way to configure network settings for
> > a deployment requires creating nic + network environment templates, and
> > updating the network isolation resource registry.  I think a better
> > approach could consolidate all of the network settings for a deployment
> > into a single yaml file, and then parse that information to create the
> > appropriate nic and network env templates.  We do that in OPNFV Apex with
> > a combination of python and jinja2 using this unified template format:
> > 
> > https://github.com/opnfv/apex/blob/master/config/network/network_settings.yaml
> 
> Thanks for sharing, and for raising this issue Tim.
> 
> Strangely enough I was thinking along similar lines recently and I started
> hacking on some prototype code, just pushed here:
> 
> 
> https://review.openstack.org/#/c/409920
> https://review.openstack.org/#/c/409921
>
> That was originally related to fixing this bug where network isolation is
> a little inconvenient to use when defining custom roles:
> 
> https://bugs.launchpad.net/tripleo/+bug/1633090
> 
> Basically I agree we need some way to define per-network data that can then
> be consumed by jinja2 when we render templates for each role.

Coincidentally, I was working along similar lines in order to support multiple
routed subnets per network (for spine-and-leaf networking). In order to avoid
an exponential number of templates, it makes sense to work from a map of data
that defines the various networks, subnets, default routes, etc. that will
vary from rack to rack with routed networking, and then generate the
templates from that information.
 
> > Furthermore, consider defining new networks in OOO.  Think about how much
> > is involved in creating a new network, subnet, port definition +
> > net_ip_map for that network, VIP. If you look at the tht/network
> > directory, almost all of the templates for ports and networks have the
> > exact same format.  I think you could make the example above dynamic so
> > that a user could define any new network there and the corresponding port,
> > network + subnet template files could be created on the fly.
> 
> Yes, I agree, this could be the next step after enabling the current
> networks for custom roles.  If we do the j2 implementation right for fixing
> the bug above, I think enabling arbitrary additional networks e.g via some
> j2 loops shouldn't be too much additional work.
>
> > I think this creates a much more simple interface for users by exposing
> > networking configuration they need, but also hiding redundant OOO/heat
> > template syntax they don't necessarily care about.  Thoughts?
> 
> So, yeah basically I agree - we should reduce the duplication between
> templates e.g for nic configuration, and j2 render them where possible for
> each role/network.
> 
> The trick here will be doing it so that we maintain backwards compatibility
> - if we're careful that's probably possible, but we'll have to figure out
>   ways to test that ensure we don't break existing users.

I agree that it is possible, and that it will require testing to ensure that
we maintain backwards compatibility.

> My suggestion would be to refactor things to resolve the bug above, and
> possibly also https://bugs.launchpad.net/tripleo/+bug/1625558 which I think
> should really be fixed by generating the nic configs, not adding even more
> example templates.

I am biased, since I wrote the patchset [1], but I would like to merge the
networker.yaml role first and then backport it prior to refactoring the
NIC configs. In general, though, I think we can limit the number of NIC
configs to one per physical topology, and then enable/disable interfaces,
VLANs, routes, etc. for each role based on network connectivity.

[1] - https://review.openstack.org/#/c/400390/

> If we can do some of that during the Ocata timefram, I expect fully
> composable/custom networks may be possible during Pike?
> 
> Steve


--
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Network Configuration in TripleO UI

2016-12-12 Thread Dan Sneddon
provides an action to fetch
> network-roles assignment data by parsing the network-isolation.yaml
> 
> In addition, user is able to assign individual Role Services to a
> Network. ServiceNetMap parameter is currently used for this. GUI needs
> to make sure that it represents Services-Networks assignment grouped by
> Role so it is ensured that user assigns Services to only networks where
> their Role is assigned.
> 
> 3. Generate NIC Config templates
> TripleO UI provides means to configure NICS, Bonds etc. for each Role,
> using the information from previous steps. It sends the data in json
> format to tripleo-common which then generates nic config templates for
> each Role based on network/config/nic-configs/role.j2.yaml Jinja
> template and generates network-environment.yaml based on
> network-environment.j2.yaml which references those templates.
> 
> Note that network-environment.j2.yaml probably can't be combined with
> network-isolation.j2.yaml as every time that environment would need to
> get updated, all data the template needs would need to be provided.
> 
> There are wireframes made by Liz Blanchard currently available [5],
> although they are not exactly up to date with this proposal. Ideally the
> whole network configuration would happen on a screen based on a
> graphical representation of the network [6].
> 
> 
> Any comments to this proposal are very welcome, please note that I am
> not a networking expert so I might be missing something.
> 
> There is a spec [7] in progress aimed for Ocata, but the feature will
> highly probably not land in Ocata, so we'll need to update the spec and
> move it to next cycle.
> 
> 
> [1]
> http://blog.nemebean.com/content/tripleo-network-isolation-template-generator
> 
> [2] https://github.com/openstack/tripleo-heat-templates
> [3]
> https://github.com/openstack/tripleo-heat-templates/blob/master/environments/network-environment.yaml
> 
> [4]
> https://github.com/openstack/tripleo-heat-templates/blob/master/environments/network-isolation.yaml
> 
> [5] https://openstack.invisionapp.com/share/UM87J4NBQ#/screens/179046668
> [6] https://openstack.invisionapp.com/share/UM87J4NBQ#/screens/179046679
> [7] https://review.openstack.org/#/c/396383/
> 
> Thanks
> Jirka
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Jirka,

Thanks for putting this together, I think this is a great summary of
the issue and a good thing to discuss.

I have put some thought into how we might convert the existing
YAML NIC config templates to Jinja. Most of the templates are pretty
modular if you look closely. For instance, working from the
bond-with-vlans templates, the bridge definition is the same for all nodes:

  - type: ovs_bridge
name: bridge_name
dns_servers:
  get_param: DnsServers
members:
- type: ovs_bond
  name: bond1
  ovs_options:
get_param: BondInterfaceOvsOptions
  members:
  - type: interface
name: nic2
primary: true
  - type: interface
name: nic3

Then, we have each of the VLANs, which can be enabled or disabled based
on whether a network is enabled or not on a node. For instance, the
external network might be defined like this:

{% if role.has_net_external %}
- type: vlan
  device: bond1
  vlan_id:
get_param: ExternalNetworkVlanID
  addresses:
  - ip_netmask:
  get_param: ExternalIpSubnet
  {% if role.default_route_network == 'external' %}
  routes:
  - default: true
next_hop:
  get_param: ExternalInterfaceDefaultRoute
  {% endif %}
{% endif %}

(Note that there might be a better way to test for the external
network; treat the above as pseudo-code.)

The drawback to this approach is that we need a different template for
each physical configuration. If the controllers have more NICs than the
computes, then that requires a different base configuration.

So this would change the workflow slightly. You would first develop a
template that included all the possible networks that could appear in
that physical configuration, then enable them conditionally based on
which networks were assigned to each role. It would also increase the
complexity of each template when not using the GUI, but it would
probably still be easier to manage one or two complex templates than 5
or more simple configurations.
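Taking this a step further, the per-network blocks could collapse into a
single j2 loop. This is only a sketch; the `role.networks` list and the
`name_upper`/`default_route` fields are names I am assuming for
illustration, not the actual THT data model:

```yaml
{# Sketch only: assumes each role carries a 'networks' list whose entries
   provide 'name_upper' (e.g. 'External') and a 'default_route' flag.
   These field names are illustrative, not the real THT data model. #}
{%- for network in role.networks %}
- type: vlan
  device: bond1
  vlan_id:
    get_param: {{network.name_upper}}NetworkVlanID
  addresses:
  - ip_netmask:
      get_param: {{network.name_upper}}IpSubnet
{%- if network.default_route %}
  routes:
  - default: true
    next_hop:
      get_param: {{network.name_upper}}InterfaceDefaultRoute
{%- endif %}
{%- endfor %}
```

Each role would then render only the VLANs for the networks it actually
uses, instead of hand-maintaining one conditional block per network.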

[1] - https://openstack.invisionapp.com/share/UM87J4NBQ#/screens/179046679

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Network Configuration in TripleO UI

2016-12-08 Thread Dan Sneddon

However, the above might be a little difficult to model in a graphical
interface, especially if the configuration is significantly different.

So this would change the workflow slightly. You would first develop a
template that included all the possible networks that could appear in
that physical configuration, then enable them conditionally based on
which networks were assigned to each role. It would also increase the
complexity of developing templates by hand, but it would probably
still be easier to manage one or two complex templates than 5 or more
simple configurations.

Since the physical template might be something that we could develop
automatically based on LLDP data collected during introspection, we
could potentially automate all parts of the configuration other than
customizing which networks to use where. Note that while it's easy
to conceptualize automatic template generation based on LLDP data
received from the switch, I also expect this to be pretty
error-prone. For instance, it may be difficult to detect which
interfaces should be part of a bond or bridge. Also, in cases where
a VLAN appears on more than one interface, it isn't easy to figure
out which interface Director should use for which network. This could happen,
for instance, when the External VLAN is configured on one interface
for the public API, but it's also being shared on a bridge with many
VLANs trunked for Neutron routers.

[1] - https://openstack.invisionapp.com/share/UM87J4NBQ#/screens/179046679

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Julie Pichon for tripleo core

2016-11-22 Thread Dan Sneddon
On 11/22/2016 09:01 AM, Dougal Matthews wrote:
> Hi all,
> 
> I would like to propose we add Julie (jpich) to the TripleO core team
> for python-tripleoclient and tripleo-common. This nomination is based
> partially on review stats[1] and also my experience with her reviews
> and contributions.
> 
> Julie has consistently provided thoughtful and detailed reviews since
> the start of the Newton cycle. She has made a number of contributions
> which improve the CLI and has been extremely helpful with other tasks
> that don't often get enough attention (backports, bug
> triaging/reporting and improving our processes[2]).
> 
> I think she will be a valuable addition to the review team
> 
> Dougal
> 
> 
> [1]: http://stackalytics.com/report/contribution/tripleo-group/90
> [2]: https://review.openstack.org/#/c/352852/
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

+1!

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Is it time to reconsider how we configure OVS bridges in the overcloud?

2016-11-10 Thread Dan Sneddon
On 11/10/2016 07:22 AM, Brent Eagles wrote:
> Hi all,
> 
> 
> A recent critical issue that has come up that has compelled me to
> propose reconsidering our default and OVS based network configuration
> examples :   
> 
> https://bugs.launchpad.net/tripleo/+bug/1640812 - Network connectivity
> lost on node reboot
> 
> I've been thinking about it for awhile, but you could say this bug was
> the "last straw". 
> 
> While the precise root cause of this issue is still in question, part
> of the problem is that the overcloud nodes communicate with the
> undercloud and each other through an OVS bridge which is also used by
> the overcloud neutron service for external network traffic. For several
> valid reasons, neutron sets the OVS bridge fail_mode to secure (details
> in respective man pages, etc, etc). This mode is stored persistently so
> when the system is rebooted, the bridge is recreated with the secure
> fail_mode in place, blocking network traffic - including DHCP - until
> something comes along and starts setting up flow rules to allow traffic
> to flow.  Without an IP address, the node is effectively "unplugged".
> For some reason this isn't happening 100% of the time on the current
> version of CentOS (7.2), but seems to be pretty much 100% on RHEL 7.3. 
> 
> It raises the question if it is valid for neutron to modify an OVS
> bridge that it *did not create* in a fundamental way like this. If so,
> it implies a contract between the deployer and neutron that the
> deployer can make "no assumptions" about what will happen with the
> bridge once neutron has been configured to access it. If this implied
> contract is valid, required and acceptable, then bridges used for
> neutron should not be used for anything else. The implications with
> respect to tripleo is that we should reconsider how we use OVS bridges
> for network configuration in the overcloud. For example, in single NIC
> situations, instead of having:
> 
> (triple configured)
> - eth0
>   - br-ex -used for control plane access, internal api, management,
> external, etc. also neutron is configured to use this for the external
> traffic e.g. dataplane in our defaults, which is why the fail_mode gets
> altered
> 
> (neutron configured)
> 
> - br-int
> - br-tun
> 
> To something like:
> (triple configured)
> - eth0
>  - br-ctl - used as br-ex is currently used except neutron knows
> nothing about it.
> - br-ex - patched to br-ctl - ostensibly for external traffic and this
> is what neutron in the overcloud is configured to use
> (neutron configured)
> - br-int
> - br-tun
> 
> (In all cases, neutron configures patches, etc. between bridges *it
> knows about* as needed. That is, in the second case, tripleo would
> configure the patch between br-ctl and br-ex)
> 
> At the cost of an extra bridge (ovs bridge to ovs bridge with patch
> ports is allegedly cheap btw) we get:
>  1. an independently configured bridge for overcloud traffic insulates
> non-tenant node traffic against changes to neutron, including upgrades,
> neutron bugs, etc.
>  2. insulates neutron from changes to the underlying network that it
> doesn't "care" about.
>  3. In OVS only environments, the difference between a single nic
> environment and one where there is a dedicated nic for external traffic
> is, instead of a patch port from br-ctl to br-ex, it is directly
> connected to the nic for the external traffic. 
> 
> Even without the issue that instigated this message, I think that this
> is a change worth considering. 
> 
> 
> Cheers,
> 
> 
> Brent
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Brent,

Thanks for taking the time to analyze this situation. I see a couple of
potential issues with the topology you are suggesting.

First of all, what about the scenario where a system has only 2x10Gb
NICs, and the operator wishes to bond these together on a single
bridge? If we require separate bridges for Neutron than we do for the
control plane, then it would be impossible to configure a system with
only 2 NICs in a fault-tolerant way.

Second, there will be a large percentage of users who already have a
shared br-ex that wish to upgrade. Do we tell them that due to an
architectural change, they now must redeploy a new cloud with a new
topology to use the latest version?
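For reference, the proposed layout might look roughly like this in
os-net-config syntax. This is my sketch of the idea, not a tested
configuration -- in particular, the `ovs_patch_port` stanzas assume that
tripleo (not neutron) would own the patch between the two bridges:

```yaml
network_config:
  - type: ovs_bridge
    name: br-ctl            # tripleo-managed; neutron knows nothing about it
    use_dhcp: false
    addresses:
      - ip_netmask: {get_param: ControlPlaneIp}   # illustrative
    members:
      - type: interface
        name: nic1
        primary: true
      - type: ovs_patch_port
        name: br-ctl-patch
        peer: br-ex-patch
  - type: ovs_bridge
    name: br-ex             # what overcloud neutron is configured to use
    members:
      - type: ovs_patch_port
        name: br-ex-patch
        peer: br-ctl-patch
```

In the dedicated-NIC case, the br-ex patch port would simply be replaced
by the external interface.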

So while I would be on-board with changing our default for new
installations, I don't think that relieves us of the responsibility to

[openstack-dev] [TripleO] dhclient spawned by dhcp-all-interfaces doesn't exit when os-net-config runs

2016-11-09 Thread Dan Sneddon
I just opened a bug [1] for behavior which has recently been observed
when deploying nodes with TripleO. The problem is that the dhclient
processes that are being started by the dhcp-all-interfaces element in
disk-image-builder are not stopping after os-net-config runs.

Step 1) Image deploys with udev rule to create
dhcp-interface@.service, which configures each interface via DHCP.

Step 2) The deployment scripts run, including os-net-config, which
configures and restarts the interfaces. The udev rule is removed.

At this point, the dhclient which was created by the udev rule is no
longer needed, except it is still running and configuring IP and routes
on the interface, possibly in conflict with the desired configuration.
For instance, the same IP appearing on a bridge and on an interface, or
a rogue default route and IP that hijack the default route.

I believe this behavior is new in RHEL 7.3, but I don't know if any
versions of CentOS are affected yet (testing is in progress).

Running 'systemctl restart network' after os-net-config runs will kill
the dhclient processes, so inserting that into the scripts after
os-net-config is run is one possible workaround, although the brief
interruption in networking might cause unknown issues in
high-availability environments.

Does anyone have a suggestion for a kinder, gentler, less hacky
approach than either restarting the network service or running kill on
the dhclient processes? Also, does anyone have any idea why running
"ifdown <interface>" followed by "ifup <interface>" doesn't stop the dhclient
process started by the udev rule? Or why this behavior appears to be
new to RHEL 7.3?

[1] - https://bugs.launchpad.net/tripleo/+bug/1640598

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] proposing Michele Baldessari part of core team

2016-11-09 Thread Dan Sneddon
On 11/04/2016 10:40 AM, Emilien Macchi wrote:
> MIchele Baldessari (bandini on IRC) has consistently demonstrated high
> levels of contributions in TripleO projects, specifically in High
> Availability area, where he's a guru for us (I still don't understand
> how pacemaker works, but hopefully he does).
> 
> He has done incredible work on composable services and also on
> improving our HA configuration by following reference architectures.
> Always here during meetings, and on #tripleo to give support to our
> team, he's a great team player and we are lucky to have him onboard.
> I believe he would be a great core reviewer on HA-related work and we
> expect his review stats to continue improving as his scope broadens
> over time.
> 
> As usual, feedback is welcome and please vote for this proposal!
> 
> Thanks,
> 

+1 from me, Michele has been contributing for a long time.

-- 
Dan Sneddon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Your draft logo & a sneak peek

2016-10-25 Thread Dan Sneddon
While I appreciate the desire to have stylistically consistent logos, my eyes 
don't perceive this logo as an owl. Hummingbird, maybe, or a sparrow or perhaps 
a parrot, but I don't see an owl. The current logo is not easy to mistake for 
another brand of fowl. 

>> Dan Sneddon  |  Senior Principal OpenStack Engineer  |  dsned...@redhat.com


> On Oct 25, 2016, at 11:56 AM, Steven Hardy  wrote:
> 
> Hi team,
> 
> I recently received a draft version of our project logo, using the mascot
> we selected together. A final version (and some cool swag) will be ready
> for us before the Project Team Gathering in February. Before they make our
> logo final, they want to be sure we're happy with our mascot.
> 
> We can discuss any concerns in Barcelona and you can also provide direct
> feedback to the designers: http://tinyurl.com/OSmascot . Logo feedback is
> due Friday, Nov. 11.
> 
> To get a sense of how ours stacks up to others, check out this sneak
> preview of several dozen draft logos from our community:
> https://youtu.be/JmMTCWyY8Y4.
> 
> The only comment I have made is this logo does lose some of the OoO imagery
> we had with the previous owl version - please feel free to provide feedback
> of your own via the url above, thanks!
> 
> Thanks!
> 
> Steve
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][puppet] Spine/Leaf: Adding Multiple Subnets to ironic-inspector-dnsmasq

2016-10-19 Thread Dan Sneddon
On 10/19/2016 10:33 AM, Dan Sneddon wrote:
> I am doing research to support the spec for TripleO deployment on
> routed networks [1]. I would like some input on how to represent
> multiple subnet ranges for the provisioning network in undercloud.conf.
> 
> The Ironic Inspector dnsmasq service is currently configured using the
> puppet-ironic module, and the range of IP addresses is taken directly
> from undercloud.conf. For example, here is the .erb which configures
> /etc/ironic-inspector/dnsmasq.conf if using TFTP [2]:
> 
> ## inspector_dnsmasq_tftp.erb ##
> port=0
> interface=<%= @dnsmasq_interface %>
> bind-interfaces
> dhcp-range=<%= @dnsmasq_ip_range %>,29
> dhcp-boot=pxelinux.0,localhost.localdomain,<%= @dnsmasq_local_ip %>
> dhcp-sequential-ip
> 
> 
> Since there is only one dnsmasq_ip_range, only a single subnet is
> served via DHCP. What I would like to do is extend the undercloud.conf
> to support multiple IP ranges, and I'm looking for input on the best
> way to represent the data.
> 
> I am not sure if we can be fully backwards-compatible here. My gut
> feeling is no, unless we leave the existing parameters as-is and add
> something like an "additional_inspection_ipranges" parameter. The data
> that will need to be represented for each subnet is:
> 
> * Network subnet
> * Start and end of inspection IP range
> * Subnet mask (could be determined by parsing cidr, like 172.20.1.0/24)
> * Gateway router for the subnet
> 
> We could potentially represent this data as a JSON, or as a list of
> strings. Here are some potential examples:
> 
> JSON:
> additional_inspection_ipranges = [
>   {
> "subnet": "172.20.1.0/24",
> "start": "172.20.1.100",
> "end": "172.20.1.120",
> "gateway": "172.20.1.254"
>   },
>   {
> "subnet": "172.20.2.0/24",
> "start": "172.20.2.100",
> "end": "172.20.2.120",
> "gateway": "172.20.2.254"
>   }
> ]
> 
> String:
> additional_inspection_ipranges =
> "172.20.1.0,172.20.1.100,172.20.1.120,255.255.255.0,172.20.1.254;172.20.2.0,172.20.2.100,172.20.2.120,255.255.255.0,172.20.2.254"
> 
> Either of these might get unwieldy depending on the number of networks.
> Perhaps we could have a repeating parameter? Something like this:
> 
> additional_inspection_iprange =
> "172.20.1.0,172.20.1.100,172.20.1.120,255.255.255.0,172.20.1.254"
> additional_inspection_iprange =
> "172.20.2.0,172.20.2.100,172.20.2.120,255.255.255.0,172.20.2.254"
> 
> I would like some feedback about how to represent this data in a way
> that it can be easily parsed by Puppet, while remaining readable. Any
> suggestions would be very much appreciated.
> 
> [1] - https://review.openstack.org/#/c/377088
> [2] -
> https://github.com/openstack/puppet-ironic/blob/master/templates/inspector_dnsmasq_tftp.erb
> 

After writing this, I realized that I neglected to present another data
point. The Neutron DHCP agent handles this situation very well. If
there are multiple subnets that belong to a network, the ranges are all
included, and each range has a tag that matches a default-gateway that
is taken from the subnet object.

Would it be feasible to modify ironic-inspector and
ironic-inspector-dnsmasq to instead get their configuration from a given
network? So if the provisioning network is "ctlplane", then the values
would be taken from the "ctlplane" network. This would allow us to
manipulate the values for ironic-inspector-dnsmasq via Heat
templates or even the Neutron command-line/python client.
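A minimal sketch of that idea, with plain dicts standing in for real Neutron subnet objects (the helper name, tag scheme, and sample addresses are all hypothetical; neutron-dhcp-agent does something analogous internally):

```python
def dnsmasq_ranges(subnets):
    """Render tagged dnsmasq dhcp-range/router lines, one tag per subnet,
    mirroring how the Neutron DHCP agent handles multi-subnet networks."""
    lines = []
    for i, sn in enumerate(subnets):
        tag = 'subnet%d' % i
        # One dhcp-range per allocation pool, tagged so options can match it.
        for pool in sn['allocation_pools']:
            lines.append('dhcp-range=set:%s,%s,%s'
                         % (tag, pool['start'], pool['end']))
        # Per-subnet default gateway taken from the subnet object.
        lines.append('dhcp-option=tag:%s,option:router,%s'
                     % (tag, sn['gateway_ip']))
    return lines

# Sample data shaped like Neutron subnet attributes (hypothetical values):
ctlplane = [
    {'allocation_pools': [{'start': '172.20.1.100', 'end': '172.20.1.120'}],
     'gateway_ip': '172.20.1.254'},
    {'allocation_pools': [{'start': '172.20.2.100', 'end': '172.20.2.120'}],
     'gateway_ip': '172.20.2.254'},
]
for line in dnsmasq_ranges(ctlplane):
    print(line)
```

In a real implementation the input would come from a Neutron subnet listing for the provisioning network rather than static data.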

The advantage of this approach is that it may have side benefits for
tenant bare metal use cases.

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter



[openstack-dev] [tripleo][ironic][puppet] Spine/Leaf: Adding Multiple Subnets to ironic-inspector-dnsmasq

2016-10-19 Thread Dan Sneddon
I am doing research to support the spec for TripleO deployment on
routed networks [1]. I would like some input on how to represent
multiple subnet ranges for the provisioning network in undercloud.conf.

The Ironic Inspector dnsmasq service is currently configured using the
puppet-ironic module, and the range of IP addresses is taken directly
from undercloud.conf. For example, here is the .erb which configures
/etc/ironic-inspector/dnsmasq.conf if using TFTP [2]:

## inspector_dnsmasq_tftp.erb ##
port=0
interface=<%= @dnsmasq_interface %>
bind-interfaces
dhcp-range=<%= @dnsmasq_ip_range %>,29
dhcp-boot=pxelinux.0,localhost.localdomain,<%= @dnsmasq_local_ip %>
dhcp-sequential-ip


Since there is only one dnsmasq_ip_range, only a single subnet is
served via DHCP. What I would like to do is extend the undercloud.conf
to support multiple IP ranges, and I'm looking for input on the best
way to represent the data.

I am not sure if we can be fully backwards-compatible here. My gut
feeling is no, unless we leave the existing parameters as-is and add
something like an "additional_inspection_ipranges" parameter. The data
that will need to be represented for each subnet is:

* Network subnet
* Start and end of inspection IP range
* Subnet mask (could be determined by parsing cidr, like 172.20.1.0/24)
* Gateway router for the subnet

We could potentially represent this data as a JSON, or as a list of
strings. Here are some potential examples:

JSON:
additional_inspection_ipranges = [
  {
"subnet": "172.20.1.0/24",
"start": "172.20.1.100",
"end": "172.20.1.120",
"gateway": "172.20.1.254"
  },
  {
"subnet": "172.20.2.0/24",
"start": "172.20.2.100",
"end": "172.20.2.120",
"gateway": "172.20.2.254"
  }
]

String:
additional_inspection_ipranges =
"172.20.1.0,172.20.1.100,172.20.1.120,255.255.255.0,172.20.1.254;172.20.2.0,172.20.2.100,172.20.2.120,255.255.255.0,172.20.2.254"

Either of these might get unwieldy depending on the number of networks.
Perhaps we could have a repeating parameter? Something like this:

additional_inspection_iprange =
"172.20.1.0,172.20.1.100,172.20.1.120,255.255.255.0,172.20.1.254"
additional_inspection_iprange =
"172.20.2.0,172.20.2.100,172.20.2.120,255.255.255.0,172.20.2.254"

I would like some feedback about how to represent this data in a way
that it can be easily parsed by Puppet, while remaining readable. Any
suggestions would be very much appreciated.
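For what it's worth, the JSON form is straightforward to consume from Python, and the netmask can be derived from the CIDR rather than listed separately. A rough sketch (sample values only, helper name hypothetical):

```python
import ipaddress
import json

def parse_ipranges(raw):
    """Parse the proposed JSON additional_inspection_ipranges value into
    dnsmasq dhcp-range lines; the netmask is derived from the CIDR."""
    lines = []
    for r in json.loads(raw):
        netmask = ipaddress.ip_network(r['subnet']).netmask
        lines.append('dhcp-range=%s,%s,%s' % (r['start'], r['end'], netmask))
    return lines

sample = '''[
  {"subnet": "172.20.1.0/24", "start": "172.20.1.100",
   "end": "172.20.1.120", "gateway": "172.20.1.254"},
  {"subnet": "172.20.2.0/24", "start": "172.20.2.100",
   "end": "172.20.2.120", "gateway": "172.20.2.254"}
]'''
for line in parse_ipranges(sample):
    print(line)
```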

[1] - https://review.openstack.org/#/c/377088
[2] -
https://github.com/openstack/puppet-ironic/blob/master/templates/inspector_dnsmasq_tftp.erb
-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter



[openstack-dev] [TripleO][CI] Isolated Network Testing

2016-10-12 Thread Dan Sneddon
I recently evaluated our needs for testing coverage for TripleO
isolated networking. I wanted to post my thoughts on the matter for
discussion, which will hopefully lead to a shared understanding of what
improvements we need to make. I think we can cover the majority of
end-user requirements by testing the following minimum scenarios:

1. single-nic-vlans (one nic, all VLANs trunked, great for virt and POCs)

2. Provisioning + bond (to test basic bonding functionality)

3. Bonded provisioning (perhaps one bond with all VLANs)

4. Spine and leaf (in the near future)

Within those four scenarios, we should ensure that we are testing both
IPv4 and IPv6, and both traditional Neutron SNAT/Floating IPs and DVR.

The first scenario is well covered. I think scenario 2 is covered by a
review posted by Ben Nemec recently [1].

I would very much like to see us testing scenario 3 with a resilient
bond for the provisioning interface as well. This used to require LACP
and fallback to a single link, but I believe recent changes to the PXE
boot images may allow this over links without special switch
configuration. I'm currently doing testing in my lab, and I hope I can work
with the TripleO CI team to help make this happen upstream.

Spine and leaf (routed networks) support may require specific
configuration of the routing hardware in order to support PXE booting
across router boundaries. Specifically, a DHCP proxy needs to be
configured in order to forward DHCP requests from a remote VLAN to the
Undercloud. If this is not possible in our bare-metal CI environments,
then we may need to develop a method of testing this in OVB.

I'm very interested in finding out about whether it may be possible to
have DHCP proxy (or "DHCP helper-address") configured on the router
hardware for CI VLANs. If we can deploy this in bare metal, I think it
will save us a lot of time and effort over recreating a routed
environment in OVB. I believe we could use Open Daylight or another
OpenFlow controller to simulate routers in virtual environments, or
perhaps use dnsmasq in DHCP proxy mode on the OVB host to forward
requests from the various bridges representing remote VLANs to the
Undercloud br-ctlplane bridge. But it would be a fair amount of work to
put that together.
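As one data point, dnsmasq itself can act as a DHCP relay (the dhcp-relay option, available in dnsmasq 2.73 and later), so the OVB-host idea might look roughly like this (all addresses are placeholders, not a tested configuration):

```
# Hypothetical dnsmasq relay config on the OVB host: forward DHCP
# requests heard on a leaf-network bridge to the Undercloud's
# br-ctlplane address. Requires dnsmasq >= 2.73.
port=0
interface=br-leaf1
dhcp-relay=172.20.1.1,192.168.24.1
```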

I don't believe we currently test IPv6 or DVR (please correct me if I'm
mistaken). Do we have plans in the works to add these to any jobs?

Finally, I wonder if we need to test any exotic configurations, such as
OVS+DPDK, OpenDaylight, etc.

OVS+DPDK would require compatible hardware. I'm interested in hearing
feedback about how critical this would be in the grand scheme of
things. It isn't yet clear to me that OVS+DPDK is going to have
widespread adoption, but I do recognize that there are some NFV users
that depend on this technology.

OpenDaylight does not require hardware changes AFAIK, but the drivers
and network interface config differs significantly from ML2+OVS. I'm
helping some ODL developers make changes that will allow deployment via
TripleO, but these changes won't be tested by CI.

Of course, there are elements of OVS+DPDK and ODL that get tested as
part of Neutron CI, but now that we are implementing TripleO-based
deployment of these technologies, I wonder if we should endeavor to
test them in CI. I suppose that begs the question, if we are testing
those, then why not Contrail, etc.? I don't know where we draw the
line, but it seems that we might want to at least periodically test
deploying some other Neutron drivers via TripleO.

[1] - https://review.openstack.org/#/c/385562

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter



Re: [openstack-dev] [tripleo] Getting the UI to talk with Undercloud API service endpoints

2016-10-07 Thread Dan Sneddon
Do you know how awesome it would be if you put this idea into a Blueprint at
http://blueprints.launchpad.net? That would be super-awesome.
File it under tripleo-ui project here if you have a few minutes:

https://blueprints.launchpad.net/specs/+new

- Original Message -
> Hi -
> 
> Great suggestions, Dan.
> 
> To recap, we followed that up with a few other ideas on irc and we eventually
> came to a point to test some of this, with slight modification.
> 
> UI also ships with a configuration file that can override the endpoint
> information received from Keystone. The file is located at
> /var/www/openstack-tripleo-ui/dist/tripleo_ui_config.js.
> 
> Part of making this work means enabling SSL on the Undercloud when the UI
> component is selected for installation in undercloud.conf. I think this is
> going to be a pretty reasonable request, but I'm interested in hearing
> feedback from this, and what other implications it may have, that I can't
> think of. The changes I made were all one-off, unmanaged changes, just to
> test this idea out. I'll be doing some more tests but will probably be
> looking for acceptance shortly.
> 
> Once SSL was enabled on the Undercloud, I made two edits to haproxy.cfg that
> were pretty straightforward: added a 'listen' server directive for UI to
> both terminate SSL and forward the request to Apache, and added a 'bind'
> statement for each service that UI expects to talk to (keystone, heat,
> ironic, mistral, swift, zaqar-websocket).
> 
> Once those configuration changes were made, I had a very pleasant experience
> using the UI. It worked exactly as expected. I think this might be a viable
> option.
> 
> Thoughts?
> 
> Thanks!
> -dant
> 
> 
> 
> 
> 
> 
> On Fri, Sep 30, 2016 at 12:21 PM, Dan Sneddon < dsned...@redhat.com > wrote:
> 
> 
> 
> Thinking about this a little more, creating a new unified endpoint on the
> same port as the UI doesn't solve the problem at hand. The UI will use the
> service catalog to find endpoints, so we would need to change the endpoints
> in the service catalog, which changes the flow for the underlying services
> as well.
> 
> Simply moving the control plane services to the external network won't work
> well in environments where the control plane is isolated and non-routed. The
> Undercloud can forward packets, but then becomes a bottleneck and a SPOF.
> 
> A few approaches come to mind, but none of these are quick fixes:
> 
> * Change the UI to get its list of endpoints from somewhere other than the
> service catalog and customize this with URLs that point to the Public VIP.
> Duplicate the services required for the UI on both control plane and
> external network. This might make it possible to make all connections over
> port 443, which is more firewall-friendly (which would be desirable or not
> depending on what kind of firewalling and traffic management is wanted).
> 
> * Relax the rp_filter settings on the Controllers so they accept packets
> destined for the external network on their control plane interfaces; add a
> static route to the Public VIP via the control plane VIP on all
> non-controller nodes. Modify the service catalog to point to the public VIP
> for the services the UI needs. This would need to be combined with a
> security review to determine if additional iptables rules are required.
> 
> * Split the service catalog, so we have an internal and an external view
> depending on where the query came from. I'm not sure how feasible this is.
> 
> Of these, I think the rp_filter settings are the only ones that could be done
> solely with TripleO code changes. That might be worth investigating.
> 
> 
> 
> 
> 
> 
> Dan Sneddon | Principal OpenStack Engineer | dsned...@redhat.com
> 
> On Sep 30, 2016, at 11:36 AM, Dan Sneddon < dsned...@redhat.com > wrote:
> 
> 
> 
> 
> I don't think we can rely on the Undercloud as an API proxy unless we address
> the lack of HA on the Undercloud.
> 
> Wouldn't this be better implemented as a single, name-based HAProxy
> instance running SSL on port 443 on the overcloud Public VIP? Then we could
> have the same endpoint for Horizon and every other API.
> 
> I actually implemented this scheme in Havana before I joined Red Hat. At the
> time, we had to have a complex HAProxy config and patch the end points to
> support name-based URLs. I think some work has been done in OpenStack now to
> support this model, but I'm not sure where it stands.
> 
> 
> 
> 
> 
> 
> Dan Sneddon | Principal OpenStack Engineer | dsned...@redhat.com
> 
> On Sep 30, 2016, at 9:44 AM, Dan Trainor < dtrai...@redhat.c

Re: [openstack-dev] [tripleo] Getting the UI to talk with Undercloud API service endpoints

2016-09-30 Thread Dan Sneddon
Thinking about this a little more, creating a new unified endpoint on the same 
port as the UI doesn't solve the problem at hand. The UI will use the service 
catalog to find endpoints, so we would need to change the endpoints in the 
service catalog, which changes the flow for the underlying services as well.

Simply moving the control plane services to the external network won't work 
well in environments where the control plane is isolated and non-routed. The 
Undercloud can forward packets, but then becomes a bottleneck and a SPOF. 

A few approaches come to mind, but none of these are quick fixes:

* Change the UI to get its list of endpoints from somewhere other than the 
service catalog and customize this with URLs that point to the Public VIP. 
Duplicate the services required for the UI on both control plane and external 
network. This might make it possible to make all connections over port 443, 
which is more firewall-friendly (which would be desirable or not depending on 
what kind of firewalling and traffic management is wanted).

* Relax the rp_filter settings on the Controllers so they accept packets 
destined for the external network on their control plane interfaces; add a 
static route to the Public VIP via the control plane VIP on all non-controller 
nodes. Modify the service catalog to point to the public VIP for the services 
the UI needs. This would need to be combined with a security review to 
determine if additional iptables rules are required. 

* Split the service catalog, so we have an internal and an external view 
depending on where the query came from. I'm not sure how feasible this is.

Of these, I think the rp_filter settings are the only ones that could be done 
solely with TripleO code changes. That might be worth investigating.

>> Dan Sneddon  |  Principal OpenStack Engineer  |  dsned...@redhat.com


> On Sep 30, 2016, at 11:36 AM, Dan Sneddon  wrote:
> 
> I don't think we can rely on the Undercloud as an API proxy unless we address 
> the lack of HA on the Undercloud. 
> 
> Wouldn't this be better implemented as a single, name-based HAProxy 
> instance running SSL on port 443 on the overcloud Public VIP? Then we could 
> have the same endpoint for Horizon and every other API. 
> 
> I actually implemented this scheme in Havana before I joined Red Hat. At the 
> time, we had to have a complex HAProxy config and patch the end points to 
> support name-based URLs. I think some work has been done in OpenStack now to 
> support this model, but I'm not sure where it stands. 
> 
>>> Dan Sneddon  |  Principal OpenStack Engineer  |  dsned...@redhat.com
> 
> 
>> On Sep 30, 2016, at 9:44 AM, Dan Trainor  wrote:
>> 
>> Hi -
>> 
>> I re-read your email a few times and like a few of the things that I see, 
>> but I'd love some more clarification.  As I read it, a few of these things 
>> conflict.  I believe you're suggesting that we don't make these services 
>> listen on a public interface due to security concerns (and of course, 
>> enabling SSL would break this because haproxy would listen on these 
>> interfaces/ports), but this approach would be acceptable if HAProxy was 
>> listening on these ports, terminating SSL, and sending them to each 
>> respective service backend.  Am I correct in understanding this?
>> 
>> Are you suggesting that these endpoint ports would still be externally 
>> accessible on the primary (public) interface of the Undercloud, but just 
>> managed by HAProxy?  I think that's an acceptable approach.  Even if these 
>> endpoints are, like you suggested, listening on a separate network or IP as 
>> the Undercloud's primary interface, at least then it would be easier for 
>> organizations to enforce network access policies to these ports, and 
>> subsequently, these services that UI needs to talk to directly.
>> 
>> I'm also perfectly fine with suggesting that if UI is installed, then this 
>> forces the Undercloud to be SSL enabled.  This would be a good way to move 
>> the idea of a secured, by default SSL-enabled Undercloud forward a little 
>> more, which is something we'd definitely like to see more.
>> 
>> Thoughts?
>> 
>> Thanks
>> -dant
>> 
>> 
>> 
>>> On Thu, Sep 29, 2016 at 9:01 AM, Dan Trainor  wrote:
>>> Hi, Juan -
>>> 
>>>> Actually, the third option is also not an option in the current undercloud 
>>>> setup, since making the services listen in 0.0.0.0 will break HAProxy. So 
>>>> when you're deploying with TLS things will break since we use HAProxy to 
>>>> terminate TLS connections.
>>> 
>>> Ah, that's correct, isn&#

Re: [openstack-dev] [tripleo] Getting the UI to talk with Undercloud API service endpoints

2016-09-30 Thread Dan Sneddon
I don't think we can rely on the Undercloud as an API proxy unless we address 
the lack of HA on the Undercloud. 

Wouldn't this be better implemented as a single, name-based HAProxy instance 
running SSL on port 443 on the overcloud Public VIP? Then we could have the 
same endpoint for Horizon and every other API. 

I actually implemented this scheme in Havana before I joined Red Hat. At the 
time, we had to have a complex HAProxy config and patch the end points to 
support name-based URLs. I think some work has been done in OpenStack now to 
support this model, but I'm not sure where it stands. 

>> Dan Sneddon  |  Principal OpenStack Engineer  |  dsned...@redhat.com


> On Sep 30, 2016, at 9:44 AM, Dan Trainor  wrote:
> 
> Hi -
> 
> I re-read your email a few times and like a few of the things that I see, but 
> I'd love some more clarification.  As I read it, a few of these things 
> conflict.  I believe you're suggesting that we don't make these services 
> listen on a public interface due to security concerns (and of course, 
> enabling SSL would break this because haproxy would listen on these 
> interfaces/ports), but this approach would be acceptable if HAProxy was 
> listening on these ports, terminating SSL, and sending them to each 
> respective service backend.  Am I correct in understanding this?
> 
> Are you suggesting that these endpoint ports would still be externally 
> accessible on the primary (public) interface of the Undercloud, but just 
> managed by HAProxy?  I think that's an acceptable approach.  Even if these 
> endpoints are, like you suggested, listening on a separate network or IP as 
> the Undercloud's primary interface, at least then it would be easier for 
> organizations to enforce network access policies to these ports, and 
> subsequently, these services that UI needs to talk to directly.
> 
> I'm also perfectly fine with suggesting that if UI is installed, then this 
> forces the Undercloud to be SSL enabled.  This would be a good way to move 
> the idea of a secured, by default SSL-enabled Undercloud forward a little 
> more, which is something we'd definitely like to see more.
> 
> Thoughts?
> 
> Thanks
> -dant
> 
> 
> 
>> On Thu, Sep 29, 2016 at 9:01 AM, Dan Trainor  wrote:
>> Hi, Juan -
>> 
>>> Actually, the third option is also not an option in the current undercloud 
>>> setup, since making the services listen in 0.0.0.0 will break HAProxy. So 
>>> when you're deploying with TLS things will break since we use HAProxy to 
>>> terminate TLS connections.
>> 
>> Ah, that's correct, isn't it.  
>> 
>>  
>>> On the other hand, we also don't want services to listen on 0.0.0.0 since 
>>> that would become a security concern. We should instead be blocking 
>>> everything we don't need to have exposed (as we've done with the 
>>> undercloud's database and rabbitmq).
>> 
>> I don't disagree that we want to focus on least privilege, but we do have 
>> documentation that talks about how to deal with this.  I addressed this in 
>> my previous message, even if only to illustrate my understanding that there 
>> would be concerns around this.  See more comments about this down below...
>>  
>>> 
>>> Now, as we're trying to move to have more convergence between the 
>>> undercloud and the overcloud (trying to deploy the undercloud via heat 
>>> also, as Dan Prince has mentioned), I think some aspecs of this will bring 
>>> a solution to this problem. for instance, just like we already do in the 
>>> overcloud, we could have the undercloud's HAProxy always terminate the 
>>> endpoints, which I'm attempting with these two patches: 
>>> https://review.openstack.org/#/c/360366  
>>> https://review.openstack.org/#/c/360368 . Furthermore, we could have the 
>>> public endpoints in HAProxy listen on a separate network that's accessible 
>>> externally, also as we do in the overcloud. That way we don't need to 
>>> change the actual interfaces the services are listening on, and rely on 
>>> HAProxy, getting closer to how we do things in the overcloud. It seems to 
>>> me that it would help solve the problem.
>> 
>> I like that idea.  Though, this effectively shifts the process of having 
>> these services themselves listen on different IPs/ports and offloads that to 
>> HAProxy.  Whatever security concerns we have with opening up network 
>> communications would still exist, but I think that would be more broadly 
>> accepted considering these connections are no