Re: [openstack-dev] Request for comments for a possible solution

2014-12-20 Thread Mike Kolesnik
Hi Vivek,

Replies inline.

Regards,
Mike

- Original Message -
> Hi Mike,
> 
> Few clarifications inline [Vivek]
> 
> -Original Message-
> From: Mike Kolesnik [mailto:mkole...@redhat.com]
> Sent: Thursday, December 18, 2014 10:58 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for
> comments for a possible solution
> 
> Hi Mathieu,
> 
> Thanks for the quick reply, some comments inline..
> 
> Regards,
> Mike
> 
> - Original Message -
> > Hi mike,
> >
> > thanks for working on this bug :
> >
> > On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton  wrote:
> > >
> > >
> > > On 12/18/14, 2:06 PM, "Mike Kolesnik"  wrote:
> > >
> > >>Hi Neutron community members.
> > >>
> > >>I wanted to query the community about a proposal of how to fix HA
> > >>routers not working with L2Population (bug 1365476[1]).
> > >>This bug is important to fix especially if we want to have HA
> > >>routers and DVR routers working together.
> > >>
> > >>[1] https://bugs.launchpad.net/neutron/+bug/1365476
> > >>
> > >>What's happening now?
> > >>* HA routers use distributed ports, i.e. the port with the same IP &
> > >>MAC
> > >>  details is applied on all nodes where an L3 agent is hosting this
> > >>router.
> > >>* Currently, the port details have a binding pointing to an
> > >>arbitrary node
> > >>  and this is not updated.
> > >>* L2pop takes this "potentially stale" information and uses it to create:
> > >>  1. A tunnel to the node.
> > >>  2. An FDB entry that directs traffic for that port to that node.
> > >>  3. If ARP responder is on, ARP requests will not traverse the network.
> > >>* Problem is, the master router wouldn't necessarily be running on
> > >>the
> > >>  reported agent.
> > >>  This means that traffic would not reach the master node but some
> > >>arbitrary
> > >>  node where the router master might be running, but might be in
> > >>another
> > >>  state (standby, fail).
> > >>
> > >>What is proposed?
> > >>Basically the idea is not to do L2Pop for HA router ports that
> > >>reside on the tenant network.
> > >>Instead, we would create a tunnel to each node hosting the HA router
> > >>so that the normal learning switch functionality would take care of
> > >>switching the traffic to the master router.
> > >
> > > In Neutron we just ensure that the MAC address is unique per network.
> > > Could a duplicate MAC address cause problems here?
> >
> > gary, AFAIU, from a Neutron POV, there is only one port, which is the
> > router Port, which is plugged twice. One time per port.
> > I think that the capacity to bind a port to several host is also a
> > prerequisite for a clean solution here. This will be provided by
> > patches to this bug :
> > https://bugs.launchpad.net/neutron/+bug/1367391
> >
> >
> > >>This way no matter where the master router is currently running, the
> > >>data plane would know how to forward traffic to it.
> > >>This solution requires changes on the controller only.
> > >>
> > >>What's to gain?
> > >>* Data plane only solution, independent of the control plane.
> > >>* Lowest failover time (same as HA routers today).
> > >>* High backport potential:
> > >>  * No APIs changed/added.
> > >>  * No configuration changes.
> > >>  * No DB changes.
> > >>  * Changes localized to a single file and limited in scope.
> > >>
> > >>What's the alternative?
> > >>An alternative solution would be to have the controller update the
> > >>port binding on the single port so that the plain old L2Pop happens
> > >>and notifies about the location of the master router.
> > >>This basically negates all the benefits of the proposed solution,
> > >>but is wider.
> > >>This solution depends on the report-ha-router-master spec which is
> > >>currently in the implementation phase.
> > >>
> > >>It's important to note that these two solutions don't collide and
> > >>could be done independently. The one I'm proposing just makes more
> > >>sense from an HA viewpoint because of it's benefits which fit the HA
> > >>methodology of being fast & having as little outside dependency as
> > >>possible.
> > >>It could be done as an initial solution which solves the bug for
> > >>mechanism drivers that support normal learning switch (OVS), and
> > >>later kept as an optimization to the more general, controller based,
> > >>solution which will solve the issue for any mechanism driver working
> > >>with L2Pop (Linux Bridge, possibly others).
> > >>
> > >>Would love to hear your thoughts on the subject.
> >
> > You will have to clearly update the doc to mention that deployment
> > with Linuxbridge+l2pop are not compatible with HA.
> 
> Yes this should be added and this is already the situation right now.
> However if anyone would like to work on a LB fix (the general one or some
> specific one) I would gladly help with reviewing it.
> 
> >
> > Moreover, this solution is downgrading the l2pop solution, by
> > disabling the ARP-responder when

Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution

2014-12-20 Thread Mike Kolesnik
Hi Mathieu,

Comments inline

Regards,
Mike

- Original Message -
> Mike,
> 
> I'm not even sure that your solution works without being able to bind
> a router HA port to several hosts.
> What's happening currently is that you :
> 
> 1.create the router on two l3agent.
> 2. those l3agent trigger the sync_router() on the l3plugin.
> 3. l3plugin.sync_routers() will trigger l2plugin.update_port(host=l3agent).
> 4. ML2 will bind the port to the host mentioned in the last update_port().
> 
> From a l2pop perspective, this will result in creating only one tunnel
> to the host lastly specified.
> I can't find any code that forces that only the master router binds
> its router port. So we don't even know if the host which binds the
> router port is hosting the master router or the slave one, and so if
> l2pop is creating the tunnel to the master or to the slave.
> 
> Can you confirm that the above sequence is correct? or am I missing
> something?

Are you referring to the alternative solution?

In that case it seems that you're correct so that there would need to be
awareness of the master router at some level there as well.
I can't say for sure as I've been thinking on the proposed solution with
no FDBs so there would be some issues with the alternative that need to
be ironed out.

> 
> Without the capacity to bind a port to several hosts, l2pop won't be
> able to create tunnel correctly, that's the reason why I was saying
> that a prerequisite for a smart solution would be to first fix the bug
> :
> https://bugs.launchpad.net/neutron/+bug/1367391
> 
> DVR Had the same issue. Their workaround was to create a new
> port_binding tables, that manages the capacity for one DVR port to be
> bound to several host.
> As mentioned in the bug 1367391, this adding a technical debt in ML2,
> which has to be tackle down in priority from my POV.

I agree that this would simplify work but even without this bug fixed we
can achieve either solution.

We have already knowledge of the agents hosting a router so this is
completely doable without waiting for fix for bug 1367391.

Also from my understanding the bug 1367391 is targeted at DVR only, not
at HA router ports.

> 
> 
> On Thu, Dec 18, 2014 at 6:28 PM, Mike Kolesnik  wrote:
> > Hi Mathieu,
> >
> > Thanks for the quick reply, some comments inline..
> >
> > Regards,
> > Mike
> >
> > - Original Message -
> >> Hi mike,
> >>
> >> thanks for working on this bug :
> >>
> >> On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton  wrote:
> >> >
> >> >
> >> > On 12/18/14, 2:06 PM, "Mike Kolesnik"  wrote:
> >> >
> >> >>Hi Neutron community members.
> >> >>
> >> >>I wanted to query the community about a proposal of how to fix HA
> >> >>routers
> >> >>not
> >> >>working with L2Population (bug 1365476[1]).
> >> >>This bug is important to fix especially if we want to have HA routers
> >> >>and
> >> >>DVR
> >> >>routers working together.
> >> >>
> >> >>[1] https://bugs.launchpad.net/neutron/+bug/1365476
> >> >>
> >> >>What's happening now?
> >> >>* HA routers use distributed ports, i.e. the port with the same IP & MAC
> >> >>  details is applied on all nodes where an L3 agent is hosting this
> >> >>router.
> >> >>* Currently, the port details have a binding pointing to an arbitrary
> >> >>node
> >> >>  and this is not updated.
> >> >>* L2pop takes this "potentially stale" information and uses it to
> >> >>create:
> >> >>  1. A tunnel to the node.
> >> >>  2. An FDB entry that directs traffic for that port to that node.
> >> >>  3. If ARP responder is on, ARP requests will not traverse the network.
> >> >>* Problem is, the master router wouldn't necessarily be running on the
> >> >>  reported agent.
> >> >>  This means that traffic would not reach the master node but some
> >> >>arbitrary
> >> >>  node where the router master might be running, but might be in another
> >> >>  state (standby, fail).
> >> >>
> >> >>What is proposed?
> >> >>Basically the idea is not to do L2Pop for HA router ports that reside on
> >> >>the
> >> >>tenant network.
> >> >>Instead, we would create a tunnel to each node hosting the HA router so
> >> >>that
> >> >>the normal learning switch functionality would take care of switching
> >> >>the
> >> >>traffic to the master router.
> >> >
> >> > In Neutron we just ensure that the MAC address is unique per network.
> >> > Could a duplicate MAC address cause problems here?
> >>
> >> gary, AFAIU, from a Neutron POV, there is only one port, which is the
> >> router Port, which is plugged twice. One time per port.
> >> I think that the capacity to bind a port to several host is also a
> >> prerequisite for a clean solution here. This will be provided by
> >> patches to this bug :
> >> https://bugs.launchpad.net/neutron/+bug/1367391
> >>
> >>
> >> >>This way no matter where the master router is currently running, the
> >> >>data
> >> >>plane would know how to forward traffic to it.
> >> >>This solution requires changes on the controller only.
> >> >>
> >> >>What's to g

Re: [openstack-dev] [qa] Fail to launch VM due to maximum number of ports exceeded

2014-12-20 Thread Timur Nurlygayanov
Hi Danny,

what about the global ports count and quotas?

On Sun, Dec 21, 2014 at 1:32 AM, Danny Choi (dannchoi) 
wrote:

>  Hi,
>
>  The default quota for port is 50.
>
>  +--++-+
>
> localadmin@qa4:~/devstack$ neutron quota-show --tenant-id
> 1b2e5efaeeeb46f2922849b483f09ec1
>
> +-+---+
>
> | Field   | Value |
>
> +-+---+
>
> | floatingip  | 50|
>
> | network | 10|
>
> | port| 50|< 50
>
> | router  | 10|
>
> | security_group  | 10|
>
> | security_group_rule | 100   |
>
> | subnet  | 10|
>
> +-+---+
>
>   Total number of ports used so far is 40.
>
>  localadmin@qa4:~/devstack$ nova list
>
>
> +--+--+++-+---+
>
> | ID   | Name
> | Status | Task State | Power State | Networks
> |
>
>
> +--+--+++-+---+
>
> | 595940bd-3fb1-4ad3-8cc0-29329b464471 | VM-1
> | ACTIVE | -  | Running | private_net30=30.0.0.44
> |
>
> | 192ce36d-bc76-427a-a374-1f8e8933938f | VM-2
> | ACTIVE | -  | Running | private_net30=30.0.0.45
> |
>
> | 10ad850e-ed9d-42d9-8743-b8eda4107edc |
> cirros--10ad850e-ed9d-42d9-8743-b8eda4107edc | ACTIVE | -  |
> Running | private_net20=20.0.0.38; private=10.0.0.52
> |
>
> | 18209b40-09e7-4718-b04f-40a01a8e5993 |
> cirros--18209b40-09e7-4718-b04f-40a01a8e5993 | ACTIVE | -  |
> Running | private_net20=20.0.0.40; private=10.0.0.54
> |
>
> | 1ededa1e-c820-4915-adf2-4be8eedaf012 |
> cirros--1ededa1e-c820-4915-adf2-4be8eedaf012 | ACTIVE | -  |
> Running | private_net20=20.0.0.41; private=10.0.0.55
> |
>
> | 3688262e-d00f-4263-91a7-785c40f4ae0f |
> cirros--3688262e-d00f-4263-91a7-785c40f4ae0f | ACTIVE | -  |
> Running | private_net20=20.0.0.34; private=10.0.0.49
> |
>
> | 4620663f-e6e0-4af2-84c0-6108279cbbed |
> cirros--4620663f-e6e0-4af2-84c0-6108279cbbed | ACTIVE | -  |
> Running | private_net20=20.0.0.37; private=10.0.0.51
> |
>
> | 8f8252a3-fa23-47fc-8b32-7f7328ecfba2 |
> cirros--8f8252a3-fa23-47fc-8b32-7f7328ecfba2 | ACTIVE | -  |
> Running | private_net20=20.0.0.39; private=10.0.0.53
> |
>
> | a228f33b-0388-464e-af49-b55af9601f56 |
> cirros--a228f33b-0388-464e-af49-b55af9601f56 | ACTIVE | -  |
> Running | private_net20=20.0.0.42; private=10.0.0.56
> |
>
> | def5a255-0c9d-4df0-af02-3944bf5af2db |
> cirros--def5a255-0c9d-4df0-af02-3944bf5af2db | ACTIVE | -  |
> Running | private_net20=20.0.0.36; private=10.0.0.50
> |
>
> | e1470813-bf4c-4989-9a11-62da47a5c4b4 |
> cirros--e1470813-bf4c-4989-9a11-62da47a5c4b4 | ACTIVE | -  |
> Running | private_net20=20.0.0.33; private=10.0.0.48
> |
>
> | f63390fa-2169-45c0-bb02-e42633a08b8f |
> cirros--f63390fa-2169-45c0-bb02-e42633a08b8f | ACTIVE | -  |
> Running | private_net20=20.0.0.35; private=10.0.0.47
> |
>
> | 2c34956d-4bf9-45e5-a9de-84d3095ee719 |
> vm--2c34956d-4bf9-45e5-a9de-84d3095ee719 | ACTIVE | -  |
> Running | private_net30=30.0.0.39; private_net50=50.0.0.29;
> private_net40=40.0.0.29 |
>
> | 680c55f5-527b-49e3-847c-7794e1f8e7a8 |
> vm--680c55f5-527b-49e3-847c-7794e1f8e7a8 | ACTIVE | -  |
> Running | private_net30=30.0.0.41; private_net50=50.0.0.30;
> private_net40=40.0.0.31 |
>
> | ade4c14b-baf7-4e57-948e-095689f73ce3 |
> vm--ade4c14b-baf7-4e57-948e-095689f73ce3 | ACTIVE | -  |
> Running | private_net30=30.0.0.43; private_net50=50.0.0.32;
> private_net40=40.0.0.33 |
>
> | c91e426a-ed68-4659-89f6-df6d1154bb16 |
> vm--c91e426a-ed68-4659-89f6-df6d1154bb16 | ACTIVE | -  |
> Running | private_net30=30.0.0.42; private_net50=50.0.0.33;
> private_net40=40.0.0.32 |
>
> | cedd9984-79f0-46b3-897d-b301cfa74a1a |
> vm--cedd9984-79f0-46b3-897d-b301cfa74a1a | ACTIVE | -  |
> Running | private_net30=30.0.0.40; private_net50=50.0.0.31;
> private_net40=40.0.0.30 |
>
> | ec83e53f-556f-4e66-ab85-15a9e1ba9d28 |
> vm--ec83e53f-556f-4e66-ab85-15a9e1ba9d28 | ACTIVE | -  |
> Running | private_net30=30.0.0.38; private_net50=50.0.0.28;
> private_net40=40.0.0.28 |
>
>
> +--+-

[openstack-dev] [neutron][FWaaS] No weekly IRC meetings on Dec 24th and 31st

2014-12-20 Thread Sumit Naiksatam
Hi, We will skip the meetings for the next two weeks since most team
members are not available to meet. Please continue to keep the
discussions going over the mailing and lists and the IRC channel.
Check back on the wiki page for the next meeting and agenda [1].

Thanks,
~Sumit.
[1] https://wiki.openstack.org/wiki/Meetings/FWaaS

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] ratio: created to attached

2014-12-20 Thread Tom Barron
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Does anyone have real world experience, even data, to speak to the
question: in an OpenStack cloud, what is the likely ratio of (created)
cinder volumes to attached cinder volumes?

Thanks,

Tom Barron
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBAgAGBQJUlgybAAoJEGeKBzAeUxEHqKwIAJjL5TCP7s+Ev8RNr+5bWARF
zy3I216qejKdlM+a9Vxkl6ZWHMklWEhpMmQiUDMvEitRSlHpIHyhh1RfZbl4W9Fe
GVXn04sXIuoNPgbFkkPIwE/45CJC1kGIBDub/pr9PmNv9mzAf3asLCHje8n3voWh
d30If5SlPiaVoc0QNrq0paK7Yl1hh5jLa2zeV4qu4teRts/GjySJI7bR0k/TW5n4
e2EKxf9MhbxzjQ6QsgvWzxmryVIKRSY9z8Eg/qt7AfXF4Kx++MNo8VbX3AuOu1XV
cnHlmuGqVq71uMjWXCeqK8HyAP8nkn2cKnJXhRYli6qSwf9LxzjC+kMLn364IX4=
=AZ0i
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Fail to launch VM due to maximum number of ports exceeded

2014-12-20 Thread Danny Choi (dannchoi)
Hi,

The default quota for port is 50.


+--++-+

localadmin@qa4:~/devstack$ neutron quota-show --tenant-id 
1b2e5efaeeeb46f2922849b483f09ec1

+-+---+

| Field   | Value |

+-+---+

| floatingip  | 50|

| network | 10|

| port| 50|< 50

| router  | 10|

| security_group  | 10|

| security_group_rule | 100   |

| subnet  | 10|

+-+---+

Total number of ports used so far is 40.


localadmin@qa4:~/devstack$ nova list

+--+--+++-+---+

| ID   | Name   
  | Status | Task State | Power State | Networks
  |

+--+--+++-+---+

| 595940bd-3fb1-4ad3-8cc0-29329b464471 | VM-1   
  | ACTIVE | -  | Running | private_net30=30.0.0.44 
  |

| 192ce36d-bc76-427a-a374-1f8e8933938f | VM-2   
  | ACTIVE | -  | Running | private_net30=30.0.0.45 
  |

| 10ad850e-ed9d-42d9-8743-b8eda4107edc | 
cirros--10ad850e-ed9d-42d9-8743-b8eda4107edc | ACTIVE | -  | Running
 | private_net20=20.0.0.38; private=10.0.0.52|

| 18209b40-09e7-4718-b04f-40a01a8e5993 | 
cirros--18209b40-09e7-4718-b04f-40a01a8e5993 | ACTIVE | -  | Running
 | private_net20=20.0.0.40; private=10.0.0.54|

| 1ededa1e-c820-4915-adf2-4be8eedaf012 | 
cirros--1ededa1e-c820-4915-adf2-4be8eedaf012 | ACTIVE | -  | Running
 | private_net20=20.0.0.41; private=10.0.0.55|

| 3688262e-d00f-4263-91a7-785c40f4ae0f | 
cirros--3688262e-d00f-4263-91a7-785c40f4ae0f | ACTIVE | -  | Running
 | private_net20=20.0.0.34; private=10.0.0.49|

| 4620663f-e6e0-4af2-84c0-6108279cbbed | 
cirros--4620663f-e6e0-4af2-84c0-6108279cbbed | ACTIVE | -  | Running
 | private_net20=20.0.0.37; private=10.0.0.51|

| 8f8252a3-fa23-47fc-8b32-7f7328ecfba2 | 
cirros--8f8252a3-fa23-47fc-8b32-7f7328ecfba2 | ACTIVE | -  | Running
 | private_net20=20.0.0.39; private=10.0.0.53|

| a228f33b-0388-464e-af49-b55af9601f56 | 
cirros--a228f33b-0388-464e-af49-b55af9601f56 | ACTIVE | -  | Running
 | private_net20=20.0.0.42; private=10.0.0.56|

| def5a255-0c9d-4df0-af02-3944bf5af2db | 
cirros--def5a255-0c9d-4df0-af02-3944bf5af2db | ACTIVE | -  | Running
 | private_net20=20.0.0.36; private=10.0.0.50|

| e1470813-bf4c-4989-9a11-62da47a5c4b4 | 
cirros--e1470813-bf4c-4989-9a11-62da47a5c4b4 | ACTIVE | -  | Running
 | private_net20=20.0.0.33; private=10.0.0.48|

| f63390fa-2169-45c0-bb02-e42633a08b8f | 
cirros--f63390fa-2169-45c0-bb02-e42633a08b8f | ACTIVE | -  | Running
 | private_net20=20.0.0.35; private=10.0.0.47|

| 2c34956d-4bf9-45e5-a9de-84d3095ee719 | 
vm--2c34956d-4bf9-45e5-a9de-84d3095ee719 | ACTIVE | -  | Running
 | private_net30=30.0.0.39; private_net50=50.0.0.29; private_net40=40.0.0.29 |

| 680c55f5-527b-49e3-847c-7794e1f8e7a8 | 
vm--680c55f5-527b-49e3-847c-7794e1f8e7a8 | ACTIVE | -  | Running
 | private_net30=30.0.0.41; private_net50=50.0.0.30; private_net40=40.0.0.31 |

| ade4c14b-baf7-4e57-948e-095689f73ce3 | 
vm--ade4c14b-baf7-4e57-948e-095689f73ce3 | ACTIVE | -  | Running
 | private_net30=30.0.0.43; private_net50=50.0.0.32; private_net40=40.0.0.33 |

| c91e426a-ed68-4659-89f6-df6d1154bb16 | 
vm--c91e426a-ed68-4659-89f6-df6d1154bb16 | ACTIVE | -  | Running
 | private_net30=30.0.0.42; private_net50=50.0.0.33; private_net40=40.0.0.32 |

| cedd9984-79f0-46b3-897d-b301cfa74a1a | 
vm--cedd9984-79f0-46b3-897d-b301cfa74a1a | ACTIVE | -  | Running
 | private_net30=30.0.0.40; private_net50=50.0.0.31; private_net40=40.0.0.30 |

| ec83e53f-556f-4e66-ab85-15a9e1ba9d28 | 
vm--ec83e53f-556f-4e66-ab85-15a9e1ba9d28 | ACTIVE | -  | Running
 | private_net30=30.0.0.38; private_net50=50.0.0.28; private_net40=40.0.0.28 |

+--+-

Re: [openstack-dev] [neutron][fwaas] neutron/agent/firewall.py

2014-12-20 Thread Miguel Ángel Ajo
Correct, this is for the security groups implementation

Miguel Ángel Ajo


On Friday, 19 de December de 2014 at 23:50, Sridar Kandaswamy (skandasw) wrote:

> +1 Mathieu. Paul, this is not related to FWaaS.
>  
> Thanks
>  
> Sridar
>  
> On 12/19/14, 2:23 PM, "Mathieu Gagné" http://web.com)> 
> wrote:
>  
> > On 2014-12-19 5:16 PM, Paul Michali (pcm) wrote:
> > >  
> > > This has a FirewallDriver and NoopFirewallDriver. Should this be moved
> > > into the neutron_fwaas repo?
> > >  
> >  
> >  
> > AFAIK, FirewallDriver is used to implement SecurityGroup:
> >  
> > See:
> > -  
> > https://github.com/openstack/neutron/blob/master/neutron/agent/firewall.py
> > #L26-L29
> > -  
> > https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptab
> > les_firewall.py#L45
> > -  
> > https://github.com/openstack/neutron/blob/master/neutron/plugins/hyperv/ag
> > ent/security_groups_driver.py#L25
> >  
> > This class looks to not be used by neutron-fwaas
> >  
> > --  
> > Mathieu
> >  
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >  
>  
>  
>  
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  
>  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] static files handling, bower/

2014-12-20 Thread Richard Jones
This is a good proposal, though I'm unclear on how the static_settings.py
file is populated by a developer (as opposed to a packager, which you
described).


 Richard

On Fri Dec 19 2014 at 12:59:37 AM Radomir Dopieralski <
openst...@sheep.art.pl> wrote:

> Hello,
>
> revisiting the package management for the Horizon's static files again,
> I would like to propose a particular solution. Hopefully it will allow
> us to both simplify the whole setup, and use the popular tools for the
> job, without losing too much of benefits of our current process.
>
> The changes we would need to make are as follows:
>
> * get rid of XStatic entirely;
> * add to the repository a configuration file for Bower, with all the
> required bower packages listed and their versions specified;
> * add to the repository a static_settings.py file, with a single
> variable defined, STATICFILES_DIRS. That variable would be initialized
> to a list of pairs mapping filesystem directories to URLs within the
> /static tree. By default it would only have a single mapping, pointing
> to where Bower installs all the stuff by default;
> * add a line "from static_settings import STATICFILES_DIRS" to the
> settings.py file;
> * add jobs both to run_tests.sh and any gate scripts, that would run Bower;
> * add a check on the gate that makes sure that all direct and indirect
> dependencies of all required Bower packages are listed in its
> configuration files (pretty much what we have for requirements.txt now);
>
> That's all. Now, how that would be used.
>
> 1. The developers will just use Bower the way they would normally use
> it, being able to install and test any of the libraries in any versions
> they like. The only additional thing is that they would need to add any
> additional libraries or changed versions to the Bower configuration file
> before they push their patch for review and merge.
>
> 2. The packagers can read the list of all required packages from the
> Bower configuration file, and make sure they have all the required
> libraries packages in the required versions.
>
> Next, they replace the static_settings.py file with one they have
> prepared manually or automatically. The file lists the locations of all
> the library directories, and, in the case when the directory structure
> differs from what Bower provides, even mappings between subdirectories
> and individual files.
>
> 3. Security patches need to go into the Bower packages directly, which
> is good for the whole community.
>
> 4. If we aver need a library that is not packaged for Bower, we will
> package it just as we had with the XStatic packages, only for Bower,
> which has much larger user base and more chance of other projects also
> using that package and helping with its testing.
>
> What do you think? Do you see any disastrous problems with this system?
> --
> Radomir Dopieralski
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Relocation of freshly deployed OpenStack by Fuel

2014-12-20 Thread Skowron, Pawel
-Need a little guidance with Mirantis version of OpenStack.

We want move freshly deployed cloud, without running instances but with HA 
option to other physical location.
The other location means different ranges of public network. And I really want 
move my installation without cloud redeployment.

What I think is required to change is public network settings. The public 
network settings can be divided in two different areas:
1) Floating ip range for external access to running VM instances
2) Fuel reserved pool for service endpoints (virtual ips and staticly assigned 
ips)

The first one 1) I believe but I haven't tested that _is not a problem_ but any 
insight will be invaluable.
I think it would be possible change to floating network ranges, as an admin in 
OpenStack itself. I will just add another "network" as external network.

But the second issue 2) is I am worried about. What I found the virtual ips 
(vip) are assigned to one of controller (primary role of HA)
and written in haproxy/pacemaker configuration. To allow access from public 
network by this ips I would probably need
to reconfigure all HA support services which have hardcoded vips in its 
configuration files, but it looks very complicated and fragile.

I have even found that public_vip is used in nova.conf (to get access to 
glance). So the relocation will require reconfiguration of nova and maybe other 
openstack services.
In the case of KeyStone it would be a real problem (ips are stored in database).

Has someone any experience with this kind of scenario and would be kind to 
share it ? Please help.

I have used Fuel 6.0 technical preview.

Pawel Skowron
pawel.skow...@intel.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [driver] DB operations

2014-12-20 Thread Amit Das
Got it Duncan.

I will re-check if I can arrive at any solution without accessing the
database.

Regards,
Amit
*CloudByte Inc.* 

On Sat, Dec 20, 2014 at 7:35 PM, Duncan Thomas 
wrote:

> No, I mean that if drivers are going to access database, then they should
> do it via a defined interface that limits what they can do to a sane set of
> operations. I'd still prefer that they didn't need extra access beyond the
> model update, but I don't know if that is possible.
>
> Duncan Thomas
> On Dec 19, 2014 6:43 PM, "Amit Das"  wrote:
>
>> Thanks Duncan.
>> Do you mean hepler methods in the specific driver class?
>> On 19 Dec 2014 14:51, "Duncan Thomas"  wrote:
>>
>>> So our general advice has historical been 'drivers should not be
>>> accessing the db directly'. I haven't had chance to look at your driver
>>> code yet, I've been on vacation, but my suggestion is that if you
>>> absolutely must store something in the admin metadata rather than somewhere
>>> that is covered by the model update (generally provider location and
>>> provider auth) then writing some helper methods that wrap the context bump
>>> and db call would be better than accessing it directly from the driver.
>>>
>>> Duncan Thomas
>>> On Dec 18, 2014 11:41 PM, "Amit Das"  wrote:
>>>
 Hi Stackers,

 I have been developing a Cinder driver for CloudByte storage and have
 come across some scenarios where the driver needs to do create, read &
 update operations on cinder database (volume_admin_metadata table). This is
 required to establish a mapping between OpenStack IDs with the backend
 storage IDs.

 Now, I have got some review comments w.r.t the usage of DB related
 operations esp. w.r.t raising the context to admin.

 In short, it has been advised not to use "*context.get_admin_context()*
 ".


 https://review.openstack.org/#/c/102511/15/cinder/volume/drivers/cloudbyte/cloudbyte.py

 However, i get errors trying to use the default context as shown below:

 *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher   File
 "/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 103, in
 is_admin_context*
 *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher return
 context.is_admin*
 *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher
 AttributeError: 'module' object has no attribute 'is_admin'*

 So what is the proper way to run these DB operations from within a
 driver ?


 Regards,
 Amit
 *CloudByte Inc.* 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [driver] DB operations

2014-12-20 Thread Duncan Thomas
No, I mean that if drivers are going to access database, then they should
do it via a defined interface that limits what they can do to a sane set of
operations. I'd still prefer that they didn't need extra access beyond the
model update, but I don't know if that is possible.

Duncan Thomas
On Dec 19, 2014 6:43 PM, "Amit Das"  wrote:

> Thanks Duncan.
> Do you mean hepler methods in the specific driver class?
> On 19 Dec 2014 14:51, "Duncan Thomas"  wrote:
>
>> So our general advice has historical been 'drivers should not be
>> accessing the db directly'. I haven't had chance to look at your driver
>> code yet, I've been on vacation, but my suggestion is that if you
>> absolutely must store something in the admin metadata rather than somewhere
>> that is covered by the model update (generally provider location and
>> provider auth) then writing some helper methods that wrap the context bump
>> and db call would be better than accessing it directly from the driver.
>>
>> Duncan Thomas
>> On Dec 18, 2014 11:41 PM, "Amit Das"  wrote:
>>
>>> Hi Stackers,
>>>
>>> I have been developing a Cinder driver for CloudByte storage and have
>>> come across some scenarios where the driver needs to do create, read &
>>> update operations on cinder database (volume_admin_metadata table). This is
>>> required to establish a mapping between OpenStack IDs with the backend
>>> storage IDs.
>>>
>>> Now, I have got some review comments w.r.t the usage of DB related
>>> operations esp. w.r.t raising the context to admin.
>>>
>>> In short, it has been advised not to use "*context.get_admin_context()*
>>> ".
>>>
>>>
>>> https://review.openstack.org/#/c/102511/15/cinder/volume/drivers/cloudbyte/cloudbyte.py
>>>
>>> However, i get errors trying to use the default context as shown below:
>>>
>>> *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher   File
>>> "/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 103, in
>>> is_admin_context*
>>> *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher return
>>> context.is_admin*
>>> *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher
>>> AttributeError: 'module' object has no attribute 'is_admin'*
>>>
>>> So what is the proper way to run these DB operations from within a
>>> driver ?
>>>
>>>
>>> Regards,
>>> Amit
>>> *CloudByte Inc.* 
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev