Re: [openstack-dev] [Ironic] NoFreeConductorWorker going away with move to Futurist?

2015-07-22 Thread Ruby Loo
On 22 July 2015 at 08:40, Dmitry Tantsur dtant...@redhat.com wrote:

 Hi all!

 Currently _spawn_worker in the conductor manager raises
 NoFreeConductorWorker if the pool is already full. That's not very user
 friendly (a potential source of retries in clients) and does not map well
 onto common async worker patterns.

 My understanding is that it was done to prevent the conductor thread from
 waiting on the pool to become free. If this is true, we no longer need it
 after the switch to Futurist, as Futurist maintains an internal queue for
 its green executor, just like the thread and process executors in the
 stdlib do. Instead of blocking the conductor, the request will be queued,
 and a user won't have to retry a vague (and rare!) HTTP 503 error.

 WDYT about me dropping this exception with move to Futurist?


For a bit more context, Dmitry has a spec to move to Futurist[1]. My
understanding is that Futurist will enqueue tasks if the worker pool is
full. So if we move to Futurist, we will have to drop the exception/change
the existing behaviour, unless we want Futurist to provide something so we
can continue with our existing behaviour.

I am fine with changing the behaviour so that tasks are enqueued and we
don't raise that NoFreeConductorWorker exception. That seems to make sense
to me. But I'm not an operator.

--ruby

[1]
http://specs.openstack.org/openstack/ironic-specs/specs/liberty/futurist.html
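
For illustration, here is a minimal, self-contained sketch (not Ironic
code) contrasting the two behaviours with futurist's executor API; the
pool size, backlog limit and task are made up for the example:

    import time

    import futurist
    from futurist import rejection

    def task(n):
        time.sleep(0.01)
        return n

    # Roughly today's behaviour: refuse work once a backlog limit is
    # reached. futurist raises RejectedSubmission, which a service could
    # map to an HTTP 503 for the client to retry.
    strict = futurist.ThreadPoolExecutor(
        max_workers=2, check_and_reject=rejection.reject_when_reached(2))

    # The proposed behaviour: no rejection policy, so submissions beyond
    # max_workers simply wait in the executor's internal queue (the green
    # executor queues the same way).
    relaxed = futurist.ThreadPoolExecutor(max_workers=2)
    futures = [relaxed.submit(task, i) for i in range(50)]  # all accepted
    assert [f.result() for f in futures] == list(range(50))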
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Kuryr] - Bringing Dockers networking to Neutron

2015-07-22 Thread Gal Sagie
Hello Everyone,

Project Kuryr is now officially part of Neutron's big tent.
Kuryr aims to be a generic Docker remote driver that connects Docker to the
Neutron APIs and provides containerised images for the common Neutron
plugins. We also plan to provide additional common networking service APIs
from other sub-projects in the Neutron big tent.

We hope to get everyone on board with this project and leverage this joint
effort for the many different solutions out there (instead of everyone
re-inventing the wheel for each different project).

We want to start holding a weekly IRC meeting to coordinate the different
requirements and tasks, so if you are interested in participating, please
share your time preference and we will try to find the best time for the
majority.

Remember we have people in Europe, Tokyo and the US, so we won't be able to
find a time that fits everyone.

The currently proposed time is *Wednesday at 16:00 UTC*.

Please reply with your suggested time/day.
Hope to see you all; we have an interesting and important project ahead of
us

Thanks
Gal.
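
For readers new to the docker side: libnetwork drives remote drivers over
a small JSON/HTTP protocol, so the surface Kuryr has to implement looks
roughly like the sketch below (illustrative only, not Kuryr code; the
bind address and port are arbitrary):

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route('/Plugin.Activate', methods=['POST'])
    def activate():
        # Tell docker which plugin APIs this driver speaks.
        return jsonify(Implements=['NetworkDriver'])

    @app.route('/NetworkDriver.CreateNetwork', methods=['POST'])
    def create_network():
        # A real driver would call the Neutron API here and record the
        # mapping from docker's NetworkID to the Neutron network.
        return jsonify()

    if __name__ == '__main__':
        app.run('127.0.0.1', 9999)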
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-22 Thread Carl Baldwin
It has been a week since this nomination.  I'm pleased to confirm
Cedric as a core reviewer for these areas of focus.  We look forward
to your continued contribution to the project!

Carl

On Wed, Jul 15, 2015 at 12:47 PM, Carl Baldwin c...@ecbaldwin.net wrote:
 As the Neutron L3 Lieutenant along with Kevin Benton for control
 plane, and Assaf Muller for testing, I would like to propose Cedric
 Brandily as a member of the Neutron core reviewer team under these
 areas of focus.

 Cedric has been a long time contributor to Neutron showing expertise
 particularly in these areas.  His knowledge and involvement will be
 very important to the project.  He is a trusted member of our
 community.  He has been reviewing consistently [1][2] and community
 feedback that I've received indicates that he is a solid reviewer.

 Existing Neutron core reviewers from these areas of focus, please vote
 +1/-1 for the addition of Cedric to the team.

 Thanks!
 Carl Baldwin

 [1] https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z
 [2] http://stackalytics.com/report/contribution/neutron-group/90

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-22 Thread Edgar Magana
Congratulations Cedric!!

Edgar




On 7/22/15, 8:37 AM, Carl Baldwin c...@ecbaldwin.net wrote:

It has been a week since this nomination.  I'm pleased to confirm
Cedric as a core reviewer for these areas of focus.  We look forward
to your continued contribution to the project!

Carl

On Wed, Jul 15, 2015 at 12:47 PM, Carl Baldwin c...@ecbaldwin.net wrote:
 As the Neutron L3 Lieutenant along with Kevin Benton for control
 plane, and Assaf Muller for testing, I would like to propose Cedric
 Brandily as a member of the Neutron core reviewer team under these
 areas of focus.

 Cedric has been a long time contributor to Neutron showing expertise
 particularly in these areas.  His knowledge and involvement will be
 very important to the project.  He is a trusted member of our
 community.  He has been reviewing consistently [1][2] and community
 feedback that I've received indicates that he is a solid reviewer.

 Existing Neutron core reviewers from these areas of focus, please vote
 +1/-1 for the addition of Cedric to the team.

 Thanks!
 Carl Baldwin

 [1] https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z
 [2] http://stackalytics.com/report/contribution/neutron-group/90

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Container Networking Spec

2015-07-22 Thread Daneyon Hansen (danehans)
All,

I have just submitted the container networking spec[1] for review. Thank you to 
everyone [2-3] who participated in contributing to the spec. If you are 
interested in container networking within Magnum, I urge you to review the spec 
and provide your feedback.

[1] https://review.openstack.org/#/c/204686
[2] 
http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-06-25-18.00.html
[3] 
http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-07-16-18.03.html

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-22 Thread Vikram Choudhary
Congrats Cedric!

On Wed, Jul 22, 2015 at 9:28 PM, Edgar Magana edgar.mag...@workday.com
wrote:

 Congratulations Cedric!!

 Edgar




 On 7/22/15, 8:37 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 It has been a week since this nomination.  I'm pleased to confirm
 Cedric as a core reviewer for these areas of focus.  We look forward
 to your continued contribution to the project!
 
 Carl
 
 On Wed, Jul 15, 2015 at 12:47 PM, Carl Baldwin c...@ecbaldwin.net
 wrote:
  As the Neutron L3 Lieutenant along with Kevin Benton for control
  plane, and Assaf Muller for testing, I would like to propose Cedric
  Brandily as a member of the Neutron core reviewer team under these
  areas of focus.
 
  Cedric has been a long time contributor to Neutron showing expertise
  particularly in these areas.  His knowledge and involvement will be
  very important to the project.  He is a trusted member of our
  community.  He has been reviewing consistently [1][2] and community
  feedback that I've received indicates that he is a solid reviewer.
 
  Existing Neutron core reviewers from these areas of focus, please vote
  +1/-1 for the addition of Cedric to the team.
 
  Thanks!
  Carl Baldwin
 
  [1] https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z
  [2] http://stackalytics.com/report/contribution/neutron-group/90
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] SSL feature status

2015-07-22 Thread Sergii Golovatiuk
Sheena,

We can turn SSL off right before the release. However, to test it better, I
would turn it on now; that way, every CI run helps to test it.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Jul 22, 2015 at 5:51 AM, Sheena Gregson sgreg...@mirantis.com
wrote:

 I believe the last time we discussed this, the majority of people were in
 favor of enabling SSL by default for all public endpoints, which would be
 my recommendation.



 As a reminder, this will mean that users will see a certificate warning
 the first time they access the Fuel UI.  We should document this as a known
 user experience and provide instructions for users to swap out the
 self-signed certificates that are enabled by default for their own internal
 CA certificates/3rd party certificates.



 *From:* Mike Scherbakov [mailto:mscherba...@mirantis.com]
 *Sent:* Wednesday, July 22, 2015 1:12 AM
 *To:* Stanislaw Bogatkin; Sheena Gregson
 *Cc:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [Fuel] SSL feature status



 Thanks Stas. My opinion is that it has to be enabled by default. I'd like
 product management to chime in here. Sheena?





 On Tue, Jul 21, 2015 at 11:06 PM Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi,



 we have a spec that says we disable SSL by default, and it was successfully
 merged that way; no one was against that decision. So, if we want to enable
 it by default now, we can. It should be done as part of our usual process, I
 think - I'll create a bug for it and fix it.



 The current status of the feature is:

 1. All of the SSL codebase is merged

 2. Some tests for it are being written by our QA team - not all of them are
 done yet.



 I'll update the blueprints as soon as possible. Sorry for the inconvenience.



 On Mon, Jul 20, 2015 at 8:44 PM, Mike Scherbakov mscherba...@mirantis.com
 wrote:

 Hi guys,

 did we enable SSL for the Fuel Master node and the OpenStack REST API
 endpoints by default? If not, let's enable it by default. I don't know why
 we should not.



 Looks like we need to update blueprints as well [1], [2], as they don't
 seem to reflect current status of the feature.



 [1] https://blueprints.launchpad.net/fuel/+spec/ssl-endpoints

 [2] https://blueprints.launchpad.net/fuel/+spec/fuel-ssl-endpoints



 Stas, as you've been working on it, can you please provide current status?



 Thanks,



 --

 Mike Scherbakov
 #mihgen



 --

 Mike Scherbakov
 #mihgen



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-22 Thread Neil Jerram
On 21/07/15 15:45, Salvatore Orlando wrote:
 On 21 July 2015 at 14:21, Neil Jerram neil.jer...@metaswitch.com
 mailto:neil.jer...@metaswitch.com wrote:


 You've probably seen Robert Kukura's comment on the related bug at
 https://bugs.launchpad.net/neutron/+bug/1458890/comments/30, and there
 is a useful detailed description of how the multiprovider extension
 works at
 https://bugs.launchpad.net/openstack-api-site/+bug/1242019/comments/3.
 I believe it is correct to say that using multiprovider would be an
 effective substitute for using multiple backing networks with
 different
 {network_type, physical_network, segmentation_id}, and that logically
 multiprovider is aiming to describe the same thing as this email
 thread
 is, i.e. non-overlay mapping onto a physical network composed of
 multiple segments.


 However, I believe multiprovider does not (per se) address the IP
 addressing requirement(s) of the multi-segment scenario.


 Indeed it does not. The multiprovider extension simply indicates that
 a network can be built using different L2 segments.
 It is then up to the operator to ensure that these segments are
 correct, and it's up to whatever is running in the backend to ensure
 that instances on the various segments can communicate with each other.

It seems to me that the existence of the multiprovider extension is an
important point for this discussion.  Multiprovider, as I understand it,
allows describing a network composed of multiple L2 segments with
implicit east-west routing between them.  Which is a large part
(although not all) of the requirement that the current discussion is
trying to meet.  Given that multiprovider already exists - and assuming
that it is generally accepted and approved of - surely we should not add
a competing model for the same thing, but should instead look at adding
any additional features to multiprovider that are needed to meet the
complete set of requirements?
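
For concreteness: with the multiprovider extension an admin can create one
network backed by several segments in a single call. The sketch below uses
python-neutronclient with placeholder credentials and segment values; it
illustrates the API shape and is not taken from this thread:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # One logical network composed of two VLAN segments on different
    # physical networks; reachability between the segments is left to
    # the backend/operator, as Salvatore notes above.
    neutron.create_network({'network': {
        'name': 'multisegment',
        'segments': [
            {'provider:network_type': 'vlan',
             'provider:physical_network': 'physnet1',
             'provider:segmentation_id': 101},
            {'provider:network_type': 'vlan',
             'provider:physical_network': 'physnet2',
             'provider:segmentation_id': 102},
        ]}})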


 I believe the ask here is for Neutron to provide this capability (the
 neutron reference control plane currently doesn't).

What exactly do you mean here?  I don't think the operators are asking
for a software implementation of the implicit east-west routing, for
example, because they already have physical kit that does that. 
(Although perhaps that kit might need some kind of instruction.)  For
the DHCP function (as I think you've written elsewhere) all that's
needed is to run a DHCP agent in each segment.

I don't know if the operators were aware of multiprovider - AFAIK, it
wasn't mentioned as a possible ingredient for this work, before Robert
Kukura's comment cited above - but if they were, I think they might just
be asking for the IP addressing points on top of that; both
segment-based and mobile.


 It is not yet entirely clear to me whether there's a real need to change
 the logical model, but IP addressing implications might be a reason, as
 pointed out by Neil.
  


 
  This proposal offers a clear separation between the statically bound
  and the mobile address blocks by associating the former with the
  backing networks and the latter with the front network.  The mobile
  addresses are modeled just like floating IPs are today but are
  implemented by some plugin code (possibly without NAT).

 Couldn't the mobile addresses be _exactly_ like floating IPs already
 are?  Why is anything different from floating IPs needed here?

 
  This proposal also provides some advantages for integrating dynamic
  routing.  Since each backing network will, by necessity, have a
  corresponding router in the infrastructure, the relationship between
  dynamic routing speaker, router, and network is clear in the model:
  network - speaker - router.


 OK. But how does that change because of backing networks? I believe the
 same relationship holds true for every network, or am I wrong?

Even if not for every network, it's presumably also an interesting
question for multiprovider networks.  (And so there may already be an
answer!)

  


 I'm not sure exactly what you mean here by 'dynamic routing', but I
 think this touches on a key point: can IP routing happen anywhere in a
 Neutron network, without being explicitly represented by a router
 object
 in the model?

 I think the answer to that should be yes.  


 But this would also mean that we should consider doing without the
 very concept of a router in Neutron.
 If we look at the scenarios we're describing here, I'd agree with you,
 but unfortunately Neutron is required to serve a wide variety of
 scenarios.

I didn't mean that there should _never_ be explicit router objects in
Neutron.  Clearly those already exist, for good reasons, and should
continue to serve their scenarios.

But I was suggesting that there might be some scenarios where routing
happened without a corresponding router object, and in fact it appears
that this is 

Re: [openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-22 Thread Miguel Lavalle
Congrats Cedric!

On Wed, Jul 22, 2015 at 10:37 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 It has been a week since this nomination.  I'm pleased to confirm
 Cedric as a core reviewer for these areas of focus.  We look forward
 to your continued contribution to the project!

 Carl

 On Wed, Jul 15, 2015 at 12:47 PM, Carl Baldwin c...@ecbaldwin.net wrote:
  As the Neutron L3 Lieutenant along with Kevin Benton for control
  plane, and Assaf Muller for testing, I would like to propose Cedric
  Brandily as a member of the Neutron core reviewer team under these
  areas of focus.
 
  Cedric has been a long time contributor to Neutron showing expertise
  particularly in these areas.  His knowledge and involvement will be
  very important to the project.  He is a trusted member of our
  community.  He has been reviewing consistently [1][2] and community
  feedback that I've received indicates that he is a solid reviewer.
 
  Existing Neutron core reviewers from these areas of focus, please vote
  +1/-1 for the addition of Cedric to the team.
 
  Thanks!
  Carl Baldwin
 
  [1] https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z
  [2] http://stackalytics.com/report/contribution/neutron-group/90


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-22 Thread Damon Wang
Congrats Cedric!

2015-07-23 0:05 GMT+08:00 Vikram Choudhary viks...@gmail.com:

 Congrats Cedric!

 On Wed, Jul 22, 2015 at 9:28 PM, Edgar Magana edgar.mag...@workday.com
 wrote:

 Congratulations Cedric!!

 Edgar




 On 7/22/15, 8:37 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 It has been a week since this nomination.  I'm pleased to confirm
 Cedric as a core reviewer for these areas of focus.  We look forward
 to your continued contribution to the project!
 
 Carl
 
 On Wed, Jul 15, 2015 at 12:47 PM, Carl Baldwin c...@ecbaldwin.net
 wrote:
  As the Neutron L3 Lieutenant along with Kevin Benton for control
  plane, and Assaf Muller for testing, I would like to propose Cedric
  Brandily as a member of the Neutron core reviewer team under these
  areas of focus.
 
  Cedric has been a long time contributor to Neutron showing expertise
  particularly in these areas.  His knowledge and involvement will be
  very important to the project.  He is a trusted member of our
  community.  He has been reviewing consistently [1][2] and community
  feedback that I've received indicates that he is a solid reviewer.
 
  Existing Neutron core reviewers from these areas of focus, please vote
  +1/-1 for the addition of Cedric to the team.
 
  Thanks!
  Carl Baldwin
 
  [1] https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z
  [2] http://stackalytics.com/report/contribution/neutron-group/90
 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Kuryr] - Bringing Dockers networking to Neutron

2015-07-22 Thread Ryan Moats

The alternative is to do what has been done with the neutron meeting itself
- have it be held in two alternating time slots.

Ryan

Damon Wang damon.dev...@gmail.com wrote on 07/22/2015 12:08:03 PM:

 From: Damon Wang damon.dev...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Cc: Eran Gampel eran.gam...@toganetworks.com, Antoni Segura
 Puimedon t...@midokura.com, Irena Berezovsky ir...@midokura.com
 Date: 07/22/2015 12:08 PM
 Subject: Re: [openstack-dev] [Neutron][Kuryr] - Bringing Dockers
 networking to Neutron

 I'd like to recommend a tool:
 http://www.timeanddate.com/worldclock/meeting.html

 Since I am based in Beijing, I'd like to recommend UTC 1:00 to UTC 16:00

 Regards,
 Wei Wang

 2015-07-23 0:28 GMT+08:00 Gal Sagie gal.sa...@gmail.com:

 Hello Everyone,

 Project Kuryr is now officially part of Neutron's big tent.
 Kuryr aims to be a generic Docker remote driver that connects Docker to
 the Neutron APIs and provides containerised images for the common Neutron
 plugins. We also plan to provide additional common networking service
 APIs from other sub-projects in the Neutron big tent.

 We hope to get everyone on board with this project and leverage this
 joint effort for the many different solutions out there (instead of
 everyone re-inventing the wheel for each different project).

 We want to start holding a weekly IRC meeting to coordinate the different
 requirements and tasks, so if you are interested in participating, please
 share your time preference and we will try to find the best time for the
 majority.

 Remember we have people in Europe, Tokyo and the US, so we won't be able
 to find a time that fits everyone.

 The currently proposed time is Wednesday at 16:00 UTC

 Please reply with your suggested time/day.
 Hope to see you all; we have an interesting and important project ahead
 of us

 Thanks
 Gal.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Kuryr] - Bringing Dockers networking to Neutron

2015-07-22 Thread Damon Wang
I'd like to recommend a tool:
http://www.timeanddate.com/worldclock/meeting.html

Since I am based in Beijing, I'd like to recommend UTC 1:00 to UTC 16:00

Regards,
Wei Wang

2015-07-23 0:28 GMT+08:00 Gal Sagie gal.sa...@gmail.com:


 Hello Everyone,

 Project Kuryr is now officially part of Neutron's big tent.
 Kuryr aims to be a generic Docker remote driver that connects Docker to
 the Neutron APIs and provides containerised images for the common Neutron
 plugins. We also plan to provide additional common networking service APIs
 from other sub-projects in the Neutron big tent.

 We hope to get everyone on board with this project and leverage this joint
 effort for the many different solutions out there (instead of everyone
 re-inventing the wheel for each different project).

 We want to start holding a weekly IRC meeting to coordinate the different
 requirements and tasks, so if you are interested in participating, please
 share your time preference and we will try to find the best time for the
 majority.

 Remember we have people in Europe, Tokyo and the US, so we won't be able to
 find a time that fits everyone.

 The currently proposed time is *Wednesday at 16:00 UTC*

 Please reply with your suggested time/day.
 Hope to see you all; we have an interesting and important project ahead of
 us

 Thanks
 Gal.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][barbican] Setting a live debug session time

2015-07-22 Thread Ade Lee
On Tue, 2015-07-21 at 23:42 -0400, Ade Lee wrote:
 So, as discussed on #irc, I plan to:
 
 1. Check with folks who are running in a devstack environment as to
 where/how their barbican.conf file is configured.
 
 2. Will keep you updated as to the progress of dogtag packaging in
 Ubuntu/Debian.  Currently, there are a couple of bugs due to changes in
 tomcat.  These should be resolved this week - and Dogtag will be back in
 sid.
 
 3.  Will send you script that is used to configure Barbican with Dogtag
 in the Barbican Dogtag gate.

You can see the Dogtag barbican script in this CR:
https://review.openstack.org/#/c/202146/15/contrib/devstack/lib/barbican,cm

Essentially, the steps are as follows (a rough scripted sketch follows the list):
-- install dogtag packages
-- run pkispawn a couple of times to create the ca and kra
-- copy some files to /etc/barbican
-- modify barbican.conf
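
A rough scripted version of those steps, for orientation only - the
pkispawn answer-file paths and the copied file are placeholders, and the
authoritative version is the devstack script in the CR above:

    import shutil
    import subprocess

    # pkispawn is run once per subsystem to stand up the CA and the KRA.
    for subsystem in ('CA', 'KRA'):
        subprocess.check_call(
            ['pkispawn', '-s', subsystem,
             '-f', '/root/dogtag-%s.cfg' % subsystem.lower()])

    # Copy the credentials barbican's dogtag plugin needs (placeholder
    # source path), then edit barbican.conf to enable the dogtag plugin.
    shutil.copy('/root/.dogtag/pki-tomcat/ca_admin_cert.p12',
                '/etc/barbican/')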

Also, the bug in the snakeoil plugin has been fixed -- 
https://review.openstack.org/#/c/204704/

so no need to patch it.

Ade
 
 Ade
 
 On Tue, 2015-07-21 at 09:05 +0900, Madhuri wrote:
  Hi Alee,
  
  Thank you for showing up for help.
  
  The proposed timing suits me. It would be 10:30 am JST for me.
  
  I am madhuri on #freenode.
  Will we be discussing on #openstack-containers?
  
  Sdake,
  Thank you for setting up this.
  
  Regards,
  Madhuri
  
  
  On Mon, Jul 20, 2015 at 11:26 PM, Ade Lee a...@redhat.com wrote:
  Madhuri,
  
  I understand that you are somewhere in APAC. Perhaps it would be best
  to set up a debugging session on Tuesday night -- at 9:30 pm EST.

  This would correspond to 01:30:00 a.m. GMT (Wednesday), which should
  correspond to sometime in the morning for you.

  We can start with the initial goal of getting the snake oil plugin
  working for you, and then see where things are going wrong in the
  Dogtag install.
  
  Will that work for you?  What is your IRC nick?
  Ade
  
  (ps. I am alee on #freenode and can be found on either
  #openstack-barbican or #dogtag-pki)
  
  On Fri, 2015-07-17 at 14:39 +, Steven Dake (stdake) wrote:
   Madhuri,
  
  
   Alee is in the EST timezone (GMT-5 IIRC). Alee will help you get
   barbican rolling. Can you two folks set up a time to chat on IRC on
   Monday or Tuesday?
  
  
   Thanks
   -steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Monasca] Monasca mid-cycle meetup

2015-07-22 Thread Hochmuth, Roland M
The Monasca mid-cycle meetup will be held at the HP campus in Fort Collins,
CO, on August 5-6. Further details on the location, time and tentative
agenda can be found at

https://etherpad.openstack.org/p/monasca_liberty_mid_cycle

Regards --Roland

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][puppet] The state of collaboration: 5 weeks

2015-07-22 Thread Zane Bitter

On 21/07/15 04:21, Dmitry Borodaenko wrote:

I -2'd for this reason: this patch already had a +2, so it was close to
being merged by someone else, while the patch was *breaking backward
compatibility* in puppet-horizon, because Vasyl was changing the default
cache driver to force users to use Memcached by default.
We *never* break backward compatibility (this is a general rule in
OpenStack) and *never* change the OpenStack default configuration (a
general rule in configuration management), but rather provide the
interface (Puppet parameters) to change it.


That's fair enough, and on July 8 we pushed patch set 10 that has fixed
this problem and set the default to LocMemCache. AFAICT at this point
you've lost track of this commit because your gerrit dashboard excludes
all commits that have a -2 vote on them, so it got stuck in limbo for
two weeks.

I still disagree with your use of a -2 vote here; a -1 from a core
reviewer should be enough to prevent other cores from merging a patch
set, and should tell them what to watch out for in subsequent patch sets.


This is actually a wider problem across OpenStack - core reviewers in 
different projects have different ideas about what -2 means.


For me, it's a signal to the submitter that there is no way the patch 
could ever be merged in its current form and that they should not bother 
to invest any further effort in it unless there is a radical change in 
approach. That's certainly the way it is used in heat-core. (There's 
also a procedural version, where it's just that the patch certainly 
won't be merged before an imminent feature freeze, but the -2 will be 
removed after that.) It's also the case that most tooling is set up for
this usage (e.g. -2'd patches drop off everyone's review radar).


At the other end of the spectrum, I've heard that core reviewers in some 
projects (*cough*nova*cough*) will routinely -2 a patch without having 
even read it just to be sure that it cannot get merged without their 
personal approval. Propriety prevents me from sharing what I think of 
this practice here.


The case in question lies somewhere in the middle, but IMHO a -1 would 
have been perfectly adequate, and I find it extremely improbable that 
another core reviewer would have merged the patch over that objection. 
If you're really worried though, the appropriate thing is to set the
WIP (work in progress) flag. This sends a clear signal that the patch is not
ready to merge (this may even be enforced by Gerrit, not 100% sure), but 
is also not sticky so it will drop off on the next patchset. This avoids 
all of the problems with the patch going into review purgatory and being 
blocked by the reviewer being on vacation.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] token revocation woes

2015-07-22 Thread Matt Fischer
Dolph,

Per our IRC discussion, I was unable to see any performance improvement
here, although not calling DELETE so often will reduce the number of
deadlocks when we're under heavy load, especially given the globally
replicated DB we use.



On Tue, Jul 21, 2015 at 5:26 PM, Dolph Mathews dolph.math...@gmail.com
wrote:

 Well, you might be in luck! Morgan Fainberg actually implemented an
 improvement that was apparently documented by Adam Young way back in March:

   https://bugs.launchpad.net/keystone/+bug/1287757

 There's a link to the stable/kilo backport in comment #2 - I'd be eager to
 hear how it performs for you!

 On Tue, Jul 21, 2015 at 5:58 PM, Matt Fischer m...@mattfischer.com
 wrote:

 Dolph,

 Excuse the delayed reply, was waiting for a brilliant solution from
 someone. Without one, personally I'd prefer the cronjob as it seems to be
 the type of thing cron was designed for. That will be a painful change as
 people now rely on this behavior, so I don't know if it's feasible. I will be
 setting up monitoring for the revocation count and alerting me if it
 crosses probably 500 or so. If the problem gets worse then I think a custom
 no-op or sql driver is the next step.

 Thanks.


 On Wed, Jul 15, 2015 at 4:00 PM, Dolph Mathews dolph.math...@gmail.com
 wrote:



 On Wed, Jul 15, 2015 at 4:51 PM, Matt Fischer m...@mattfischer.com
 wrote:

 I'm having some issues with keystone revocation events. The bottom line
 is that due to the way keystone handles the clean-up of these events[1],
 having more than a few leads to:

  - bad performance, up to 2x slower token validation with about 600
 events based on my perf measurements.
  - database deadlocks, which cause API calls to fail, more likely with
 more events it seems

 I am seeing this behavior in code from trunk on June 11 using Fernet
 tokens, but the token backend does not seem to make a difference.

 Here's what happens to the db in terms of deadlock:
 2015-07-15 21:25:41.082 31800 TRACE keystone.common.wsgi DBDeadlock:
 (OperationalError) (1213, 'Deadlock found when trying to get lock; try
 restarting transaction') 'DELETE FROM revocation_event WHERE
 revocation_event.revoked_at < %s' (datetime.datetime(2015, 7, 15, 18, 55,
 41, 55186),)

 When this starts happening, I just go truncate the table, but this is
 not ideal. If [1] is really true then the design is not great, it sounds
 like keystone is doing a revocation event clean-up on every token
 validation call. Reading and deleting/locking from my db cluster is not
 something I want to do on every validate call.


 Unfortunately, that's *exactly* what keystone is doing. Adam and I had a
 conversation about this problem in Vancouver which directly resulted in
 opening the bug referenced on the operator list:

   https://bugs.launchpad.net/keystone/+bug/1456797

 Neither of us remembered the actual implemented behavior, which is what
 you've run into and Deepti verified in the bug's comments.



 So, can I turn off token revocation for now? I didn't see an obvious
 no-op driver.


 Not sure how, other than writing your own no-op driver, or perhaps an
 extended driver that doesn't try to clean the table on every read?


 And in the long-run can this be fixed? I'd rather do almost anything
 else, including writing a cronjob than what happens now.


 If anyone has a better solution than the current one, that's also better
 than requiring a cron job on something like keystone-manage
 revocation_flush I'd love to hear it.


 [1] -
 http://lists.openstack.org/pipermail/openstack-operators/2015-June/007210.html
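
For reference, the no-op driver idea amounts to something like the sketch
below. It assumes the Kilo/Liberty-era base class location
(keystone.contrib.revoke.core.Driver) and its list_events/revoke interface
- both should be verified against your actual tree - and note that it
disables revocation enforcement entirely:

    from keystone.contrib.revoke import core

    class NoopRevoke(core.Driver):
        # Persist nothing, so token validation never has to read (or
        # DELETE) revocation events.
        def list_events(self, last_fetch=None):
            return []

        def revoke(self, event):
            pass

    # keystone.conf would then point at it (exact option path assumed):
    # [revoke]
    # driver = mypackage.noop.NoopRevoke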




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Kuryr][Kolla] - Bringing Dockers networking to Neutron

2015-07-22 Thread Fox, Kevin M
Docker containerization of


From: Gal Sagie
Sent: Wednesday, July 22, 2015 9:28:50 AM
To: OpenStack Development Mailing List (not for usage questions); Eran Gampel; 
Antoni Segura Puimedon; Irena Berezovsky
Subject: [openstack-dev] [Neutron][Kuryr] - Bringing Dockers networking to 
Neutron


Hello Everyone,

Project Kuryr is now officially part of Neutron's big tent.
Kuryr aims to be a generic Docker remote driver that connects Docker to the
Neutron APIs and provides containerised images for the common Neutron
plugins. We also plan to provide additional common networking service APIs
from other sub-projects in the Neutron big tent.

We hope to get everyone on board with this project and leverage this joint 
effort for the many different solutions out there (instead of everyone 
re-inventing the wheel for each different project).

We want to start holding a weekly IRC meeting to coordinate the different
requirements and tasks, so if you are interested in participating, please
share your time preference and we will try to find the best time for the
majority.

Remember we have people in Europe, Tokyo and the US, so we won't be able to
find a time that fits everyone.

The currently proposed time is Wednesday at 16:00 UTC.

Please reply with your suggested time/day.
Hope to see you all; we have an interesting and important project ahead of us

Thanks
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] token revocation woes

2015-07-22 Thread Morgan Fainberg
This is an indicator that the bottleneck is not strictly the db, but is also
related to the way we match. This means we need to spend some serious cycles
on improving both the stored record(s) for revocation events and the
matching algorithm.

Sent via mobile

 On Jul 22, 2015, at 11:51, Matt Fischer m...@mattfischer.com wrote:
 
 Dolph,
 
 Per our IRC discussion, I was unable to see any performance improvement
 here, although not calling DELETE so often will reduce the number of
 deadlocks when we're under heavy load, especially given the globally
 replicated DB we use.
 
 
 
 On Tue, Jul 21, 2015 at 5:26 PM, Dolph Mathews dolph.math...@gmail.com 
 wrote:
 Well, you might be in luck! Morgan Fainberg actually implemented an 
 improvement that was apparently documented by Adam Young way back in March: 
 
   https://bugs.launchpad.net/keystone/+bug/1287757
 
 There's a link to the stable/kilo backport in comment #2 - I'd be eager to 
 hear how it performs for you!
 
 On Tue, Jul 21, 2015 at 5:58 PM, Matt Fischer m...@mattfischer.com wrote:
 Dolph,
 
 Excuse the delayed reply, was waiting for a brilliant solution from 
 someone. Without one, personally I'd prefer the cronjob as it seems to be 
 the type of thing cron was designed for. That will be a painful change as 
 people now rely on this behavior, so I don't know if it's feasible. I will be
 setting up monitoring for the revocation count and alerting me if it 
 crosses probably 500 or so. If the problem gets worse then I think a custom 
 no-op or sql driver is the next step.
 
 Thanks.
 
 
 On Wed, Jul 15, 2015 at 4:00 PM, Dolph Mathews dolph.math...@gmail.com 
 wrote:
 
 
 On Wed, Jul 15, 2015 at 4:51 PM, Matt Fischer m...@mattfischer.com 
 wrote:
 I'm having some issues with keystone revocation events. The bottom line 
 is that due to the way keystone handles the clean-up of these events[1], 
 having more than a few leads to:
 
  - bad performance, up to 2x slower token validation with about 600 
 events based on my perf measurements.
  - database deadlocks, which cause API calls to fail, more likely with 
 more events it seems
 
 I am seeing this behavior in code from trunk on June 11 using Fernet 
 tokens, but the token backend does not seem to make a difference.
 
 Here's what happens to the db in terms of deadlock:
 2015-07-15 21:25:41.082 31800 TRACE keystone.common.wsgi DBDeadlock: 
 (OperationalError) (1213, 'Deadlock found when trying to get lock; try 
 restarting transaction') 'DELETE FROM revocation_event WHERE 
 revocation_event.revoked_at < %s' (datetime.datetime(2015, 7, 15, 18, 55,
 41, 55186),)
 
 When this starts happening, I just go truncate the table, but this is not 
 ideal. If [1] is really true then the design is not great, it sounds like 
 keystone is doing a revocation event clean-up on every token validation 
 call. Reading and deleting/locking from my db cluster is not something I 
 want to do on every validate call.
 
 Unfortunately, that's *exactly* what keystone is doing. Adam and I had a 
 conversation about this problem in Vancouver which directly resulted in 
 opening the bug referenced on the operator list:
 
   https://bugs.launchpad.net/keystone/+bug/1456797
 
 Neither of us remembered the actual implemented behavior, which is what 
 you've run into and Deepti verified in the bug's comments.
  
 
 So, can I turn off token revocation for now? I didn't see an obvious no-op
 driver.
 
 Not sure how, other than writing your own no-op driver, or perhaps an 
 extended driver that doesn't try to clean the table on every read?
  
 And in the long-run can this be fixed? I'd rather do almost anything 
 else, including writing a cronjob than what happens now.
 
 If anyone has a better solution than the current one, that's also better 
 than requiring a cron job on something like keystone-manage 
 revocation_flush I'd love to hear it.
 
 
 [1] - 
 http://lists.openstack.org/pipermail/openstack-operators/2015-June/007210.html
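
The cron-style alternative Matt prefers could be as small as the sketch
below - a hypothetical standalone script ("keystone-manage
revocation_flush" does not exist today); the DSN and TTL are placeholders:

    import datetime

    import sqlalchemy

    engine = sqlalchemy.create_engine(
        'mysql+pymysql://keystone:secret@db/keystone')
    # Events older than the token TTL can never match a live token.
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(hours=1)

    with engine.begin() as conn:
        conn.execute(
            sqlalchemy.text(
                'DELETE FROM revocation_event WHERE revoked_at < :cutoff'),
            {'cutoff': cutoff})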
 

Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor doesn't work

2015-07-22 Thread Changbin Liu
Thanks, John.

I am happy to try out Swift EC and will report bugs to launchpad if
necessary.

(The issue I found happens to be different from the ones listed in the bug
list. See my other email)



Thanks

Changbin

On Tue, Jul 21, 2015 at 5:05 PM, John Dickinson m...@not.mn wrote:

 Yes, it's supposed to work, but you've run in to some errors we've been
 finding and fixing. Right now the top priority for the Swift dev community
 is to take care of the outstanding EC issues and make a release.

 The list of the known EC bugs right now is
 https://bugs.launchpad.net/swift/+bugs?field.tag=ec. You'll see that
 nearly all of them are handled, and the rest are being worked on. We will
 have them fixed and a new Swift release ASAP.

 Specifically, I think you were hitting bug
 https://bugs.launchpad.net/swift/+bug/1469094 (or maybe
 https://bugs.launchpad.net/swift/+bug/1452619).

 I'm so happy you're trying out erasure codes in Swift! That's exactly what
 we need to happen. As the docs say, it's still a beta feature. Please let
 us know what you find. Bug reports are very helpful, but even mailing list
 posts or dropping in the #openstack-swift channel in IRC is appreciated.

 --John




  On Jul 21, 2015, at 1:38 PM, Changbin Liu changbin@gmail.com
 wrote:
 
  Folks,
 
  To test the latest feature of Swift erasure coding, I followed this
 document (
 http://docs.openstack.org/developer/swift/overview_erasure_code.html) to
 deploy a simple cluster. I used Swift 2.3.0.
 
  I am glad that operations like object PUT/GET/DELETE worked fine. I can
 see that objects were correctly encoded/uploaded and downloaded at proxy
 and object servers.
 
  However, I noticed that swift-object-reconstructor seemed not to work as
 expected. Here is my setup: my cluster has three object servers, and I use
 this policy:
 
  [storage-policy:1]
  policy_type = erasure_coding
  name = jerasure-rs-vand-2-1
  ec_type = jerasure_rs_vand
  ec_num_data_fragments = 2
  ec_num_parity_fragments = 1
  ec_object_segment_size = 1048576
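
As an aside, what a 2+1 policy like this does to an object can be seen
with PyECLib, the library Swift's EC support builds on. A small
standalone illustration, assuming jerasure/liberasurecode are installed:

    from pyeclib.ec_iface import ECDriver

    ec = ECDriver(k=2, m=1, ec_type='jerasure_rs_vand')

    data = b'x' * 1048576          # one segment's worth of data
    fragments = ec.encode(data)    # 3 fragments: 2 data + 1 parity
    assert len(fragments) == 3

    # Losing any single fragment is survivable: any k=2 of the three
    # fragments reconstruct the object, which is what the reconstructor
    # is expected to do for the deleted .data fragment.
    assert ec.decode(fragments[1:]) == data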
 
  After I uploaded one object, I verified that: there was one data
 fragment on each of two object servers, and one parity fragment on the
 third object server. However, when I deleted one data fragment, no matter
 how long I waited, it never got repaired, i.e., the deleted data fragment
 was never regenerated by the swift-object-reconstructor process.
 
  My question: is swift-object-reconstructor supposed to be NOT WORKING
 given the current implementation status? Or, is there any configuration I
 missed in setting up swift-object-reconstructor?
 
  Thanks
 
  Changbin
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-22 Thread Assaf Muller
I added a summary of my thoughts about the enhancements I think we could
make to the Nova scheduler in order to better support the Neutron provider
networks use case.

- Original Message -
 On Tue, Jul 21, 2015 at 1:11 PM, John Belamaric jbelama...@infoblox.com
 wrote:
  Wow, a lot to digest in these threads. If I can summarize my understanding
  of the two proposals. Let me know whether I get this right. There are a
  couple problems that need to be solved:
 
   a. Scheduling based on host reachability to the segments
   b. Floating IP functionality across the segments. I am not sure I am clear
  on this one but it sounds like you want the routers attached to the
  segments
  to advertise routes to the specific floating IPs. Presumably then they
  would
  do NAT or the instance would assign both the fixed IP and the floating IP
  to
  its interface?
 
  In Proposal 1, (a) is solved by associating segments to the front network
  via a router - that association is used to provide a single hook into the
  existing API that limits the scope of segment selection to those associated
  with the front network. (b) is solved by tying the floating IP ranges to
  the
  same front network and managing the reachability with dynamic routing.
 
  In Proposal 2, (a) is solved by tagging each network with some meta-data
  that the IPAM system uses to make a selection. This implies an IP
  allocation
  request that passes something other than a network/port to the IPAM
  subsystem. This fine from the IPAM point of view but there is no
  corresponding API for this right now. To solve (b) either the IPAM system
  has to publish the routes or the higher level management has to ALSO be
  aware of the mappings (rather than just IPAM).
 
 John, from your summary above, you seem to have the best understanding
 of the whole of what I was weakly attempting to communicate.  Thank
 you for summarizing.
 
  To throw some fuel on the fire, I would argue also that (a) is not
  sufficient and address availability needs to be considered as well (as
  described in [1]). Selecting a host based on reachability alone will fail
  when addresses are exhausted. Similarly, with (b) I think there needs to be
  consideration during association of a floating IP to the effect on routing.
  That is, rather than a huge number of host routes it would be ideal to
  allocate the floating IPs in blocks that can be associated with the backing
  networks (though we would want to be able to split these blocks as small as
  a /32 if necessary - but avoid it/optimize as much as possible).
 
 Yes, address availability is a factor and must be considered in either
 case.  My email was getting long already and I thought that could be
 considered separately since I believe it applies regardless of the
 outcome of this thread.  But, since it seems to be an essential part
 of this conversation, let me say something about it.
 
 Ultimately, we need to match up the host scheduled by Nova to the
 addresses available to that host.  We could do this by delaying
 address assignment until after host binding or we could do it by
 including segment information from Neutron during scheduling.  The
 latter has the advantage that we can consider IP availability during
 scheduling.  That is why GoDaddy implemented it that way.
 
  In fact, I think that these proposals are more or less the same - it's just
  in #1 the meta-data used to tie the backing networks together is another
  network. This allows it to fit in neatly with the existing APIs. You would
  still need to implement something prior to IPAM or within IPAM that would
  select the appropriate backing network.
 
 They are similar but to say they're the same is going a bit too far.
 If they were the same then we'd be done with this conversation.  ;)
 
  As a (gulp) third alternative, we should consider that the front network
  here is in essence a layer 3 domain, and we have modeled layer 3 domains as
  address scopes in Liberty. The user is essentially saying give me an
  address that is routable in this scope - they don't care which actual
  subnet it gets allocated on. This is conceptually more in-line with [2] -
  modeling L3 domain separately from the existing Neutron concept of a
  network
  being a broadcast domain.
 
 I will consider this some more.  This is an interesting thought.
 Address scopes and subnet pools could play a role here.  I don't yet
 see how it can all fit together but it is worth some thought.
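
To make the address-scope idea concrete: as the API eventually landed, the
L3 domain and its address blocks can be modeled roughly as below. The
endpoint, token and payload values are placeholders, and the exact fields
should be checked against the implemented extension:

    import requests

    NEUTRON = 'http://controller:9696/v2.0'
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN',
               'Content-Type': 'application/json'}

    # The address scope is the L3 routing domain John describes.
    scope = requests.post(
        NEUTRON + '/address-scopes', headers=HEADERS,
        json={'address_scope': {'name': 'routable-l3', 'ip_version': 4}},
    ).json()['address_scope']

    # Subnet pools inside the scope hold the actual prefixes; "give me an
    # address routable in this scope" becomes an allocation from a pool.
    requests.post(
        NEUTRON + '/subnetpools', headers=HEADERS,
        json={'subnetpool': {'name': 'segment-blocks',
                             'address_scope_id': scope['id'],
                             'prefixes': ['10.0.0.0/8']}})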
 
 One nit:  the neutron network might have been conceived as being just
 a broadcast domain but, in practice, it is L2 and L3.  The Neutron
 subnet is not really an L3 construct; it is just a cidr and doesn't
 make sense on its own without considering its association with a
 network and the other subnets associated with the same network.
 
  Fundamentally, however we associate the segments together, this comes down
  to a scheduling problem. Nova needs to be able to incorporate data from
  

Re: [openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-22 Thread ZZelle
Thanks Carl, Kevin, Assaf and everybody for your great support!

Looking forward to helping Neutron grow!


Cedric/ZZelle@IRC



On Wed, Jul 22, 2015 at 7:02 PM, Miguel Lavalle mig...@mlavalle.com wrote:

 Congrats Cedric!

 On Wed, Jul 22, 2015 at 10:37 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 It has been a week since this nomination.  I'm pleased to confirm
 Cedric as a core reviewer for these areas of focus.  We look forward
 to your continued contribution to the project!

 Carl

 On Wed, Jul 15, 2015 at 12:47 PM, Carl Baldwin c...@ecbaldwin.net
 wrote:
  As the Neutron L3 Lieutenant along with Kevin Benton for control
  plane, and Assaf Muller for testing, I would like to propose Cedric
  Brandily as a member of the Neutron core reviewer team under these
  areas of focus.
 
  Cedric has been a long time contributor to Neutron showing expertise
  particularly in these areas.  His knowledge and involvement will be
  very important to the project.  He is a trusted member of our
  community.  He has been reviewing consistently [1][2] and community
  feedback that I've received indicates that he is a solid reviewer.
 
  Existing Neutron core reviewers from these areas of focus, please vote
  +1/-1 for the addition of Cedric to the team.
 
  Thanks!
  Carl Baldwin
 
  [1] https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z
  [2] http://stackalytics.com/report/contribution/neutron-group/90



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Kuryr][Kolla] - Bringing Dockers networking to Neutron

2015-07-22 Thread Fox, Kevin M
Awesome. :)

Dockerization of Neutron plugins is already in scope of the Kolla project. 
Might want to coordinate the effort with the Kolla team.

Thanks,
Kevin


From: Gal Sagie
Sent: Wednesday, July 22, 2015 9:28:50 AM
To: OpenStack Development Mailing List (not for usage questions); Eran Gampel; 
Antoni Segura Puimedon; Irena Berezovsky
Subject: [openstack-dev] [Neutron][Kuryr] - Bringing Dockers networking to 
Neutron


Hello Everyone,

Project Kuryr is now officially part of Neutron's big tent.
Kuryr aims to be a generic Docker remote driver that connects Docker to the
Neutron APIs and provides containerised images for the common Neutron
plugins. We also plan to provide additional common networking service APIs
from other sub-projects in the Neutron big tent.

We hope to get everyone on board with this project and leverage this joint 
effort for the many different solutions out there (instead of everyone 
re-inventing the wheel for each different project).

We want to start holding a weekly IRC meeting to coordinate the different
requirements and tasks, so if you are interested in participating, please
share your time preference and we will try to find the best time for the
majority.

Remember we have people in Europe, Tokyo and the US, so we won't be able to
find a time that fits everyone.

The currently proposed time is Wednesday at 16:00 UTC.

Please reply with your suggested time/day.
Hope to see you all; we have an interesting and important project ahead of us

Thanks
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor doesn't work

2015-07-22 Thread Clay Gerrard
On Wed, Jul 22, 2015 at 12:37 PM, Luse, Paul E paul.e.l...@intel.com
wrote:



 Wrt why the replication code seems to work if you delete just a .data


no it doesn't - https://gist.github.com/clayg/88950d77d25a441635e6


  forces a listing every 10 passes for some reason.  Clay?


IIRC the every-10-passes trick is for suffix dirs; if that's not in the
reconstructor we should probably add it. An easy test would be to rm a
suffix tree and let the reconstructor run for a while.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Minimum Unit Test Coverage

2015-07-22 Thread Rajat Vig
Hi Rob

I agree. Enforcing a minimum level of coverage as a start is awesome.

I must add, though, that keeping it at 100% and breaking the build has
almost never worked in practice for me.
Keeping a slightly lower level, ~98%, is more pragmatic.
The currently low coverage will also have to be addressed.
Is there a blueprint that can be created to tackle it?

-Rajat


On Wed, Jul 22, 2015 at 6:33 AM, Rob Cresswell (rcresswe) 
rcres...@cisco.com wrote:

  Hi all,

  As far as I’m aware, we don’t currently enforce any minimum unit test
 coverage, despite Karma generating reports. I think as part of the review
 guidelines, it would be useful to set a minimum. Since Karma’s detection is
 fairly relaxed, I’d put it at 100% on the automated reports.

  I think the biggest drawback is that the tests may not be “valuable”,
 but rather just meet the minimum requirements. I understand this sentiment,
 but I think that “less valuable” is better than “not present” and it gives
 reviewers a clear line to +1/ -1 a patch. Furthermore, it encourages the
 unit tests to be written in the first place, so that reviewers can then ask
 for improvements, rather than miss them.

  Rob

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Specs process query

2015-07-22 Thread Ruby Loo
On 20 July 2015 at 08:27, Matt Keenan matt.kee...@oracle.com wrote:

 On 07/06/15 23:19, Ruby Loo wrote:

 On 1 July 2015 at 08:25, Matt Keenan matt.kee...@oracle.com
 mailto:matt.kee...@oracle.com wrote:

 Hi,

 In submitting my first ironic spec, I am following the process
 outlined at:
 https://wiki.openstack.org/wiki/Ironic/Specs_Process

 As of Kilo this suggests we also follow:


 http://lists.openstack.org/pipermail/openstack-dev/2014-August/041960.html

 This indicates that once a spec is registered, the registered spec needs
 to be given the OK before the spec text is submitted.

 Quick discussion on IRC indicates that this was never adhered to. If
 it's not going to be adhered to then I'd suggest removing this
 reference from Specs_Process.

 cheers

 Matt


 Hi Matt,

 My interpretation of the email you referenced was that it was to help 'fast-track'
 two things: 1. new 'features' that didn't require a spec to be written
 and 2. new 'features' that are out of scope or something that just won't
 work for whatever reason.

 I believe it may be true (although I haven't read all the proposed
 specs) that no one has actually followed that process, but I don't know
 if that means we should not provide that as a choice. Are you
 interpreting it as 'You must follow this process' as opposed to 'You
 could choose to follow this process'?


 My interpretation was You must follow this process, but if it's optional
 then not an issue I guess.

 cheers

 Matt


Thanks for the feedback Matt. I've updated the wiki so that it is clearer
(I hope) that it is optional!

--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor doesn't work

2015-07-22 Thread Changbin Liu
Got it, thanks!


Thanks

Changbin

On Wed, Jul 22, 2015 at 4:19 PM, Clay Gerrard clay.gerr...@gmail.com
wrote:



 On Wed, Jul 22, 2015 at 12:24 PM, Changbin Liu changbin@gmail.com
 wrote:


 But now I wonder: is it by design that EC does not handle an accidental
 deletion of just the data file?


 Well, the design goal was not "do not handle the accidental deletion of
 just the data file" - it was "make replication fast enough that it works" -
 and that required not listing all the dirs all the time.


 Deleting both data file and hashes.pkl file is more like a
 deliberately-created failure case instead of a normal one.


 To me deleting some file that swift wrote to disk without updating (or
 removing) the index it normally updates during write/delete/replicate to
 accelerate replication seems like a deliberately created failure case?  You
 could try to flip a bit or truncate a data file and let the auditor pick it
 up.  Or rm a suffix and wait for the every-so-often suffixdir listdir to
 catch it, or remove an entire partition, or wipe a new filesystem onto the
 disk.  Or shut down a node and do a PUT, then shut down the handoff node, and
 run the reconstructor.  Any of the normal failure conditions like that
 (and plenty more!) are all detected by and handled efficiently.

 To me Swift EC repairing seems different from the triple-replication mode,
 where if you delete any data file copy, it will be restored.



 Well, replication and reconstruction are different in lots of ways - but
 not this part.  If you rm a .data file without updating the index you'll
 need some activity (post/copy/put/quarantine) in the suffix before the
 replication engine can notice.

 Luckily (?) people don't often go under the covers into the middle of the
 storage system and rm data like that?

 -Clay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Moving instack upstream

2015-07-22 Thread Gregory Haynes
Excerpts from Derek Higgins's message of 2015-07-21 19:29:49 +:
 Hi All,
 Something we discussed at the summit was to switch the focus of 
 tripleo's deployment method to deploying with instack, using images built 
 with tripleo-puppet-elements. Up to now all the instack work has been 
 done downstream of tripleo as part of rdo. Having parts of our 
 deployment story outside of upstream gives us problems mainly because it 
 becomes very difficult to CI what we expect deployers to use while we're 
 developing the upstream parts.
 
 Essentially what I'm talking about here is pulling instack-undercloud 
 upstream along with a few of its dependency projects (instack, 
 tripleo-common, tuskar-ui-extras etc..) into tripleo and using them in 
 our CI in place of devtest.
 
 Getting our CI working with instack is close to working but has taken 
 longer than I expected because of various complications and distractions 
 but I hope to have something over the next few days that we can use to 
 replace devtest in CI, in a lot of ways this will start out by taking a 
 step backwards but we should finish up in a better place where we will 
 be developing (and running CI on) what we expect deployers to use.
 
 Once I have something that works I think it makes sense to drop the jobs 
 undercloud-precise-nonha and overcloud-precise-nonha, while switching 
 overcloud-f21-nonha to use instack, this has a few effects that need to 
 be called out
 
 1. We will no longer be running CI on (and as a result not supporting) 
 most of the bash based elements
 2. We will no longer be running CI on (and as a result not supporting) 
 ubuntu

I'd like to point out that this means DIB will no longer have an image
booting test for Ubuntu. I have created a review[1] to try and get some
coverage of this in a dib speific test, hopefully we can get it merged
before we remove the tripleo ubuntu tests?

 
 Should anybody come along in the future interested in either of these 
 things (and prepared to put the time in) we can pick them back up again. 
 In fact the move to puppet element based images should mean we can more 
 easily add in extra distros in the future.
 
 3. While we find our feet we should remove all tripleo-ci jobs from non 
 tripleo projects, once we're confident with it we can explore adding our 
 jobs back into other projects again

I assume DIB will be keeping the tripleo jobs for now?

 
 Nothing has changed yet. In order to check we're all on the same page, 
 these are the high-level details of how I see things proceeding, so shout 
 now if I got anything wrong or you disagree.
 
 Sorry for not sending this out sooner for those of you who weren't at 
 the summit,
 Derek.
 

-Greg

[1] https://review.openstack.org/#/c/204639/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] Monasca mid-cycle meetup

2015-07-22 Thread Simon Pasquier
Hi,
I've had a quick look at the Etherpad which mentions Ceilosca. The name
is quite intriguing but I didn't find any reference to it in the Monasca
Wiki. Could you tell us a bit more about it? Does it mean that Monasca
plans to expose an API that would be compatible with the Ceilometer API?
BR,
Simon

On Wed, Jul 22, 2015 at 8:08 PM, Hochmuth, Roland M roland.hochm...@hp.com
wrote:

 The Monasca mid-cycle meet up will be held at the HP campus in Fort
 Collins, CO from August 5-6. Further details on the location, time and
 tentative agenda can be found at

 https://etherpad.openstack.org/p/monasca_liberty_mid_cycle

 Regards --Roland

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor doesn't work

2015-07-22 Thread Clay Gerrard
On Wed, Jul 22, 2015 at 12:24 PM, Changbin Liu changbin@gmail.com
wrote:


 But now I wonder: is it by design that EC does not handle an accidental
 deletion of just the data file?


Well, the design goal was not "do not handle the accidental deletion of
just the data file" - it was "make replication fast enough that it works" -
and that required not listing all the dirs all the time.


 Deleting both data file and hashes.pkl file is more like a
 deliberately-created failure case instead of a normal one.


To me deleting some file that swift wrote to disk without updating (or
removing) the index it normally updates during write/delete/replicate to
accelerate replication seems like a deliberately created failure case?  You
could try to flip a bit or truncate a data file and let the auditor pick it
up.  Or rm a suffix and wait for the every-so-often suffixdir listdir to
catch it, or remove an entire partition, or wipe a new filesystem onto the
disk.  Or shut down a node and do a PUT, then shut down the handoff node, and
run the reconstructor.  Any of the normal failure conditions like that
(and plenty more!) are all detected by and handled efficiently.

To me Swift EC repairing seems different from the triple-replication mode,
 where if you delete any data file copy, it will be restored.



Well, replication and reconstruction are different in lots of ways - but
not this part.  If you rm a .data file without updating the index you'll
need some activity (post/copy/put/quarantine) in the suffix before the
replication engine can notice.

Luckily (?) people don't often go under the covers into the middle of the
storage system and rm data like that?

-Clay
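
For readers unfamiliar with the suffix-hash index being described here, a
minimal sketch (plain Python, not Swift's actual code) of why a cached
per-suffix hash can hide a manually rm'd .data file until something
invalidates the cache:

  import hashlib
  import os
  import pickle

  def list_hash(suffix_dir):
      # hash of the filenames currently on disk under the suffix dir
      names = sorted(os.listdir(suffix_dir))
      return hashlib.md5("".join(names).encode()).hexdigest()

  def get_suffix_hash(suffix_dir, cache="hashes.pkl"):
      try:
          with open(cache, "rb") as f:
              cached = pickle.load(f)
      except IOError:
          cached = {}
      if suffix_dir not in cached:       # only recompute when invalidated
          cached[suffix_dir] = list_hash(suffix_dir)
          with open(cache, "wb") as f:
              pickle.dump(cached, f)
      return cached[suffix_dir]

  # rm'ing a .data file changes list_hash() but not the cached value, so
  # the consistency engine still sees matching hashes and repairs nothing
  # until a write/delete/replicate invalidates the cache entry.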
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Moving instack upstream

2015-07-22 Thread Dmitry Tantsur

On 07/21/2015 09:29 PM, Derek Higgins wrote:

Hi All,
Something we discussed at the summit was to switch the focus of
tripleo's deployment method to deploying with instack, using images built
with tripleo-puppet-elements. Up to now all the instack work has been
done downstream of tripleo as part of rdo. Having parts of our
deployment story outside of upstream gives us problems mainly because it
becomes very difficult to CI what we expect deployers to use while we're
developing the upstream parts.

Essentially what I'm talking about here is pulling instack-undercloud
upstream along with a few of its dependency projects (instack,
tripleo-common, tuskar-ui-extras etc..) into tripleo and using them in
our CI in place of devtest.


+1



Getting our CI working with instack is close to working but has taken
longer than I expected because of various complications and distractions
but I hope to have something over the next few days that we can use to
replace devtest in CI, in a lot of ways this will start out by taking a
step backwards but we should finish up in a better place where we will
be developing (and running CI on) what we expect deployers to use.

Once I have something that works I think it makes sense to drop the jobs
undercloud-precise-nonha and overcloud-precise-nonha, while switching
overcloud-f21-nonha to use instack, this has a few effects that need to
be called out

1. We will no longer be running CI on (and as a result not supporting)
most of the bash based elements


/me wants bash based elements to be killed with fire. Do we even need 
them after the switch?



2. We will no longer be running CI on (and as a result not supporting)
ubuntu

Should anybody come along in the future interested in either of these
things (and prepared to put the time in) we can pick them back up again.
In fact the move to puppet element based images should mean we can more
easily add in extra distros in the future.

3. While we find our feet we should remove all tripleo-ci jobs from non
tripleo projects, once we're confident with it we can explore adding our
jobs back into other projects again


I hope we will, we did catch a couple of problems via tripleo-ci in ironic.



Nothing has changed yet. In order to check we're all on the same page,
these are the high-level details of how I see things proceeding, so shout
now if I got anything wrong or you disagree.

Sorry for not sending this out sooner for those of you who weren't at
the summit,
Derek.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Jenkins jobs are not executed when setting up a new CI system.

2015-07-22 Thread Tang Chen


On 07/22/2015 12:49 PM, Tang Chen wrote:

Hi all,

When I send a patch to gerrit, my zuul is notified, but jenkins jobs 
are not run.


My CI always reports the following error:

Merge Failed.

This change was unable to be automatically merged with the current state of the 
repository. Please rebase your change and upload a new patchset.

I think, because the patch cannot be merged, the jobs are not run.

Referring to https://www.mediawiki.org/wiki/Gerrit/Advanced_usage, I did update 
my master branch and make sure it is up-to-date. But it doesn't work. And other 
CIs from other companies didn't report this error.


process_event_queue()
 |-- pipeline.manager.addChange()
 |-- report Unable to find change queue for change Change 
0x7fa7ef8b6250 204446,1 in project openstack-dev/sandbox


204446 is my patch number.

Anyone knows why is that ?

Thanks.





And also, when zuul tries to get the patch from gerrit, it executes:

gerrit query --format json --all-approvals --comments --commit-message 
--current-patch-set --dependencies --files --patch-sets --submit-records 204337


When I try to execute it myself, it reports: Permission denied (publickey).

I updated my ssh key, and uploaded the new public key to gerrit, but it doesn't 
work.


Does anyone have any idea what's going on here ?

Thanks.
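
For what it's worth, a quick way to check which identity gerrit actually
sees, sketched in Python (the username and key path are placeholders;
29418 is gerrit's standard ssh port):

  import subprocess

  try:
      out = subprocess.check_output(
          ["ssh", "-p", "29418", "-i", "/home/ci/.ssh/id_rsa",
           "my-ci-user@review.openstack.org", "gerrit", "version"],
          stderr=subprocess.STDOUT)
      print(out)                  # e.g. "gerrit version 2.x" on success
  except subprocess.CalledProcessError as e:
      print(e.output)             # "Permission denied (publickey)" here means
                                  # gerrit is not seeing the key you uploaded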







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Jenkins jobs are not executed when setting up a new CI system.

2015-07-22 Thread Abhishek Shrivastava
Hi Tang,
Reboot your master vm and then try the same. Also after restarting check
the status of zuul and zuul-merger.

On Wed, Jul 22, 2015 at 12:14 PM, Tang Chen tangc...@cn.fujitsu.com wrote:


 On 07/22/2015 12:49 PM, Tang Chen wrote:

 Hi all,

 When I send a patch to gerrit, my zuul is notified, but jenkins jobs are
 not run.

 My CI always reports the following error:

 Merge Failed.
 This change was unable to be automatically merged with the current state of 
 the repository. Please rebase your change and upload a new patchset.

 I think, because the patch cannot be merged, the jobs are not run. Referring 
 to https://www.mediawiki.org/wiki/Gerrit/Advanced_usage, I did update my 
 master branch and make sure it is up-to-date. But it doesn't work. And other 
 CIs from other companies didn't report this error.


 process_event_queue()
  |-- pipeline.manager.addChange()
  |-- report Unable to find change queue for change Change
 0x7fa7ef8b6250 204446,1 in project openstack-dev/sandbox

 204446 is my patch number.

 Anyone knows why is that ?

 Thanks.


 And also, when zuul tries to get the patch from gerrit, it executes:

 gerrit query --format json --all-approvals --comments --commit-message 
 --current-patch-set --dependencies --files --patch-sets --submit-records 
 204337


 When I try to execute it myself, it reports: Permission denied (publickey).

 I updated my ssh key, and uploaded the new public key to gerrit, but it 
 doesn't work.


 Does anyone have any idea what's going on here ?

 Thanks.






 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks & Regards,
Abhishek
Cloudbyte Inc. http://www.cloudbyte.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Jenkins jobs are not executed when setting up a new CI system.

2015-07-22 Thread Gary Kotton
The gate is under heavy load at the moment. Maybe in a day or two it will get 
back to usual…

From: Tang Chen tangc...@cn.fujitsu.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Wednesday, July 22, 2015 at 11:57 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [CI] Jenkins jobs are not executed when setting up 
a new CI system.

Hi Abhishek,


On 07/22/2015 03:56 PM, Abhishek Shrivastava wrote:
Hi Tang,
Reboot your master vm and then try the same. Also after restarting check the 
status of zuul and zuul-merger.


I found the problem. zuul could not fetch the repo because of the proxy.
And as a result, the merge failed. So the job was not run.

Now it is OK. :)

Thanks. :)

On Wed, Jul 22, 2015 at 12:14 PM, Tang Chen tangc...@cn.fujitsu.com wrote:

On 07/22/2015 12:49 PM, Tang Chen wrote:
Hi all,

When I send a patch to gerrit, my zuul is notified, but jenkins jobs are not 
run.

My CI always reports the following error:


Merge Failed. This change was unable to be automatically merged with the current 
state of the repository. Please rebase your change and upload a new patchset.

I think, because the patch cannot be merged, the jobs are not run.
Referring to https://www.mediawiki.org/wiki/Gerrit/Advanced_usage,
I did update my master branch and make sure it is up-to-date. But it doesn't 
work. And other CIs from other companies didn't report this error.

process_event_queue()
 |-- pipeline.manager.addChange()
 |-- report Unable to find change queue for change Change 
0x7fa7ef8b6250 204446,1 in project openstack-dev/sandbox

204446 is my patch number.

Anyone knows why is that ?

Thanks.



And also, when zuul tries to get the patch from gerrit, it executes:

gerrit query --format json --all-approvals --comments --commit-message 
--current-patch-set --dependencies --files --patch-sets --submit-records 204337


When I try to execute it myself, it reports: Permission denied (publickey).

I updated my ssh key, and uploaded the new public key to gerrit, but it doesn't 
work.


Does anyone have any idea what's going on here ?

Thanks.







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks & Regards,
Abhishek
Cloudbyte Inc. http://www.cloudbyte.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Zabbix in deployment tasks

2015-07-22 Thread Sergii Golovatiuk
Hi,

The code is present in 6.1 also.

https://github.com/stackforge/fuel-library/blob/stable/6.1/deployment/puppet/osnailyfacter/modular/zabbix/tasks.yaml

I changed the status of the bug to Confirmed for 6.1, as it's 100% present there.



--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Tue, Jul 21, 2015 at 10:54 PM, Stanislaw Bogatkin sbogat...@mirantis.com
 wrote:

 Actually, I didn't participate in that process a lot - I just reviewed the
 plugin a couple of times, and as far as I know, we had commits that deleted
 zabbix from current Fuel.
 There is bug about that: https://bugs.launchpad.net/fuel/+bug/1455664
 There is a review: https://review.openstack.org/#/c/182615/

 It seems that it should be resolved and merged to have the zabbix code
 actually deleted from current master.

 On Thu, Jul 16, 2015 at 1:29 PM, Mike Scherbakov mscherba...@mirantis.com
  wrote:

 I thought it was done...
 Stas - do you know anything about it?

 On Thu, Jul 16, 2015 at 9:18 AM Sergii Golovatiuk 
 sgolovat...@mirantis.com wrote:

 Hi,

 Working on granular deployment, I realized we still call zabbix.pp in
 deployment tasks. As far as I know, zabbix was moved to a plugin. Should we
 remove zabbix from
 1. Deployment graph
 2. fixtures
 3. Tests
 4. Any other places

 Are we going to clean up the zabbix code as part of the migration to the plugin?

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 Mike Scherbakov
 #mihgen



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] SSL feature status

2015-07-22 Thread Stanislaw Bogatkin
Hi,

we have a spec that says we disable SSL by default, and it was successfully
merged that way; no one was against that decision. So, if we want to
enable it by default now - we can. It should be done as part of our usual
process, I think - I'll create a bug for it and fix it.

The current status of the feature is as follows:
1. All the SSL code is merged.
2. Some tests for it are being written by QA - not all of them are done yet.

I'll update blueprints as soon as possible. Sorry for inconvenience.

On Mon, Jul 20, 2015 at 8:44 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Hi guys,
 did we enable SSL for Fuel Master node and OpenStack REST API endpoints by
 default? If not, let's enable it by default. I don't know why we should not.

 Looks like we need to update blueprints as well [1], [2], as they don't
 seem to reflect current status of the feature.

 [1] https://blueprints.launchpad.net/fuel/+spec/ssl-endpoints
 [2] https://blueprints.launchpad.net/fuel/+spec/fuel-ssl-endpoints

 Stas, as you've been working on it, can you please provide current status?

 Thanks,

 --
 Mike Scherbakov
 #mihgen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] SSL feature status

2015-07-22 Thread Mike Scherbakov
Thanks Stas. My opinion is that it has to be enabled by default. I'd like
product management to chime in here. Sheena?


On Tue, Jul 21, 2015 at 11:06 PM Stanislaw Bogatkin sbogat...@mirantis.com
wrote:

 Hi,

 we have a spec that says we disable SSL by default, and it was successfully
 merged that way; no one was against that decision. So, if we want to
 enable it by default now - we can. It should be done as part of our usual
 process, I think - I'll create a bug for it and fix it.

 The current status of the feature is as follows:
 1. All the SSL code is merged.
 2. Some tests for it are being written by QA - not all of them are done yet.

 I'll update blueprints as soon as possible. Sorry for inconvenience.

 On Mon, Jul 20, 2015 at 8:44 PM, Mike Scherbakov mscherba...@mirantis.com
  wrote:

 Hi guys,
 did we enable SSL for Fuel Master node and OpenStack REST API endpoints
 by default? If not, let's enable it by default. I don't know why we should
 not.

 Looks like we need to update blueprints as well [1], [2], as they don't
 seem to reflect current status of the feature.

 [1] https://blueprints.launchpad.net/fuel/+spec/ssl-endpoints
 [2] https://blueprints.launchpad.net/fuel/+spec/fuel-ssl-endpoints

 Stas, as you've been working on it, can you please provide current status?

 Thanks,

 --
 Mike Scherbakov
 #mihgen


 --
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [Plugins] Node role as a plugin is in master

2015-07-22 Thread Igor Kalnitsky
Hi fuelers,

I'm glad to announce that the bp/role-as-a-plugin story [1] has been
successfully implemented. Since yesterday all the functionality is in master;
feel free to use it in your plugins.

Briefly, now you are able:

* to define new node roles in a way similar to openstack.yaml, with
all supported meta flags (e.g. public_ip_requires, updates_requires,
etc.)
* to define new deployment tasks for new and/or existing node roles
(you can inject new task almost everywhere, since it will go through
one global deployment graph)
* to define new volumes and volumes mapping schema for your node roles
(you don't need to perform partitioning in your deployment script on
your own)
* to overwrite any deployment task for existing roles or even skip it
so it won't be executed at all
* to reuse some useful core tasks in your roles (e.g. hiera, globals,
netconfig, etc)

Bonus:

If you need to edit something in an installed plugin, you can edit it
in place (/var/www/nailgun/plugins/your-plugin) and apply all the changes
by asking Nailgun to sync it:

  $ fuel plugins --sync

Please note, in order to use all these features you need

* to run Fuel ISO from master branch
* to use fuel plugin builder from master branch

Thanks,
Igor

[1]: https://blueprints.launchpad.net/fuel/+spec/role-as-a-plugin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tricircle]Team Weekly Meeting 2015.07.22

2015-07-22 Thread Zhipeng Huang
Hi Team,

As usual we will have our meeting at 1300 UTC. We will be discussing
https://review.openstack.org/201224; zhiyuan might have some input for Saggi
:)

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [stable] Freeze exception

2015-07-22 Thread Timofei Durakov
Hello

I'd like to ask for a freeze exception for the following bug fix:

https://review.openstack.org/#/c/198340/

bug: https://bugs.launchpad.net/nova/+bug/197

merged bug fix in master: https://review.openstack.org/#/c/173913/

During live migration, _post_live_migration and
post_live_migration_at_destination are executed concurrently,
because the second one is called over rpc.cast.
In the _post_live_migration method there was a setup_network_on_host call
with teardown=True, which expects the new host to already be set in the
instance's db record.

This update can happen later, as it executes on the destination
node in the second method. To guarantee execution order, the
setup_network_on_host call, which cleans the
dhcp-hostfile, is moved to the destination node.
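
To illustrate the ordering problem in miniature, here is a hedged sketch
(plain Python, not nova code; names and delay are contrived) of why a
cast-dispatched handler can lose the race with the caller's next step:

  import threading
  import time

  instance = {"host": "source"}

  def at_destination():              # stands in for the destination-side method
      time.sleep(0.1)                # rpc/scheduling delay
      instance["host"] = "destination"

  def cast(fn):                      # rpc.cast: fire-and-forget dispatch
      threading.Thread(target=fn).start()

  cast(at_destination)
  # the source-side teardown expected the new host to be recorded already:
  print("teardown sees host =", instance["host"])   # usually still "source"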
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Question about unique default hostname for node

2015-07-22 Thread Igor Kalnitsky
Hi guys,

@Sergii, it looks like you misunderstood something. `node-uuid` is not
a general use case. It's only about conflicting nodes, and I'm sure
everyone's going to change such a hostname in order to avoid
confusion.

@Andrew,

a) The database refuses hostnames that break the unique constraint, so it'll
work out of the box.

b) I like this idea. I think refusing `node-id` where `id` is not
actually the node's id is a good idea. It solves our problem.
Thanks,
Igor

On Wed, Jul 22, 2015 at 8:21 AM, Sergii Golovatiuk
sgolovat...@mirantis.com wrote:
 node-uuid is terrible from a UX perspective. Ask support people
 if they are comfortable sshing to such nodes or saying the name in a phone
 conversation with a customer. If we cannot validate the FQDN of a hostname,
 I would slip this feature to the next release, where we can pay more
 attention to details.

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] The designate API service is stopped

2015-07-22 Thread Jaime Fernández
I moved the virtual machine where the designate processes are running onto a
host in the same LAN as OST. Now the designate-api process does not die
anymore (still using qpid). I suspect that some network problem or timeout
could be the reason for this issue. We'll keep monitoring the process to
confirm it.

I understand that it's a bug in designate-api or the oslo.messaging library.

On Tue, Jul 21, 2015 at 1:19 PM, Jaime Fernández jjja...@gmail.com wrote:

 Hi Kiall,

 It's a bit strange because only designate-api dies but designate-sink is
 also integrated with qpid and survives.

 These issues are a bit difficult because they are not deterministic. What
 I've just tested is using a local qpid instance, and it looks like the
 designate-api is not killed any more (though it's only been a short period
 of time). We are going to move the host where the designate components are
 installed into the same VLAN as the rest of OST, just to check whether it's
 a rare issue with the network.

 Before testing with Rabbit, as you recommended, we are testing with qpid
 in the same VLAN (just to rule out the network issue).

 I will keep you informed about my progress.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] AGAIN HELP CONFIRM OR DISCUSS: create a port when network contains ipv4 subnets and ipv6 subnets, allocate ipv6 address to the port.

2015-07-22 Thread zhaobo
Hi,
Could anyone please check the bug below? I have received messages from some 
kind people, but I didn't get the answer I want.
The key point is why a port created with a specific ipv4 subnet as its 
fixed-ip gets an ipv6 addr when there are some ipv6 slaac/dhcp-stateful subnets.
I know a slaac subnet will automatically allocate an ipv6 addr, but users may 
be confused by this behavior.
https://bugs.launchpad.net/neutron/+bug/1467791


The bug description:
The created network contains one ipv4 subnet and an ipv6 subnet with slaac or 
dhcpv6-stateless turned on.
When I create a port using a cmd like:
neutron port-create --fixed-ip subnet_id=$[ipv4_subnet_id] $[network_id/name]
the specified fixed-ip is on the ipv4 subnet, but the returned port also 
contains an address from the ipv6 subnet.




If the user just wants a port with ipv4, why does the returned port have an 
allocated ipv6 address?
I know this is designed behavior, per 
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/multiple-ipv6-prefixes.html#proposed-change
but we are still confused by this operation.


Also, when we create a floating IP and the external network contains just an 
ipv6 slaac subnet, the port will contain the ipv6 addr. Are there some issues here?

Thank you to anyone who can help confirm this issue; I hope you can reply 
asap.


ZhaoBo


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] AGAIN HELP CONFIRM OR DISCUSS: create a port when network contains ipv4 subnets and ipv6 subnets, allocate ipv6 address to the port.

2015-07-22 Thread Sridhar Gaddam

Hello ZhaoBo,

The short summary of the BP [1] is that when a network contains an IPv6 
SLAAC/dhcpv6-stateless subnet, we use the RADVD daemon to advertise the 
prefix. The RADVD daemon periodically advertises the prefix info in a 
multicast message received by all the hosts/VMs on the network. When 
an ipv6-capable host/VM receives this Router Advertisement message, it 
generates and configures an EUI-64 IPv6 address (derived from the 
prefix and the interface MAC address) on its interface.
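
As a concrete illustration of the derivation, here is a hedged sketch in
plain Python (not Neutron code; the prefix and MAC below are example values
only). It follows the standard EUI-64 construction: flip the universal/local
bit of the MAC, insert ff:fe in the middle, and append the result to the /64
prefix:

  import ipaddress

  def slaac_address(prefix, mac):
      octets = [int(b, 16) for b in mac.split(":")]
      octets[0] ^= 0x02                      # flip the universal/local bit
      eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
      iid = int.from_bytes(bytes(eui64), "big")
      net = ipaddress.IPv6Network(prefix)
      return ipaddress.IPv6Address(int(net.network_address) | iid)

  print(slaac_address("2001:db8:0:1::/64", "fa:16:3e:aa:bb:cc"))
  # -> 2001:db8:0:1:f816:3eff:feaa:bbcc

This is why the address exists on the VM whether or not Neutron records it.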


I understand your concern that you only specified an IPv4 address 
for the port but got an IPv6 address as well.

This is because..
1. You have IPv6 subnets as part of the same network
2. The nature in which SLAAC works and is implemented in Neutron.

Basically, even if Neutron does not include the IPv6 address in the port 
information, the VM port would still have the IPv6 address configured on 
its interface. Since the port would anyway have the IPv6 address, 
Neutron includes the IPv6 addresses (i.e., SLAAC/DHCPv6-Stateless) for 
the user to be aware of the list of addresses on the port.


Please note that this behavior is only for IPv6 subnets configured with 
SLAAC/DHCPv6-Stateless and not for DHCPv6 Stateful subnets.


Regarding the floating-ip behavior, the issue was recently fixed with the 
following patch - https://review.openstack.org/#/c/198908/
[1] 
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/multiple-ipv6-prefixes.html#proposed-change


Regards,
--Sridhar.

On 07/22/2015 04:35 PM, zhaobo wrote:

Hi,
Could anyone please check the bug below? I have received messages from 
some kind people, but I didn't get the answer I want.
The key point is why a port created with a specific ipv4 subnet as its 
fixed-ip gets an ipv6 addr when there are some ipv6 slaac/dhcp-stateful 
subnets.
I know a slaac subnet will automatically allocate an ipv6 addr, but users 
may be confused by this behavior.

https://bugs.launchpad.net/neutron/+bug/1467791

The bug description:
The created network contains one ipv4 subnet and an ipv6 subnet with slaac 
or dhcpv6-stateless turned on.

When I create a port using a cmd like:
neutron port-create --fixed-ip subnet_id=$[ipv4_subnet_id] 
$[network_id/name]
the specified fixed-ip is on the ipv4 subnet, but the returned port also 
contains an address from the ipv6 subnet.

If the user just wants a port with ipv4, why does the returned port have 
an allocated ipv6 address?
I know this is designed behavior, per 
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/multiple-ipv6-prefixes.html#proposed-change
but we are still confused by this operation.

Also, when we create a floating IP and the external network contains just 
an ipv6 slaac subnet, the port will contain the ipv6 addr. Are there some 
issues here?

Thank you to anyone who can help confirm this issue; I hope you can reply 
asap.

ZhaoBo






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Moving instack upstream

2015-07-22 Thread Steven Hardy
On Tue, Jul 21, 2015 at 08:29:49PM +0100, Derek Higgins wrote:
 Hi All,
Something we discussed at the summit was to switch the focus of tripleo's
 deployment method to deploying with instack, using images built with
 tripleo-puppet-elements. Up to now all the instack work has been done
 downstream of tripleo as part of rdo. Having parts of our deployment story
 outside of upstream gives us problems mainly because it becomes very
 difficult to CI what we expect deployers to use while we're developing the
 upstream parts.
 
 Essentially what I'm talking about here is pulling instack-undercloud
 upstream along with a few of its dependency projects (instack,
 tripleo-common, tuskar-ui-extras etc..) into tripleo and using them in our
 CI in place of devtest.
 
 Getting our CI working with instack is close to working but has taken longer
 than I expected because of various complications and distractions but I hope
 to have something over the next few days that we can use to replace devtest
 in CI, in a lot of ways this will start out by taking a step backwards but
 we should finish up in a better place where we will be developing (and
 running CI on) what we expect deployers to use.
 
 Once I have something that works I think it makes sense to drop the jobs
 undercloud-precise-nonha and overcloud-precise-nonha, while switching
 overcloud-f21-nonha to use instack, this has a few effects that need to be
 called out
 
 1. We will no longer be running CI on (and as a result not supporting) most
 of the bash based elements
 2. We will no longer be running CI on (and as a result not supporting)
 ubuntu

+1, it's been quite inconvenient having to constantly update the image-based
templates when nobody is really using/maintaining them anymore AFAIK;
this should provide the opportunity to purge deprecated stuff
from t-h-t and simplify things considerably.

 Should anybody come along in the future interested in either of these things
 (and prepared to put the time in) we can pick them back up again. In fact
 the move to puppet element based images should mean we can more easily add
 in extra distros in the future.

+1, it should actually make it easier to e.g. integrate stuff like support
for containers, because we won't have to trip over the unmaintained
templates every time any role template's interfaces change.

 3. While we find our feet we should remove all tripleo-ci jobs from non
 tripleo projects, once we're confident with it we can explore adding our
 jobs back into other projects again

I hope we can make this very temporary - e.g. I find the tripleo feedback on
the heat project helpful, particularly when merging a potentially risky
change, and as we know TripleO has proven an extremely good kitchen sink
integration test for many projects ;)

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [all] CI Infrastructure down

2015-07-22 Thread Andreas Jaeger

Sorry, should have gone to openstack-docs instead of openstack-dev ;(

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Minimum Unit Test Coverage

2015-07-22 Thread Rob Cresswell (rcresswe)
Hi all,

As far as I’m aware, we don’t currently enforce any minimum unit test coverage, 
despite Karma generating reports. I think as part of the review guidelines, it 
would be useful to set a minimum. Since Karma’s detection is fairly relaxed, 
I’d put it at 100% on the automated reports.

I think the biggest drawback is that the tests may not be “valuable”, but 
rather just meet the minimum requirements. I understand this sentiment, but I 
think that “less valuable” is better than “not present” and it gives reviewers 
a clear line to +1/ -1 a patch. Furthermore, it encourages the unit tests to be 
written in the first place, so that reviewers can then ask for improvements, 
rather than miss them.

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-tc][deb-packaging] Updated proposal for OpenStack deb packages

2015-07-22 Thread Igor Yozhikov
Hello again to everyone.

Introductory words:
I want to present a renewed proposal for packaging of OpenStack components
for deb-based Linux distributions.
In case of stackforge retirement, I believe that new repositories for deb
specs should appear under the /openstack/ name-space instead of
/stackforge/.
These and further steps were also advised by dhellmann, anteaya and angdraug
in the #openstack-meeting IRC channel during/after the Technical Committee
meeting (http://paste.openstack.org/show/399927/) yesterday, 21 Jul 2015.
It would also be great to discuss all of this during the next TC meeting,
July 28th.
Link to the previous letter:
http://lists.openstack.org/pipermail/openstack-dev/2015-July/069377.html

Renewed proposal:
During the Liberty summit in Vancouver, the idea of maintaining OpenStack
packages for Debian and Ubuntu on upstream infra was sparked. As part of the
OpenStack and Debian community we still want to push for it. In case of
stackforge retirement it will be awesome to be able to work directly under
the /openstack/ name-space. Mostly, it is the server packages for Debian
which we want to see maintained there. All of the dependencies (including
Oslo libraries and python-*client) will continue to be maintained using
git.debian.org, as a shared effort between Debian and Ubuntu.

Purpose:

   - One centralized place at github.com/openstack/ to maintain package
     build manifests/specs for the main OpenStack projects.

   - /openstack/* already has gerrit, to which additional gates could be
     attached for building and testing OpenStack packages for deb-based
     Linux distributions.

   - All changes related to the ways in which one or another OpenStack
     project should be installed would be represented or proposed in one
     place. So not only famous package maintainers can work on build
     manifests for packages, but also the entire community interested in
     packaging of OpenStack projects. This place could also become the main
     base for packages for all deb-based Linux distributions like Debian
     and Ubuntu.

   - Package build manifests could be reviewed or adjusted not only by
     package maintainers, but also by the python developers writing
     OpenStack code, which could be very valuable due to possible changes
     in configuration, paths and so on.


Projects list:

Here I propose a list of OpenStack projects; it consists of about 25
projects, which will of course change over time as new projects appear.

Project name -> Possible github.com/openstack repository

ceilometer -> http://github.com/openstack/deb-ceilometer
ceilometermiddleware -> http://github.com/openstack/deb-ceilometermiddleware
cinder -> http://github.com/openstack/deb-cinder
glance -> http://github.com/openstack/deb-glance
glance_store -> http://github.com/openstack/deb-glance_store
heat -> http://github.com/openstack/deb-heat
horizon -> http://github.com/openstack/deb-horizon
ironic -> http://github.com/openstack/deb-ironic
keystone -> http://github.com/openstack/deb-keystone
keystonemiddleware -> http://github.com/openstack/deb-keystonemiddleware
mistral -> http://github.com/openstack/deb-mistral
mistral-dashboard -> http://github.com/openstack/deb-mistral-dashboard
murano -> http://github.com/openstack/deb-murano
murano-dashboard -> http://github.com/openstack/deb-murano-dashboard
neutron -> http://github.com/openstack/deb-neutron
neutron-fwaas -> http://github.com/openstack/deb-neutron-fwaas
neutron-lbaas -> http://github.com/openstack/deb-neutron-lbaas
neutron-vpnaas -> http://github.com/openstack/deb-neutron-vpnaas
nova -> http://github.com/openstack/deb-nova
rally -> http://github.com/openstack/deb-rally
sahara -> http://github.com/openstack/deb-sahara
sahara-dashboard -> http://github.com/openstack/deb-sahara-dashboard
swift -> http://github.com/openstack/deb-swift
tempest -> http://github.com/openstack/deb-tempest
trove -> http://github.com/openstack/deb-trove





Thanks,
Igor Yozhikov
Senior Deployment Engineer
at Mirantis http://www.mirantis.com
skype: igor.yozhikov
cellular: +7 901 5331200
slack: iyozhikov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] meeting time (again)

2015-07-22 Thread Tim Hinrichs
Hi all,

Our new meeting time seemed to work well for almost everyone.
Unfortunately, there's one person for whom the meeting time is 4:30am.  So
before our new meeting time becomes part of our routine, it's worth
figuring out if we can find a time when everyone is awake.

Here's a doodle poll.  Ignore the date on it.  I'm mainly interested in
when the non-US folks are available (meaning awake).  Sorry all the times
are Pacific--I couldn't find a way to include multiple time zones and
figured I'd be least likely to make mistakes this way.

http://doodle.com/intkzvgizusb7b4t

Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [all] CI Infrastructure down

2015-07-22 Thread Andreas Jaeger

FYI, no need to recheck right now your failing jobs...

Andreas


 Forwarded Message 
Subject: [openstack-dev] [all] CI Infrastructure down
Date: Wed, 22 Jul 2015 15:08:10 +0200
From: Andreas Jaeger a...@suse.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Organization: SUSE Linux Products GmbH, Nuernberg, GF: Jeff Hawn, 
Jennifer Guild, Felix Imendörffer , HRB 16746 (AG Nuernberg)
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org


This morning we hit some problems with our CI infrastructure, the infra
team is still investigating and trying to fix it.

You can upload changes and comment on them but many jobs will fail with
NOT_REGISTERED.

Do not recheck these until the infrastructure is fixed again.

Also, there's no sense in approving changes currently, they might be hit
by the same NOT_REGISTERED jobs or fail in the post jobs.

So, happy coding, reviewing - and local testing for now,

Andreas
--
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
HRB 21284 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Question about unique default hostname for node

2015-07-22 Thread Fedor Zhadaev
Thanks for your answers.

Let me clarify some points:

Sure, we have to validate hostnames during node renaming. And sure we do
it. This issue appears when we already have a node with the name 'node-X'
and a new node is created without a custom name being provided. I'll give
you an example:

1. The user has a node with hostname 'node-4' (with ID = 4, and there are no
nodes with ID > 4);
2. He renames it to 'node-5' (this name is correct and unique - OK);
3. He adds a new node without providing a custom hostname.
The new node gets ID = 5 (it's a primary key and increments automatically)
and the default hostname 'node-5'. (Here we have a problem with uniqueness.)

It would be strange if we refused to create a node with its default name
just because somebody has renamed another node to that name.

About node hostnames: actually, we can't refuse custom hostnames in the
format 'node-{#}', because that is one of the main use cases. So we need to
find a solution which accepts such renaming.
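
To make Igor's option (b) concrete, here is a minimal sketch of such a check
(plain Python; the function name and error message are hypothetical, not the
actual Nailgun validator):

  import re

  NODE_NAME_RE = re.compile(r'^node-(\d+)$')

  def validate_hostname(hostname, node_id):
      # allow 'node-{#}' only when {#} is the node's own id, so a rename
      # can never collide with another node's future default hostname
      match = NODE_NAME_RE.match(hostname)
      if match and int(match.group(1)) != node_id:
          raise ValueError("Hostname %s is reserved for the node with "
                           "id %s" % (hostname, match.group(1)))

  validate_hostname('node-5', 5)        # ok: the node's own default name
  validate_hostname('ctrl-1', 4)        # ok: not in the reserved format
  validate_hostname('node-5', 4)        # raises: reserved for node 5

This keeps the 'node-{#}' use case while making the collision in the example
above impossible.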

2015-07-22 12:42 GMT+03:00 Igor Kalnitsky ikalnit...@mirantis.com:

 Hi guys,

 @Sergii, it looks like you misunderstood something. `node-uuid` is not
 a general use case. It's only about conflicting nodes, and I'm sure
 everyone's going to change such a hostname in order to avoid
 confusion.

 @Andrew,

 a) The database refuses hostnames that break the unique constraint, so it'll
 work out of the box.

 b) I like this idea. I think refusing `node-id` where `id` is not
 actually the node's id is a good idea. It solves our problem.

 Thanks,
 Igor

 On Wed, Jul 22, 2015 at 8:21 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  node-uuid is terrible from a UX perspective. Ask support people
  if they are comfortable sshing to such nodes or saying the name in a
  phone conversation with a customer. If we cannot validate the FQDN of a
  hostname, I would slip this feature to the next release, where we can
  pay more attention to details.
 
  --
  Best regards,
  Sergii Golovatiuk,
  Skype #golserge
  IRC #holser
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kind Regards,
Fedor Zhadaev
Junior Software Engineer, Mirantis Inc.
Skype: zhadaevfm
E-mail: fzhad...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][puppet] Module Sync for Murano and Sahara

2015-07-22 Thread Denis Egorenko
Hi Andrew!

Sahara is already merged. All CI tests succeeded; we also built a custom
iso [1] and ran bvt tests [2], which also succeeded, and we got +1 from the
QA team.
For Murano we will do the same: resolve all comments, build a custom iso, run
a custom bvt and wait for +1 from Fuel CI and the QA team.

[1]
http://jenkins-product.srt.mirantis.net:8080/view/custom_iso/job/custom_7.0_iso/562/

[2]
http://jenkins-product.srt.mirantis.net:8080/view/custom_iso/job/7.0.custom.ubuntu.bvt_2/131/

2015-07-22 0:41 GMT+03:00 Andrew Woodward xar...@gmail.com:

 I was looped into reviewing the sync commits for Murano and Sahara. Both
 are in terrible shape and risk missing feature freeze at this point.

 We need feedback from the authors here. What is actually required for
 Kilo support (if any) from the Murano and Sahara modules? What will happen
 if these slip the release? What can you do to simplify the review scope?
 The most we can reasonably review is 500 LOC in any short time (and that's
 pushing it).

 Synopsis:
 murano [1] is -2; this can't be merged: there is an adapt commit without
 any sync commit. The only way we will accept the fork method is a sync from
 upstream + adapt, as documented in [2]; also, it's nigh impossible to review
 something this large without the separation.
 -2 There is no upstream repo with content, so where did this even come
 from? We are/were the authority for murano at present so I'm baffled as to
 where this came from.

 Possible way through: A) Split sync from adapt; hopefully the adapt is
 small enough to review. B) Make only the changes necessary for kilo support.

 Sahara [3][4]
 This is a RED flag here; I'm not even sure whether to call it -1, -2 or
 something entirely else. From a chat I had with Serg M: this is a sync of
 upstream, plus the code on review from fuel that is not merged into
 puppet-sahara. I'm going to say that our fork is in much better shape at
 this moment, and we should just let it be. We shouldn't sync this until the
 upstream code has landed.

 Possible way through: C) The two outstanding commits inside the adapt
 commit need to be pulled out. They should be proposed right on top of the
 sync commit and should apply cleanly. I would prefer to see them as
 separate commits so they can be compared to the source more accurately.
 This should bring the adapt to something that could be reviewed. D) propose
 only the changes necessary to get kilo support.

 [1] https://review.openstack.org/#/c/203731/
 [2]
 https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Adding_new_puppet_modules_to_fuel-library
 [3] https://review.openstack.org/#/c/202045
 [4] https://review.openstack.org/#/c/202195/
 --

 --

 Andrew Woodward

 Mirantis

 Fuel Community Ambassador

 Ceph Community

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-22 Thread John Belamaric


This implies an IP allocation request that passes something other than a 
network/port to the IPAM subsystem.


What I forgot to mention in my previous email is that proposal #2 is basically 
a feature we are planning for our custom IPAM driver (without the routing 
piece). We allow arbitrary tagging of subnets with meta-data in our backend. We 
plan to enable the user to utilize this information when making an IPAM request 
via a custom IPAM request type. However, it would be a two-step process: use 
the meta-data to find the network, then use the network to get the IP.




As a (gulp) third alternative, we should consider that the front network here 
is in essence a layer 3 domain, and we have modeled layer 3 domains as address 
scopes in Liberty. The user is essentially saying "give me an address that is 
routable in this scope" - they don't care which actual subnet it gets allocated 
on. This is conceptually more in line with [2] - modeling the L3 domain separately 
from the existing Neutron concept of a network being a broadcast domain.

Again, the issue is that when you ask for an address you tend to have quite a 
strong opinion of what that address should be if it's location-specific.



An alternative that gives you more control than using an address scope would be 
to use a subnet pool. The more I think about this, it seems to me that in 
this particular use case, all we are talking about is a grouping of subnets. 
This leads me to favor tying it to subnet pools rather than to a network, 
because you can group subnets arbitrarily (though a given subnet can be in only 
one pool).

The issue is the often arbitrary and overloaded use of the word "network". This 
has already been defined as an L2 broadcast domain, so I am not sure it is a 
good idea to change that at this time.


Fundamentally, however we associate the segments together, this comes down to a 
scheduling problem.

It's not *solely* a scheduling problem, and that is my issue with this 
statement (Assaf has been saying the same).  You *can* solve this *exclusively* 
with scheduling (allocate the address up front, hope that a host that can use 
that address has space for a VM with all its constraints met) - but that 
solution is horrible; or you can solve this largely with allocation, where 
scheduling helps to deal with pool exhaustion - where it is mainly another sort 
of problem but scheduling plays a part.


Fair enough.


John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nailgun TypeValue error

2015-07-22 Thread Denis Egorenko
Hi colleagues!

I want to make you aware of an error in Nailgun [1]. I caught it in my patch
for Sahara [2] (patchset 11, after resolving a comment in 10). CI
tests failed with the error:

nailgun serializer failed TypeError: Value of settings:sahara.enabled.value
is undefined. Set options.strict to false to allow undefined value

As I described in the bug, we have the wrong option name for Sahara (and I
suppose for Murano too) in Nailgun, so we need to rename those options.

[1] https://bugs.launchpad.net/fuel/+bug/1476953
[2] https://review.openstack.org/#/c/202195

-- 
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Jenkins jobs are not executed when setting up a new CI system.

2015-07-22 Thread Tang Chen

Hi Abhishek,


On 07/22/2015 03:56 PM, Abhishek Shrivastava wrote:

Hi Tang,
Reboot your master VM and then try again. Also, after restarting, 
check the status of zuul and zuul-merger.




I found the problem: zuul could not fetch the repo because of the proxy.
As a result, the merge failed, so the job was not run.

Now it is OK. :)

Thanks. :)

On Wed, Jul 22, 2015 at 12:14 PM, Tang Chen tangc...@cn.fujitsu.com wrote:



On 07/22/2015 12:49 PM, Tang Chen wrote:

Hi all,

When I send a patch to gerrit, my zuul is notified, but jenkins
jobs are not run.

My CI always reports the following error:

Merge Failed.

This change was unable to be automatically merged with the current state of 
the repository. Please rebase your change and upload a new patchset.

I think, because the patch cannot be merged, the jobs are not run.

Referring to https://www.mediawiki.org/wiki/Gerrit/Advanced_usage, I did 
update my master branch and made sure it is up-to-date, but it doesn't work. 
And other CIs from other companies didn't report this error.


process_event_queue()
 |-- pipeline.manager.addChange()
 |-- reports "Unable to find change queue for change
     <Change 0x7fa7ef8b6250 204446,1> in project openstack-dev/sandbox"

204446 is my patch number.

Does anyone know why that is?

Thanks.





And also, when zuul tries to get the patch from gerrit, it executes:

gerrit query --format json --all-approvals --comments --commit-message 
--current-patch-set --dependencies --files --patch-sets --submit-records 204337


When I try to execute it myself, it reports: "Permission denied 
(publickey)".

I updated my ssh key, and uploaded the new public key to gerrit, but it 
doesn't work.


Does anyone have any idea what's going on here?

Thanks.







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks & Regards,
Abhishek
Cloudbyte Inc. http://www.cloudbyte.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Hiera nodes issue for 6.1 plugins

2015-07-22 Thread Aleksandr Didenko
Hi,

I think we should just fix the bug so that nodes.yaml matches the data
in astute.yaml, because the 'nodes' Hiera key is also used for the /etc/hosts
update. I've raised the bug priority to high.

Regards,
Alex

On Wed, Jul 22, 2015 at 2:42 PM, Irina Povolotskaya 
ipovolotsk...@mirantis.com wrote:

 Hi to all,

 Swann Croiset reported a bug on Hiera nodes [1].
 This issue affects several plugins so far.

 In 6.1, there is no workaround.
 In 7.0,  there should be a new structure for networks_metadata;
 this means, there will be ready-to-go puppet functions to get the data.

 Unfortunately, it impacts UX greatly.

 Thanks.


 [1] https://bugs.launchpad.net/fuel/+bug/1476957

 --
 Best regards,

 Irina

 *Business Analyst*

 *Partner Enablement*

 *skype: ira_live*







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Restore OSD devices with Puppet Ceph module

2015-07-22 Thread Oleg Gelbukh
Greetings,

While working on the upgrade of OpenStack with the Fuel installer, I met a
requirement to re-add OSD devices with the existing data set to a Ceph
cluster using the Puppet module. The node is reinstalled during the upgrade,
thus the disks used for OSDs are not mounted at Puppet runtime.

The current version of the Ceph module in fuel-library only supports the
addition of new OSD devices. Mounted devices are skipped. Unmounted devices
with a Ceph UUID in the GPT label are passed to the 'ceph-deploy osd prepare'
command, which formats the device and recreates the file system, so all
existing data is lost.

I proposed a patch to allow support for OSD devices with existing data set:
https://review.openstack.org/#/c/203639/2

However, this fix is very straightforward and doesn't account for different
corner cases, as was pointed out by Mykola Golub in review. As this problem
seems rather significant to me, I'd like to bring this discussion to a
broader audience.

So, here's the comment with my replies inline:

I am not sure just reactivating disks that have a filesystem is a safe
approach:

1) If you are deploying a mix of new and restored disks you may end up with
conflicting OSDs joining the cluster with the same ID. 2) It makes sense to
restore OSDs only if a monitor (cluster) is restored, otherwise activation
of old OSDs will fail. 3) It might happen that the partition contains a
valid filesystem by accident (e.g. the user reused disks/hosts from another
cluster) -- it will not join the cluster because of a wrong fsid and
credentials, but the deployment will unexpectedly fail.

1) As far as I can tell, OSD device IDs are assigned by the Ceph cluster
based on already existing devices. So, if some ID is stored on the device,
either a device with the given ID already exists in the cluster and no other
new device will get the same ID, or the cluster doesn't know about a device
with the given ID, which means we already lost the data placement before.
2) This can be fixed by adding a check that ensures that the fsid parameter
in ceph.conf on the node and the cluster-fsid on the device are equal.
Otherwise, the device is treated as a new device, i.e. passed to 'ceph-deploy
osd prepare'.
3) This situation would be covered by the previous check, in my understanding.

Is it possible to pass information that the cluster is restored using
partition preservation? Because I think a much safer approach is:

1) Pass some flag from the user that we are restoring the cluster. 2)
Restore the controller (monitor) and abort deployment if it fails. 3) When
deploying an OSD host, if the 'restore' flag is present, skip the prepare step
and try only to activate all disks if possible (we might want to ignore
activation errors and continue with the other disks so we restore as many
OSDs as possible).

The case I want to support by this change is not restoration of the whole
cluster, but rather support for reinstallation of an OSD node's operating
system. For this case, the approach you propose seems actually more correct
than my implementation. For a node being reinstalled we do not expect new
devices, only ones with the existing data set, so we don't need to check for
new devices specifically, but rather just skip prepare for all devices.

We still need to check that the value of fsid on the disk is consistent
with the cluster's fsid.
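
For illustration, a rough sketch of that fsid consistency check (assuming the
OSD data partition can be temporarily mounted and carries the standard
ceph_fsid marker file; paths and the helper name are illustrative, not the
actual fuel-library code):

import configparser
import os
import subprocess

def fsid_matches_cluster(device, mountpoint='/mnt/osd-check',
                         conf='/etc/ceph/ceph.conf'):
    # Read the cluster fsid this node was configured with.
    parser = configparser.ConfigParser()
    parser.read(conf)
    cluster_fsid = parser.get('global', 'fsid').strip()

    # Read the fsid recorded on the OSD data partition.
    subprocess.check_call(['mount', device, mountpoint])
    try:
        with open(os.path.join(mountpoint, 'ceph_fsid')) as f:
            device_fsid = f.read().strip()
    finally:
        subprocess.check_call(['umount', mountpoint])
    return device_fsid == cluster_fsid

A device that fails this check would fall through to the existing
'ceph-deploy osd prepare' path, i.e. be treated as a new device.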

Which issues should we anticipate with this kind of approach?

Another question that is still unclear to me is whether someone really needs
support for a hybrid use case where new and existing unmounted OSD
devices are mixed in one OSD node.

--
Best regards,
Oleg Gelbukh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSLB][LBaaS] Launchpad project, git repos and more!

2015-07-22 Thread Hayes, Graham
Stackforge is being deprecated.

On 22/07/15 02:18, Fox, Kevin M wrote:
 Why not stackforge?
 
 Thanks,
 Kevin
 
 From: Hayes, Graham
 Sent: Tuesday, July 21, 2015 11:53:35 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [GSLB][LBaaS] Launchpad project, git repos
 and more!
 
 Hi All,
 
 I have created a github org and 2 repos for us to get started in.
 
 https://github.com/gslb/ is the org, with https://github.com/gslb/gslb
 as the main code repo.
 
 There are 2 teams (GitHub's name for groups): gslb-core and gslb-admin.
 
 Core has read/write access to the repos, and admin can add/remove
 projects.
 
 I also created https://github.com/gslb/gslb-specs which will
 automatically publish to https://gslb-specs.readthedocs.org
 
 There is also a Launchpad project, https://launchpad.net/gslb, with 2
 teams:
 
 gslb-drivers - people who can target bugs / bps
 gslb-core - the maintainers of the project
 
 All the info is on the wiki: https://wiki.openstack.org/wiki/GSLB
 
 So, next question - who should be in what groups? I am open to
 suggestions... should it be an item for discussion next week?
 
 Thanks,
 
 Graham
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Plugins] Hiera nodes issue for 6.1 plugins

2015-07-22 Thread Irina Povolotskaya
Hi to all,

Swann Croiset reported a bug on Hiera nodes [1].
This issue affects several plugins so far.

In 6.1, there is no workaround.
In 7.0,  there should be a new structure for networks_metadata;
this means, there will be ready-to-go puppet functions to get the data.

Unfortunately, it impacts UX greatly.

Thanks.


[1] https://bugs.launchpad.net/fuel/+bug/1476957

-- 
Best regards,

Irina

*Business Analyst*

*Partner Enablement*

*skype: ira_live*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] CI Infrastructure down

2015-07-22 Thread Andreas Jaeger
This morning we hit some problems with our CI infrastructure; the infra 
team is still investigating and trying to fix it.


You can upload changes and comment on them but many jobs will fail with 
NOT_REGISTERED.


Do not recheck these until the infrastructure is fixed again.

Also, there's no sense in approving changes currently; they might be hit 
by the same NOT_REGISTERED jobs or fail in the post jobs.


So, happy coding, reviewing - and local testing for now,

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][magnum] Removal of Daneyon Hansen from the Core Reviewer team for Kolla

2015-07-22 Thread Adrian Otto
Steve,

I can speak on behalf of the Magnum team: we are happy to have more of 
Daneyon’s attention. Tackling our network features is very important to all of 
us. I’m confident that our Kolla team will manage to fill the void that Daneyon 
leaves there. I believe in Kolla and want it to remain on a path to success. 
If there is anything we can do to help our respective teams work well together, 
please let us know.

Regards,

Adrian


On Jul 22, 2015, at 1:47 PM, Steven Dake (stdake) std...@cisco.com wrote:

Fellow Kolla developers,

Daneyon has been instrumental in getting Kolla rolling and keeping our project 
alive.  He even found me a new job that would pay my mortgage and Panamera 
payment so I could continue performing as PTL for Kolla and get Magnum off the 
ground.  But Daneyon has told me that he has a personal objective of 
getting highly involved in the Magnum project and leading the container 
networking initiative coming out of Magnum.  For a sample of his new personal 
mission:

https://review.openstack.org/#/c/204686/

I’m a bit sad to lose Daneyon to Magnum, but life is short and not sweet 
enough.  I personally feel people should do what makes them satisfied and happy 
professionally.  Daneyon will still be present at the Kolla midcycle and 
contribute to our talk (if selected by the community) in Tokyo.  I expect 
Daneyon will make a big impact in Magnum, just as he has with Kolla.

In the future if Daneyon decides he wishes to re-engage with the Kolla project, 
we will welcome him with open arms because Daneyon rocks and does super high 
quality work.

NB: Typically we would vote on the removal of a core reviewer, unless they wish 
to be removed to focus on other projects.  Since that is the case here, there 
is no vote necessary.

Please wish Daneyon well in his adventures in Magnum territory, and pray he 
comes back when he finishes the job on Magnum networking :)

Regards
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-22 Thread Kevin Benton
It seems to me that the existence of the multiprovider extension is
an important point for this discussion.  Multiprovider, as I understand
it, allows describing a network composed of multiple L2 segments
with implicit east-west routing between them.

Not quite. Multiprovider is to describe different L2 segments that are all
connected together via some kind of translator (e.g. l2 gateway) that
ensures they are all part of the same broadcast domain. It doesn't fit with
what we are looking to achieve because all of the segments are supposed to
be part of the same broadcast domain so there is only one DHCP instance for
all segments, subnets are shared between all of them, a router only
attaches to one, etc.

But I was suggesting that there might be some scenarios where routing
happened without a corresponding router object, and in fact it appears
that this is already the case: between the segments of a multiprovider
network.

There isn't routing between different subnets for multiprovider networks. A
multiprovider network is just an l2 broadcast domain realized on multiple
encapsulation mediums.

I think Ian's posts have now clarified this.  The current order of
events is:
- Neutron creates an unbound port, associated with a network, and
allocates an IP address for it.
- Nova chooses a host for the new VM.
- The port is bound and plugged on that host.

This is only the case if someone passes a port UUID into Nova to use. If
Nova creates the port itself, it will wait until the instance is scheduled to
a compute node, so the port creation request should already have the host_id
field set.

Supporting the cases of a pre-created port and of migration will be the issue.
We would essentially need to allow ports to be re-assigned to different
networks to handle these scenarios.

Cheers,
Kevin Benton
On 21/07/15 15:45, Salvatore Orlando wrote:
 On 21 July 2015 at 14:21, Neil Jerram neil.jer...@metaswitch.com wrote:


 You've probably seen Robert Kukura's comment on the related bug at
 https://bugs.launchpad.net/neutron/+bug/1458890/comments/30, and there
 is a useful detailed description of how the multiprovider extension
 works at
 https://bugs.launchpad.net/openstack-api-site/+bug/1242019/comments/3.
 I believe it is correct to say that using multiprovider would be an
 effective substitute for using multiple backing networks with
 different
 {network_type, physical_network, segmentation_id}, and that logically
 multiprovider is aiming to describe the same thing as this email
 thread
 is, i.e. non-overlay mapping onto a physical network composed of
 multiple segments.


 However, I believe multiprovider does not (per se) address the IP
 addressing requirement(s) of the multi-segment scenario.


 Indeed it does not. The multiprovider extension simply indicates that
 a network can be built using different L2 segments.
 It is then up to the operator to ensure that these segments are
 correct, and it's up to whatever is running in the backend to ensure
 that instances on the various segments can communicate each other.

It seems to me that the existence of the multiprovider extension is an
important point for this discussion.  Multiprovider, as I understand it,
allows describing a network composed of multiple L2 segments with
implicit east-west routing between them.  Which is a large part
(although not all) of the requirement that the current discussion is
trying to meet.  Given that multiprovider already exists - and assuming
that it is generally accepted and approved of - surely we should not add
a competing model for the same thing, but should instead look at adding
any additional features to multiprovider that are needed to meet the
complete set of requirements?


 I believe the ask here is for Neutron to provide this capability (the
 neutron reference control plane currently doesn't).

What exactly do you mean here?  I don't think the operators are asking
for a software implementation of the implicit east-west routing, for
example, because they already have physical kit that does that.
(Although perhaps that kit might need some kind of instruction.)  For
the DHCP function (as I think you've written elsewhere) all that's
needed is to run a DHCP agent in each segment.

I don't know if the operators were aware of multiprovider - AFAIK, it
wasn't mentioned as a possible ingredient for this work, before Robert
Kukura's comment cited above - but if they were, I think they might just
be asking for the IP addressing points on top of that; both
segment-based and mobile.


 It is not yet entirely clear to me whether there's a real need of
 changing the logical model, but IP addressing implications might be a
 reason, as pointed out by Neil.



 
  This proposal offers a clear separation between the statically bound
  and the mobile address blocks by associating the former with the
  backing networks and the latter with the front 

Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-22 Thread Kevin Benton
Ultimately, we need to match up the host scheduled by Nova to the
addresses available to that host.  We could do this by delaying
address assignment until after host binding or we could do it by
including segment information from Neutron during scheduling.  The
latter has the advantage that we can consider IP availability during
scheduling.  That is why GoDaddy implemented it that way.

Including segment information to Nova only solves the issue of ruling out
hosts that don't have access to the network that the port is bound to. We
still have to solve the issue of selecting the network for a given host in
the first place. Scheduling filters don't choose things like which resource
to use; they only choose which hosts can satisfy the constraints of the
chosen resources.

I proposed the port scheduling RFE to deal with the part about selecting a
network that is appropriate for the port based on provided hints and
host_id. [1]

the neutron network might have been conceived as being just a broadcast
domain but, in practice, it is L2 and L3.

I disagree with this and I think we need to be very clear on what our API
constructs mean. If we don't, we will have constant proposals to smear the
boundaries between things, which is sort of what we are running into
already.

Today I can create a standalone network and attach ports to it. That
network is just an L2 broadcast domain and has no IP addresses or any L3
info associated with it, but the ports can communicate via L2. The network
doesn't know anything about the l3 addresses and just forwards the traffic
according to L2 semantics.
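
As a hedged illustration of that point with python-neutronclient (credentials
and endpoint are placeholders, not a recommended setup):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# A network with no subnet is a pure L2 broadcast domain.
net = neutron.create_network({'network': {'name': 'l2-only'}})['network']

# A port on it gets no L3 info: fixed_ips comes back empty.
port = neutron.create_port({'port': {'network_id': net['id']}})['port']
assert port['fixed_ips'] == []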

The neutron subnet provides L3 addressing info that can be associated with
an arbitrary neutron network. To route between subnets, we attach routers
to subnets. It doesn't matter if those subnets are on the same or different
networks, because it's L3 and it doesn't matter.

It is conceivable that we could remove the requirement for a subnet to have
an underlying network in the fully-routed case. However, that would mean we
would need to remove the requirement for a port to have a network as well
(unless this is only used for floating IPs).

1. https://bugs.launchpad.net/neutron/+bug/1469668
On Tue, Jul 21, 2015 at 1:11 PM, John Belamaric jbelama...@infoblox.com
wrote:
 Wow, a lot to digest in these threads. If I can summarize my understanding
 of the two proposals. Let me know whether I get this right. There are a
 couple problems that need to be solved:

  a. Scheduling based on host reachability to the segments
  b. Floating IP functionality across the segments. I am not sure I am
clear
 on this one but it sounds like you want the routers attached to the
segments
 to advertise routes to the specific floating IPs. Presumably then they
would
 do NAT or the instance would assign both the fixed IP and the floating IP
to
 its interface?

 In Proposal 1, (a) is solved by associating segments to the front network
 via a router - that association is used to provide a single hook into the
 existing API that limits the scope of segment selection to those
associated
 with the front network. (b) is solved by tying the floating IP ranges to
the
 same front network and managing the reachability with dynamic routing.

 In Proposal 2, (a) is solved by tagging each network with some meta-data
 that the IPAM system uses to make a selection. This implies an IP
allocation
 request that passes something other than a network/port to the IPAM
 subsystem. This is fine from the IPAM point of view but there is no
 corresponding API for this right now. To solve (b) either the IPAM system
 has to publish the routes or the higher level management has to ALSO be
 aware of the mappings (rather than just IPAM).

John, from your summary above, you seem to have the best understanding
of the whole of what I was weakly attempting to communicate.  Thank
you for summarizing.

 To throw some fuel on the fire, I would argue also that (a) is not
 sufficient and address availability needs to be considered as well (as
 described in [1]). Selecting a host based on reachability alone will fail
 when addresses are exhausted. Similarly, with (b) I think there needs to
be
 consideration during association of a floating IP to the effect on
routing.
 That is, rather than a huge number of host routes it would be ideal to
 allocate the floating IPs in blocks that can be associated with the
backing
 networks (though we would want to be able to split these blocks as small
as
 a /32 if necessary - but avoid it/optimize as much as possible).

Yes, address availability is a factor and must be considered in either
case.  My email was getting long already and I thought that could be
considered separately since I believe it applies regardless of the
outcome of this thread.  But, since it seems to be an essential part
of this conversation, let me say something about it.

Ultimately, we need to match up the host scheduled by Nova to the
addresses available to that host.  We could do this by 

Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-22 Thread Kevin Benton
The issue with the availability zone solution is that we now force
availability zones in Nova to be constrained to network configuration. In
the L3 ToR/no overlay configuration, this means every rack is its own
availability zone. This is pretty annoying for users to deal with because
they have to choose from potentially hundreds of availability zones and it
rules out making AZs based on other things (e.g.  current phase, cooling
systems, etc).

I may be misunderstanding and you could be suggesting to not expose this
availability zone to the end user and only make it available to the
scheduler. However, this defeats one of the purposes of availability zones
which is to let users select different AZs to spread their instances across
failure domains.
On Jul 22, 2015 2:41 PM, Assaf Muller amul...@redhat.com wrote:

 I added a summary of my thoughts about the enhancements I think we could
 make to the Nova scheduler in order to better support the Neutron provider
 networks use case.

 - Original Message -
  On Tue, Jul 21, 2015 at 1:11 PM, John Belamaric jbelama...@infoblox.com
 
  wrote:
   Wow, a lot to digest in these threads. If I can summarize my
 understanding
   of the two proposals. Let me know whether I get this right. There are a
   couple problems that need to be solved:
  
a. Scheduling based on host reachability to the segments
b. Floating IP functionality across the segments. I am not sure I am
 clear
   on this one but it sounds like you want the routers attached to the
   segments
   to advertise routes to the specific floating IPs. Presumably then they
   would
   do NAT or the instance would assign both the fixed IP and the floating
 IP
   to
   its interface?
  
   In Proposal 1, (a) is solved by associating segments to the front
 network
   via a router - that association is used to provide a single hook into
 the
   existing API that limits the scope of segment selection to those
 associated
   with the front network. (b) is solved by tying the floating IP ranges
 to
   the
   same front network and managing the reachability with dynamic routing.
  
   In Proposal 2, (a) is solved by tagging each network with some
 meta-data
   that the IPAM system uses to make a selection. This implies an IP
   allocation
   request that passes something other than a network/port to the IPAM
   subsystem. This fine from the IPAM point of view but there is no
   corresponding API for this right now. To solve (b) either the IPAM
 system
   has to publish the routes or the higher level management has to ALSO be
   aware of the mappings (rather than just IPAM).
 
  John, from your summary above, you seem to have the best understanding
  of the whole of what I was weakly attempting to communicate.  Thank
  you for summarizing.
 
   To throw some fuel on the fire, I would argue also that (a) is not
   sufficient and address availability needs to be considered as well (as
   described in [1]). Selecting a host based on reachability alone will
 fail
   when addresses are exhausted. Similarly, with (b) I think there needs
 to be
   consideration during association of a floating IP to the effect on
 routing.
   That is, rather than a huge number of host routes it would be ideal to
   allocate the floating IPs in blocks that can be associated with the
 backing
   networks (though we would want to be able to split these blocks as
 small as
   a /32 if necessary - but avoid it/optimize as much as possible).
 
  Yes, address availability is a factor and must be considered in either
  case.  My email was getting long already and I thought that could be
  considered separately since I believe it applies regardless of the
  outcome of this thread.  But, since it seems to be an essential part
  of this conversation, let me say something about it.
 
  Ultimately, we need to match up the host scheduled by Nova to the
  addresses available to that host.  We could do this by delaying
  address assignment until after host binding or we could do it by
  including segment information from Neutron during scheduling.  The
  latter has the advantage that we can consider IP availability during
  scheduling.  That is why GoDaddy implemented it that way.
 
   In fact, I think that these proposals are more or less the same - it's
 just
   in #1 the meta-data used to tie the backing networks together is
 another
   network. This allows it to fit in neatly with the existing APIs. You
 would
   still need to implement something prior to IPAM or within IPAM that
 would
   select the appropriate backing network.
 
  They are similar but to say they're the same is going a bit too far.
  If they were the same then we'd be done with this conversation.  ;)
 
   As a (gulp) third alternative, we should consider that the front
 network
   here is in essence a layer 3 domain, and we have modeled layer 3
 domains as
   address scopes in Liberty. The user is essentially saying give me an
   address that is routable in this scope - 

Re: [openstack-dev] [keystone] token revocation woes

2015-07-22 Thread Adam Young

On 07/22/2015 03:41 PM, Morgan Fainberg wrote:
This is an indicator that the bottleneck is not strictly the db, but is 
also related to the way we match. This means we need to 
spend some serious cycles on improving both the stored record(s) for 
revocation events and the matching algorithm.


The simplest approach to revocation checking is to do a linear search 
through the events.  I think the old version of the code that did that 
is in a code review, and I will pull it out.


If we remove the tree, then the matching will have to run through each 
of the records and see if there is a match;  the test will be linear 
with the number of records (slightly shorter if a token is actually 
revoked).
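
For illustration, the linear check amounts to something like this (a sketch
with simplified field names; the real model lives in
keystone.contrib.revoke.model):

def is_revoked(events, token):
    # events: list of dicts mapping attribute name -> required value
    # token:  dict of the token's attribute values
    # A token is revoked if some event's constraints all match it, so
    # the cost is O(len(events)) per validation call.
    return any(
        all(token.get(attr) == value for attr, value in event.items())
        for event in events
    )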








Sent via mobile

On Jul 22, 2015, at 11:51, Matt Fischer m...@mattfischer.com wrote:



Dolph,

Per our IRC discussion, I was unable to see any performance 
improvement here, although not calling DELETE so often will reduce the 
number of deadlocks when we're under heavy load, especially given the 
globally replicated DB we use.




On Tue, Jul 21, 2015 at 5:26 PM, Dolph Mathews dolph.math...@gmail.com wrote:


Well, you might be in luck! Morgan Fainberg actually implemented
an improvement that was apparently documented by Adam Young way
back in March:

https://bugs.launchpad.net/keystone/+bug/1287757

There's a link to the stable/kilo backport in comment #2 - I'd be
eager to hear how it performs for you!

On Tue, Jul 21, 2015 at 5:58 PM, Matt Fischer m...@mattfischer.com wrote:

Dolph,

Excuse the delayed reply; I was waiting for a brilliant
solution from someone. Without one, personally I'd prefer the
cronjob, as it seems to be the type of thing cron was designed
for. That will be a painful change, as people now rely on this
behavior, so I don't know if it's feasible. I will be setting
up monitoring for the revocation count, alerting me if it
crosses probably 500 or so. If the problem gets worse then I
think a custom no-op or sql driver is the next step.

Thanks.


On Wed, Jul 15, 2015 at 4:00 PM, Dolph Mathews dolph.math...@gmail.com wrote:



On Wed, Jul 15, 2015 at 4:51 PM, Matt Fischer m...@mattfischer.com wrote:

I'm having some issues with keystone revocation
events. The bottom line is that due to the way
keystone handles the clean-up of these events[1],
having more than a few leads to:

 - bad performance, up to 2x slower token validation
with about 600 events based on my perf measurements.
 - database deadlocks, which cause API calls to fail,
more likely with more events it seems

I am seeing this behavior in code from trunk on June
11 using Fernet tokens, but the token backend does
not seem to make a difference.

Here's what happens to the db in terms of deadlock:
2015-07-15 21:25:41.082 31800 TRACE
keystone.common.wsgi DBDeadlock: (OperationalError)
(1213, 'Deadlock found when trying to get lock; try
restarting transaction') 'DELETE FROM
revocation_event WHERE revocation_event.revoked_at 
%s' (datetime.datetime(2015, 7, 15, 18, 55, 41, 55186),)

When this starts happening, I just go truncate the
table, but this is not ideal. If [1] is really true
then the design is not great; it sounds like keystone
is doing a revocation event clean-up on every token
validation call. Reading from and deleting/locking on my
db cluster is not something I want to do on every
validate call.


Unfortunately, that's *exactly* what keystone is doing.
Adam and I had a conversation about this problem in
Vancouver which directly resulted in opening the bug
referenced on the operator list:

https://bugs.launchpad.net/keystone/+bug/1456797

Neither of us remembered the actual implemented behavior,
which is what you've run into and Deepti verified in the
bug's comments.


So, can I turn off token revocation for now? I didn't
see an obvious no-op driver.


Not sure how, other than writing your own no-op driver,
or perhaps an extended driver that doesn't try to clean
the table on every read?

And in the long run, can this be fixed? I'd rather do
almost anything else, including writing a cronjob,
than keep what happens now.


If 

Re: [openstack-dev] [TripleO] Moving instack upstream

2015-07-22 Thread Derek Higgins

On 22/07/15 18:41, Gregory Haynes wrote:

Excerpts from Derek Higgins's message of 2015-07-21 19:29:49 +:

Hi All,
 Something we discussed at the summit was to switch the focus of
tripleo's deployment method to deploying with instack, using images built
with tripleo-puppet-elements. Up to now all the instack work has been
done downstream of tripleo as part of rdo. Having parts of our
deployment story outside of upstream gives us problems mainly because it
becomes very difficult to CI what we expect deployers to use while we're
developing the upstream parts.

Essentially what I'm talking about here is pulling instack-undercloud
upstream along with a few of its dependency projects (instack,
tripleo-common, tuskar-ui-extras etc..) into tripleo and using them in
our CI in place of devtest.

Getting our CI working with instack is nearly done but has taken
longer than I expected because of various complications and distractions.
However, I hope to have something over the next few days that we can use to
replace devtest in CI. In a lot of ways this will start out by taking a
step backwards, but we should finish up in a better place where we will
be developing (and running CI on) what we expect deployers to use.

Once I have something that works, I think it makes sense to drop the jobs
undercloud-precise-nonha and overcloud-precise-nonha, while switching
overcloud-f21-nonha to use instack. This has a few effects that need to
be called out:

1. We will no longer be running CI on (and as a result not supporting)
most of the bash-based elements
2. We will no longer be running CI on (and as a result not supporting)
Ubuntu


I'd like to point out that this means DIB will no longer have an image
booting test for Ubuntu. I have created a review[1] to try and get some
coverage of this in a dib speific test, hopefully we can get it merged
before we remove the tripleo ubuntu tests?


I should have mentioned this: the plan we discussed to cover this case 
was that the nonha test would build and boot an Ubuntu-based user image 
on the deployed cloud. I like the look of the test you proposed; I'll 
give it a proper review in the morning. Whichever we end up using, I 
agree we should continue to test that DIB can create a bootable Ubuntu 
image.






Should anybody come along in the future interested in either of these
things (and prepared to put the time in) we can pick them back up again.
In fact the move to puppet element based images should mean we can more
easily add in extra distros in the future.

3. While we find our feet we should remove all tripleo-ci jobs from
non-tripleo projects; once we're confident with it we can explore adding our
jobs back into other projects again


 I assume DIB will be keeping the tripleo jobs for now?

Yup, we should still be running tripleo tests on DIB, although I don't 
believe we need to run every tripleo test on it.






Nothing has changed yet. In order to check we're all on the same page,
these are the high-level details of how I see things proceeding, so shout
now if I got anything wrong or you disagree.

Sorry for not sending this out sooner for those of you who weren't at
the summit,
Derek.



-Greg

[1] https://review.openstack.org/#/c/204639/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [VXLAN] patch to use per-VNI multicast group addresses

2015-07-22 Thread John Nielsen
Thanks for the guidance. I put the patch on gerrit:
https://review.openstack.org/#/c/204725/

JN

 On Jul 21, 2015, at 4:59 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:
 
 It is useful, yes; and posting diffs on the mailing list is not the way to 
 get them reviewed and approved.  If you can get this on gerrit it will get a 
 proper review, and I would certainly like to see something like this 
 incorporated.
 
 On 21 July 2015 at 15:41, John Nielsen li...@jnielsen.net wrote:
 I may be in a small minority since I a) use VXLAN, b) don’t hate multicast 
 and c) use linuxbridge instead of OVS. However I thought I’d share this 
 patch in case I’m not alone.
 
 If you assume the use of multicast, VXLAN works quite nicely to isolate L2 
 domains AND to prevent delivery of unwanted broadcast/unknown/multicast 
 packets to VTEPs that don’t need them. However, the latter only holds up if 
 each VXLAN VNI uses its own unique multicast group address. Currently, you 
 have to either disable multicast (and use l2_population or similar) or use 
 only a single group address for ALL VNIs (and force every single VTEP to 
 receive every BUM packet from every network). For my usage, this patch seems 
 simpler.
 
 Feedback is very welcome. In particular I’d like to know if anyone else 
 finds this useful and if so, what (if any) changes might be required to get 
 it committed. Thanks!
 
 JN
 
 
 commit 17c32a9ad07911f3b4148e96cbcae88720eef322
 Author: John Nielsen j...@jnielsen.net
 Date:   Tue Jul 21 16:13:42 2015 -0600
 
 Add a boolean option, vxlan_group_auto, which if enabled will compute
 a unique multicast group address for each VXLAN VNI. Since VNIs
 are 24 bits, they map nicely to the 239.0.0.0/8 site-local multicast
 range. Eight bits of the VNI are used for each of the second, third and
 fourth octets (with 239 always as the first octet).
 
 Using this option allows VTEPs to receive BUM datagrams via multicast,
 but only for those VNIs in which they participate. In other words, it is
 an alternative to the l2_population extension and driver for environments
 where both multicast and linuxbridge are used.
 
 If the option is True then multicast groups are computed as described
 above. If the option is False then the previous behavior is used
 (either a single multicast group is defined by vxlan_group or multicast
 is disabled).
 
 diff --git a/etc/neutron/plugins/ml2/linuxbridge_agent.ini 
 b/etc/neutron/plugins/ml2/linuxbridge_agent.ini
 index d1a01ba..03578ad 100644
 --- a/etc/neutron/plugins/ml2/linuxbridge_agent.ini
 +++ b/etc/neutron/plugins/ml2/linuxbridge_agent.ini
 @@ -25,6 +25,10 @@
  # This group must be the same on all the agents.
  # vxlan_group = 224.0.0.1
  #
 +# (BoolOpt) Derive a unique 239.x.x.x multicast group for each vxlan VNI.
 +# If this option is true, the setting of vxlan_group is ignored.
 +# vxlan_group_auto = False
 +#
  # (StrOpt) Local IP address to use for VXLAN endpoints (required)
  # local_ip =
  #
 diff --git a/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py 
 b/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py
 index 6f15236..b4805d5 100644
 --- a/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py
 +++ b/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py
 @@ -31,6 +31,9 @@ vxlan_opts = [
                 help=_("TOS for vxlan interface protocol packets.")),
      cfg.StrOpt('vxlan_group', default=DEFAULT_VXLAN_GROUP,
                 help=_("Multicast group for vxlan interface.")),
 +    cfg.BoolOpt('vxlan_group_auto', default=False,
 +                help=_("Derive a unique 239.x.x.x multicast group for each "
 +                       "vxlan VNI")),
      cfg.IPOpt('local_ip', version=4,
                help=_("Local IP address of the VXLAN endpoints.")),
      cfg.BoolOpt('l2_population', default=False,
 diff --git 
 a/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py 
 b/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py
 index 61627eb..a0efde1 100644
 --- 
 a/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py
 +++ 
 b/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py
 @@ -127,6 +127,14 @@ class LinuxBridgeManager(object):
              LOG.warning(_LW("Invalid Segmentation ID: %s, will lead to "
                              "incorrect vxlan device name"), segmentation_id)
 
 +    def get_vxlan_group(self, segmentation_id):
 +        if cfg.CONF.VXLAN.vxlan_group_auto:
 +            return ("239." +
 +                    str(segmentation_id >> 16) + "." +
 +                    str((segmentation_id >> 8) % 256) + "." +
 +                    str(segmentation_id % 256))
 +        return cfg.CONF.VXLAN.vxlan_group
 +
  def get_all_neutron_bridges(self):
  neutron_bridge_list = []
  bridge_list = os.listdir(BRIDGE_FS)
 @@ -240,7 +248,7 @@ class LinuxBridgeManager(object):
  
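
 For illustration (not part of the patch), the resulting mapping for a sample
 VNI; note the parentheses around (segmentation_id >> 8) above, which the
 modulo needs to apply to the shifted value:

 # VNI 70000 (0x011170) derives its group address like this:
 vni = 70000
 group = "239.%d.%d.%d" % (vni >> 16, (vni >> 8) % 256, vni % 256)
 print(group)  # -> 239.1.17.112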

[openstack-dev] [fuel] [FFE] FF Exception request for Env Upgrade feature

2015-07-22 Thread Oleg Gelbukh
Team,

I would like to request an exception from the Feature Freeze for
Environment Upgrade extensions added to the Nailgun API [1]. The Nailgun
side of the feature is implemented in the following CRs:

https://review.openstack.org/#/q/status:open+topic:bp/nailgun-api-env-upgrade-extensions,n,z

These changes are implemented as an extension [2] to Nailgun. They barely
touch the core code and don't change the existing functionality.

Please, respond if you have any questions or concerns related to this
request.

Thanks in advance.

[1] https://review.openstack.org/#/c/192551/
[2] https://review.openstack.org/#/q/topic:bp/volume-manager-refactoring,n,z

--
Best regards,
Oleg Gelbukh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor doesn't work

2015-07-22 Thread Changbin Liu
Thanks, Paul and Clay.

By "deleted one data fragment" I meant that I rm'ed only the data file. I did
not delete the hashes.pkl file in the outer directory.

I tried it again, this time deleting both the data file and the hashes.pkl
file. The reconstructor was able to restore the data file correctly.
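
For illustration, the manual test that worked boils down to this (paths are
illustrative placeholders for the real on-disk layout; in normal operation the
auditor, not a human, drives this invalidation):

import os

# Layout sketch: /srv/node/<dev>/objects-<policy>/<part>/<suffix>/<hash>/...
frag = '/srv/node/sdb1/objects-1/1024/f00/<hash>/1437512345.67890#1.data'
part_dir = '/srv/node/sdb1/objects-1/1024'

os.remove(frag)                       # simulate a lost fragment
hashes = os.path.join(part_dir, 'hashes.pkl')
if os.path.exists(hashes):
    os.remove(hashes)                 # make the reconstructor re-hash the partition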

But now I wonder: is it by design that EC does not handle an accidental
deletion of just the data file? Deleting both the data file and the hashes.pkl
file is more like a deliberately created failure case than a normal one.
To me, Swift EC repair seems different from the triple-replication mode,
where if you delete any copy of a data file, it will be restored.



Thanks

Changbin

On Tue, Jul 21, 2015 at 5:28 PM, Luse, Paul E paul.e.l...@intel.com wrote:

  I was about to ask that very same thing and, at the same time, whether you
  can indicate if you've seen errors in any logs; if so, please provide
  those as well.  I'm hoping you just didn't delete the hashes.pkl file
  though :)



 -Paul



  From: Clay Gerrard [mailto:clay.gerr...@gmail.com]
  Sent: Tuesday, July 21, 2015 2:22 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Openstack] [Swift] Erasure coding
 reconstructor doesn't work



  How did you delete "one data fragment"?



  Like replication, the EC consistency engine uses some sub-directory hashing
  to accelerate replication requests in a consistent system - so if you just
  rm a file down in a hashdir somewhere, you also need to delete the
  hashes.pkl up in the part dir (or call the invalidate_hash method like PUT,
  DELETE, POST, and quarantine do)



  Every so often someone discusses the idea of having the auditor invalidate
  a hash after long enough, or take some action on empty hashdirs (mind the
  races!) - but it's really only an issue when someone deletes something by
  hand, so we normally manage to get distracted with other things.



 -Clay



 On Tue, Jul 21, 2015 at 1:38 PM, Changbin Liu changbin@gmail.com
 wrote:

 Folks,



 To test the latest feature of Swift erasure coding, I followed this
 document (
 http://docs.openstack.org/developer/swift/overview_erasure_code.html) to
 deploy a simple cluster. I used Swift 2.3.0.



 I am glad that operations like object PUT/GET/DELETE worked fine. I can
 see that objects were correctly encoded/uploaded and downloaded at proxy
 and object servers.



  However, I noticed that swift-object-reconstructor seemed not to work as
  expected. Here is my setup: my cluster has three object servers, and I use
  this policy:



 [storage-policy:1]

 policy_type = erasure_coding

 name = jerasure-rs-vand-2-1

 ec_type = jerasure_rs_vand

 ec_num_data_fragments = 2

 ec_num_parity_fragments = 1

 ec_object_segment_size = 1048576


   After I uploaded one object, I verified that: there was one data
 fragment on each of two object servers, and one parity fragment on the
 third object server. However, when I deleted one data fragment, no matter
 how long I waited, it never got repaired, i.e., the deleted data fragment
 was never regenerated by the swift-object-reconstructor process.



 My question: is swift-object-reconstructor supposed to be NOT WORKING
 given the current implementation status? Or, is there any configuration I
 missed in setting up swift-object-reconstructor?



 Thanks



 Changbin


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor doesn't work

2015-07-22 Thread Luse, Paul E
Correct, it is by design.  Swift doesn’t expect people to delete things “under 
the covers”. When the auditor finds a corrupted file, it’s the one that 
quarantines it and knows that it also needs to invalidate the hashes.pkl file.  
This mechanism is there to minimize extra ‘stuff’ going on, both at the node and 
on the cluster, when it comes to making sure there is durability in the system.

Wrt why the replication code seems to work if you delete just a .data (again, 
you shouldn’t do this, as files don’t just disappear; the intention is that the 
auditor is in charge here): it’s because of some code in the replicator that I 
didn’t ‘mimic’ in the reconstructor, and it doesn’t look like Clay did either 
when he worked on it.  Not really sure why it was there – it forces a listing 
every 10 passes for some reason.  Clay? (see do_listdir in update() in the 
replicator)

Thx
Paul

From: Changbin Liu [mailto:changbin@gmail.com]
Sent: Wednesday, July 22, 2015 12:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor 
doesn't work

Thanks, Paul and Clay.

By "deleted one data fragment" I meant that I rm'ed only the data file. I did 
not delete the hashes.pkl file in the outer directory.

I tried it again, this time deleting both the data file and the hashes.pkl 
file. The reconstructor was able to restore the data file correctly.

But now I wonder: is it by design that EC does not handle an accidental 
deletion of just the data file? Deleting both the data file and the hashes.pkl 
file is more like a deliberately created failure case than a normal one.  To 
me, Swift EC repair seems different from the triple-replication mode, where if 
you delete any copy of a data file, it will be restored.



Thanks

Changbin

On Tue, Jul 21, 2015 at 5:28 PM, Luse, Paul E paul.e.l...@intel.com wrote:
I was about to ask that very same thing and, at the same time, whether you can 
indicate if you’ve seen errors in any logs; if so, please provide those as 
well.  I’m hoping you just didn’t delete the hashes.pkl file though ☺

-Paul

From: Clay Gerrard [mailto:clay.gerr...@gmail.com]
Sent: Tuesday, July 21, 2015 2:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor 
doesn't work

How did you delete "one data fragment"?

Like replication, the EC consistency engine uses some sub-directory hashing to 
accelerate replication requests in a consistent system - so if you just rm a 
file down in a hashdir somewhere, you also need to delete the hashes.pkl up in 
the part dir (or call the invalidate_hash method like PUT, DELETE, POST, and 
quarantine do)

Every so often someone discusses the idea of having the auditor invalidate a 
hash after long enough, or take some action on empty hashdirs (mind the 
races!) - but it's really only an issue when someone deletes something by hand, 
so we normally manage to get distracted with other things.

-Clay

On Tue, Jul 21, 2015 at 1:38 PM, Changbin Liu changbin@gmail.com wrote:
Folks,

To test the latest feature of Swift erasure coding, I followed this document 
(http://docs.openstack.org/developer/swift/overview_erasure_code.html) to 
deploy a simple cluster. I used Swift 2.3.0.

I am glad that operations like object PUT/GET/DELETE worked fine. I can see 
that objects were correctly encoded/uploaded and downloaded at proxy and object 
servers.

However, I noticed that swift-object-reconstructor seemed not to work as 
expected. Here is my setup: my cluster has three object servers, and I use this 
policy:

[storage-policy:1]
policy_type = erasure_coding
name = jerasure-rs-vand-2-1
ec_type = jerasure_rs_vand
ec_num_data_fragments = 2
ec_num_parity_fragments = 1
ec_object_segment_size = 1048576

After I uploaded one object, I verified that: there was one data fragment on 
each of two object servers, and one parity fragment on the third object server. 
However, when I deleted one data fragment, no matter how long I waited, it 
never got repaired, i.e., the deleted data fragment was never regenerated by 
the swift-object-reconstructor process.

My question: is swift-object-reconstructor supposed to be NOT WORKING given 
the current implementation status? Or, is there any configuration I missed in 
setting up swift-object-reconstructor?

Thanks

Changbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [keystone] token revocation woes

2015-07-22 Thread Adam Young

On 07/22/2015 05:39 PM, Adam Young wrote:

On 07/22/2015 03:41 PM, Morgan Fainberg wrote:
This is an indicator that the bottleneck is not the db strictly 
speaking, but also related to the way we match. This means we need to 
spend some serious cycles on improving both the stored record(s) for 
revocation events and the matching algorithm.


The simplest approach to revocation checking is to do a linear search 
through the events.  I think the old version of the code that did that 
is in a code review, and I will pull it out.


If we remove the tree, then the matching will have to run through each 
of the records and see if there is a match;  the test will be linear 
with the number of records (slightly shorter if a token is actually 
revoked).


This was the original, linear-search version of the code.

https://review.openstack.org/#/c/55908/50/keystone/contrib/revoke/model.py,cm
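
For reference, the linear approach boils down to something like this sketch
(not the actual keystone code; the record fields shown are illustrative):

    def is_revoked(events, token_values):
        # O(len(events)) scan per validation. 'events' and 'token_values'
        # are illustrative dicts here; the real records carry fields like
        # user_id, project_id, audit_id, issued_before, etc.
        for event in events:
            # An event matches if every attribute it constrains agrees
            # with the corresponding token attribute.
            if all(token_values.get(attr) == value
                   for attr, value in event.items()
                   if value is not None):
                return True
        return False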









Sent via mobile

On Jul 22, 2015, at 11:51, Matt Fischer m...@mattfischer.com wrote:



Dolph,

Per our IRC discussion, I was unable to see any performance 
improvement here, although not calling DELETE so often will reduce 
the number of deadlocks when we're under heavy load, especially given 
the globally replicated DB we use.




On Tue, Jul 21, 2015 at 5:26 PM, Dolph Mathews 
dolph.math...@gmail.com wrote:


Well, you might be in luck! Morgan Fainberg actually implemented
an improvement that was apparently documented by Adam Young way
back in March:

https://bugs.launchpad.net/keystone/+bug/1287757

There's a link to the stable/kilo backport in comment #2 - I'd
be eager to hear how it performs for you!

On Tue, Jul 21, 2015 at 5:58 PM, Matt Fischer
m...@mattfischer.com wrote:

Dolph,

Excuse the delayed reply, was waiting for a brilliant
solution from someone. Without one, personally I'd prefer
the cronjob as it seems to be the type of thing cron was
designed for. That will be a painful change as people now
rely on this behavior so I don't know if its feasible. I
will be setting up monitoring for the revocation count and
alerting me if it crosses probably 500 or so. If the problem
gets worse then I think a custom no-op or sql driver is the
next step.

Thanks.


On Wed, Jul 15, 2015 at 4:00 PM, Dolph Mathews
dolph.math...@gmail.com
wrote:



On Wed, Jul 15, 2015 at 4:51 PM, Matt Fischer
m...@mattfischer.com wrote:

I'm having some issues with keystone revocation
events. The bottom line is that due to the way
keystone handles the clean-up of these events[1],
having more than a few leads to:

 - bad performance, up to 2x slower token validation
with about 600 events based on my perf measurements.
 - database deadlocks, which cause API calls to
fail, more likely with more events it seems

I am seeing this behavior in code from trunk on June
11 using Fernet tokens, but the token backend does
not seem to make a difference.

Here's what happens to the db in terms of deadlock:
2015-07-15 21:25:41.082 31800 TRACE
keystone.common.wsgi DBDeadlock: (OperationalError)
(1213, 'Deadlock found when trying to get lock; try
restarting transaction') 'DELETE FROM
revocation_event WHERE revocation_event.revoked_at <
%s' (datetime.datetime(2015, 7, 15, 18, 55, 41, 55186),)

When this starts happening, I just go truncate the
table, but this is not ideal. If [1] is really true
then the design is not great, it sounds like
keystone is doing a revocation event clean-up on
every token validation call. Reading and
deleting/locking from my db cluster is not something
I want to do on every validate call.


Unfortunately, that's *exactly* what keystone is doing.
Adam and I had a conversation about this problem in
Vancouver which directly resulted in opening the bug
referenced on the operator list:

https://bugs.launchpad.net/keystone/+bug/1456797

Neither of us remembered the actual implemented
behavior, which is what you've run into and Deepti
verified in the bug's comments.


So, can I turn off token revocation for now? I didn't
see an obvious no-op driver.


Not sure how, other than writing your own no-op driver,
or perhaps an extended driver that doesn't try to clean
the table on every validation call.
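
A no-op driver is probably only a few lines, something like this sketch
(the method names are assumptions modeled on the SQL backend - check the
actual abstract revoke driver before using it):

    # Accept revocations but never persist or prune anything, so token
    # validation never touches the revocation table.
    class NoopRevokeDriver(object):

        def revoke(self, event):
            # Drop the event on the floor.
            pass

        def list_events(self, last_fetch=None):
            # No stored events -> nothing ever matches as revoked.
            return []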

Re: [openstack-dev] [Neutron][Kuryr] - Bringing Dockers networking to Neutron

2015-07-22 Thread Mohammad Banikazemi

Gal, This time conflicts with the Neutron ML2 weekly meeting time [1]. I
realize there are several networking-related weekly meetings, but I would
like to see if we can find a different time. I suggest the same time, that
is, 1600 UTC, but on Mondays or Thursdays instead.

Please note there is the Ironic/Neutron Integration meeting at the same
time on Mondays, but that may be a smaller conflict. Looking at the
meetings listed at [2], I do not see any other conflict.

Best,

Mohammad

[1] https://wiki.openstack.org/wiki/Meetings/ML2
[2] http://eavesdrop.openstack.org




From:   Gal Sagie gal.sa...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org, Eran Gampel
eran.gam...@toganetworks.com, Antoni Segura Puimedon
t...@midokura.com, Irena Berezovsky ir...@midokura.com
Date:   07/22/2015 12:31 PM
Subject:[openstack-dev] [Neutron][Kuryr] - Bringing Dockers networking
to  Neutron




Hello Everyone,

Project Kuryr is now officially part of Neutron's big tent.
Kuryr is aimed to be used as a generic docker remote driver that connects
docker to Neutron API's
and provide containerised images for the common Neutron plugins.
We also plan on providing common additional networking services API's from
other sub projects
in the Neutron big tent.

We hope to get everyone on board with this project and leverage this joint
effort for the many different solutions out there (instead of everyone
re-inventing the wheel for each different project).

We want to start doing a weekly IRC meeting to coordinate the different
requirements and
tasks, so anyone that is interested to participate please share your time
preference
and we will try to find the best time for the majority.

Remember we have people in Europe, Tokyo and US, so we won't be able to
find time that fits
everyone.

The currently proposed time is  *Wednesday at 16:00 UTC *

Please reply with your suggested time/day,
Hope to see you all, we have an interesting and important project ahead of
us

Thanks
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][magnum] Removal of Daneyon Hansen from the Core Reviewer team for Kolla

2015-07-22 Thread Daneyon Hansen (danehans)
Steve,

Thanks for sharing the message with the Kolla community. I appreciate the 
opportunity to work on the project. It was great meeting several members of the 
community at the Vancouver DS and I look forward to meeting others at the mid 
cycle. Containers is a small world (for now) and I’m sure we’ll cross paths 
again. I'll continue using Kolla for Magnum development, so you’ll still see me 
from time-to-time. Best wishes to the Kolla community!

Regards,
Daneyon Hansen

From: Steven Dake (stdake) std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, July 22, 2015 at 1:47 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla][magnum] Removal of Daneyon Hansen from the 
Core Reviewer team for Kolla

Fellow Kolla developers,

Daneyon has been instrumental in getting Kolla rolling and keeping our project 
alive.  He even found me a new job that would pay my mortgage and Panamera 
payment so I could continue performing as PTL for Kolla and get Magnum off the 
ground.  But Daneyon has conferred with me that he has a personal objective of 
getting highly involved in the Magnum project and leading the container 
networking initiative coming out of Magnum.  For a sample of his new personal 
mission:

https://review.openstack.org/#/c/204686/

I’m a bit sad to lose Daneyon to Magnum, but life is short and not sweet 
enough.  I personally feel people should do what makes them satisfied and happy 
professionally.  Daneyon will still be present at the Kolla midcycle and 
contribute to our talk (if selected by the community) in Tokyo.  I expect 
Daneyon will make a big impact in Magnum, just as he has with Kolla.

In the future if Daneyon decides he wishes to re-engage with the Kolla project, 
we will welcome him with open arms because Daneyon rocks and does super high 
quality work.

NB Typically we would vote on removal of a core reviewer, unless they wish to 
be removed to focus on other projects.  Since that is the case here, there 
is no vote necessary.

Please wish Daneyon well in his adventures in Magnum territory and pray he 
comes back when he finishes the job on Magnum networking :)

Regards
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Status of identity v3 support

2015-07-22 Thread Adam Young

On 07/15/2015 06:38 PM, melanie witt wrote:

Hi Everyone,

Recently I have started reviewing the patch series about nested quotas in nova [1] and 
I'm having trouble understanding where we currently are with identity v3 support in nova. 
From what I read in a semi recent proposal [2] I think things mostly just 
work if you configure to run with v3, but there are some gaps.

Nested quotas use the concept of parent/child projects in keystone v3 to allow 
parent projects to delegate quota management to subprojects. This means we'd 
start getting requests with a token scoped to the parent project to modify 
quota of a child project.


Don't think of it as a nested project in this case... The project that 
is having its quota set is a resource inside the project that the user 
authenticated to.



If Parent -> child... you should not be able to scope to anything but the 
parent project in order to be able to set the quota for the child.


But, you should not be able to do anything inside of child with a token 
scoped to parent.


The one place this gets tricky is that every project should be able to 
query its own quota.  But that is true now, is it not?
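
Roughly, I'd expect the check to become something like this (a sketch only;
the parent lookup via the keystone v3 client is an assumption - verify it
against the real client API):

    def may_manage_quota(context, target_project_id, keystone):
        # A project can always query its own quota.
        if context.project_id == target_project_id:
            return True
        # Otherwise only allow a token scoped to the direct parent of
        # the target project (hypothetical v3 client call).
        parent_id = keystone.projects.get(target_project_id).parent_id
        return parent_id == context.project_id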





With keystone v3 we could get requests with tokens scoped to parent projects 
that act upon child project resources for all APIs in general.

The first patch in the series [3] removes the top-level validation check for 
context.project_id != project_id in URL, since with v3 it's a supported thing 
for a parent project to act on child project resources. (I don't think it's 
completely correct in that I think it would allow unrelated projects to act on 
one another)

Doing this fails the keypairs and security groups tempest tests [4] that verify 
that one project cannot create keypairs or security group rules in a different 
project.

Question: How can we handle project_id mismatch in a way that supports both keystone v2 
and v3? Do we augment the check to fall back on checking "is parent of" 
using the keystone API if there's a project_id mismatch?

Question: Do we intend to, for example, allow creation of keypairs by a parent 
on behalf of child being that the private key is returned to the caller?

Basically, I feel stuck on these reviews because it appears to me that nova 
doesn't fully support identity v3 yet. From what I checked, there aren't yet 
Tempest jobs running against identity v3 either.

Can anyone shed some light on this as I'm trying to see a way forward with the 
nested quotas reviews?

Thanks,
-melanie (irc: melwitt)


[1] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-quota-driver-api,n,z
[2] https://review.openstack.org/#/c/103617/
[3] https://review.openstack.org/182140/
[4] 
http://logs.openstack.org/40/182140/12/check/check-tempest-dsvm-full/8e51c94/logs/testr_results.html.gz



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] SSL feature status

2015-07-22 Thread Mike Scherbakov
Sergii - I hope there will be no need to turn it off. Let's just enable it
asap and test against it.

Thanks,

On Wed, Jul 22, 2015 at 8:28 AM Sergii Golovatiuk sgolovat...@mirantis.com
wrote:

 Sheena,

 We may turn off SSL right before the release. However, to test it better,
 I would turn it on. In this case every CI run will be helping to test it
 better.

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Wed, Jul 22, 2015 at 5:51 AM, Sheena Gregson sgreg...@mirantis.com
 wrote:

 I believe the last time we discussed this, the majority of people were in
 favor of enabling SSL by default for all public endpoints, which would be
 my recommendation.



 As a reminder, this will mean that users will see a certificate warning
 the first time they access the Fuel UI.  We should document this as a known
 user experience and provide instructions for users to swap out the
 self-signed certificates that are enabled by default for their own internal
 CA certificates/3rd party certificates.



 *From:* Mike Scherbakov [mailto:mscherba...@mirantis.com]
 *Sent:* Wednesday, July 22, 2015 1:12 AM
 *To:* Stanislaw Bogatkin; Sheena Gregson
 *Cc:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [Fuel] SSL feature status



 Thanks Stas. My opinion is that it has to be enabled by default. I'd like
 product management to shine in here. Sheena?





 On Tue, Jul 21, 2015 at 11:06 PM Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi,



 we have a spec that says we disable SSL by default, and it was
 successfully merged with that; no one was against such a decision. So, if we
 want to enable it by default now - we can. It should be done as a part of
 our usual process, I think - I'll create a bug for it and fix it.



 Current status of feature is next:

 1. All codebase for SSL is merged

 2. Some tests for it are being written by QA - not all of them are done yet.



 I'll update the blueprints as soon as possible. Sorry for the inconvenience.



 On Mon, Jul 20, 2015 at 8:44 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Hi guys,

 did we enable SSL for Fuel Master node and OpenStack REST API endpoints
 by default? If not, let's enable it by default. I don't know why we should
 not.



 Looks like we need to update blueprints as well [1], [2], as they don't
 seem to reflect current status of the feature.



 [1] https://blueprints.launchpad.net/fuel/+spec/ssl-endpoints

 [2] https://blueprints.launchpad.net/fuel/+spec/fuel-ssl-endpoints



 Stas, as you've been working on it, can you please provide current status?



 Thanks,



 --

 Mike Scherbakov
 #mihgen



 --

 Mike Scherbakov
 #mihgen

 __


 OpenStack Development Mailing List (not for usage questions)

 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][magnum] Removal of Daneyon Hansen from the Core Reviewer team for Kolla

2015-07-22 Thread Steven Dake (stdake)
Fellow Kolla developers,

Daneyon has been instrumental in getting Kolla rolling and keeping our project 
alive.  He even found me a new job that would pay my mortgage and Panamera 
payment so I could continue performing as PTL for Kolla and get Magnum off the 
ground.  But Daneyon has conferred with me that he has a personal objective of 
getting highly involved in the Magnum project and leading the container 
networking initiative coming out of Magnum.  For a sample of his new personal 
mission:

https://review.openstack.org/#/c/204686/

I’m a bit sad to lose Daneyon to Magnum, but life is short and not sweet 
enough.  I personally feel people should do what makes them satisfied and happy 
professionally.  Daneyon will still be present at the Kolla midcycle and 
contribute to our talk (if selected by the community) in Tokyo.  I expect 
Daneyon will make a big impact in Magnum, just as he has with Kolla.

In the future if Daneyon decides he wishes to re-engage with the Kolla project, 
we will welcome him with open arms because Daneyon rocks and does super high 
quality work.

NB Typically we would vote on removal of a core reviewer, unless they wish to 
be removed to focus on other projects.  Since that is the case here, there 
is no vote necessary.

Please wish Daneyon well in his adventures in Magnum territory and pray he 
comes back when he finishes the job on Magnum networking :)

Regards
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][barbican] Setting a live debug session time

2015-07-22 Thread Adrian Otto
Thank you, Ade for your help.

Regards,

Adrian Otto

 On Jul 22, 2015, at 10:55 AM, Ade Lee a...@redhat.com wrote:
 
 On Tue, 2015-07-21 at 23:42 -0400, Ade Lee wrote:
 So, as discussed on #irc, I plan to:
 
 1. Check with folks who are running in a devstack environment as to
 where/how their barbican.conf file is configured.
 
 2. Will keep you updated as to the progress of dogtag packaging in
 Ubuntu/Debian.  Currently, there are a couple of bugs due to changes in
 tomcat.  These should be resolved this week - and Dogtag will be back in
 sid.
 
 3.  Will send you script that is used to configure Barbican with Dogtag
 in the Barbican Dogtag gate.
 
 You can see the Dogtag barbican script in this CR ..
 https://review.openstack.org/#/c/202146/15/contrib/devstack/lib/barbican,cm
 
 Essentially, the steps are:
 -- install dogtag packages
 -- run pkispawn a couple of times to create the ca and kra
 -- copy some files to /etc/barbican
 -- modify barbican.conf
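 
 In rough Python/subprocess form, the pkispawn part is just (a sketch - the
 answer-file paths are made up; the real ones come from the CR above):
 
     import subprocess
 
     # One pkispawn run per subsystem: a CA, then a KRA.
     for subsystem, answers in (('CA', '/tmp/ca.cfg'), ('KRA', '/tmp/kra.cfg')):
         subprocess.check_call(['pkispawn', '-s', subsystem, '-f', answers])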
 
 Also, the bug in the snakeoil plugin has been fixed -- 
 https://review.openstack.org/#/c/204704/
 
 so no need to patch it.
 
 Ade
 
 Ade
 
 On Tue, 2015-07-21 at 09:05 +0900, Madhuri wrote:
 Hi Alee,
 
 Thank you for showing up for help.
 
 The proposed timing suits me. It would be 10:30 am JST for me.
 
 I am madhuri on #freenode.
 Will we be discussing on #openstack-containers?
 
 Sdake,
 Thank you for setting up this.
 
 Regards,
 Madhuri
 
 
 On Mon, Jul 20, 2015 at 11:26 PM, Ade Lee a...@redhat.com wrote:
Madhuri,
 
I understand that you are somewhere in APAC.  Perhaps it would
be best
to set up a debugging session on Tuesday night  -- at 9:30 pm
EST
 
This would correspond to 01:30:00 a.m. GMT (Wednesday), which
should
correspond to sometime in the morning for you.
 
We can start with the initial goal of getting the snake oil
plugin
working for you, and then see where things are going wrong in
the Dogtag
install.
 
Will that work for you?  What is your IRC nick?
Ade
 
(ps. I am alee on #freenode and can be found on either
#openstack-barbican or #dogtag-pki)
 
 01:30:00 a.m. Tuesday July 21, 2015 in GMT
On Fri, 2015-07-17 at 14:39 +, Steven Dake (stdake) wrote:
 Madhuri,
 
 
 Alee is in EST timezone (gmt-5 IIRC).  Alee will help you
get barbican
 rolling.  Can you two folks set up a time to chat on irc on
Monday or
 tuesday?
 
 
 Thanks
 -steve
 
 
 
 
 
 
 

 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] FF Exception request for Templates for Networking feature

2015-07-22 Thread Aleksey Kasatkin
Team,

I would like to request an exception from the Feature Freeze for Templates
for Networking feature [1].

An exception is required for two CRs to python-fuelclient ([2], [3]) and one CR
to fuel-web (Nailgun) [4].
These CRs are for adding the ability to create/remove networks via API [4] and
for supporting the new API functionality via CLI.
These patchsets add new templates-related functionality and do not change
existing functionality.
Patchsets [3], [4] are in deep review and will hopefully be merged on
Thursday.

Please, respond if you have any questions or concerns related to this
request.

Thanks in advance.

[1] https://blueprints.launchpad.net/fuel/+spec/templates-for-networking
[2] https://review.openstack.org/#/c/204321/
[3] https://review.openstack.org/#/c/203602/
[4] https://review.openstack.org/#/c/201217/

--
Best regards,
Aleksey Kasatkin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Moving instack upstream

2015-07-22 Thread James Slagle
On Tue, Jul 21, 2015 at 3:29 PM, Derek Higgins der...@redhat.com wrote:
 Hi All,
 Something we discussed at the summit was to switch the focus of tripleo's
 deployment method to deploying with instack, using images built with
 tripleo-puppet-elements. Up to now all the instack work has been done
 downstream of tripleo as part of rdo. Having parts of our deployment story
 outside of upstream gives us problems mainly because it becomes very
 difficult to CI what we expect deployers to use while we're developing the
 upstream parts.

 Essentially what I'm talking about here is pulling instack-undercloud
 upstream along with a few of its dependency projects (instack,
 tripleo-common, tuskar-ui-extras etc..) into tripleo and using them in our
 CI in place of devtest.

 Getting our CI working with instack is close to done, but it has taken longer
 than I expected because of various complications and distractions. I hope
 to have something over the next few days that we can use to replace devtest
 in CI, in a lot of ways this will start out by taking a step backwards but
 we should finish up in a better place where we will be developing (and
 running CI on) what we expect deployers to use.

 Once I have something that works I think it makes sense to drop the jobs
 undercloud-precise-nonha and overcloud-precise-nonha, while switching
 overcloud-f21-nonha to use instack, this has a few effects that need to be
 called out

 1. We will no longer be running CI on (and as a result not supporting) most
 of the the bash based elements
 2. We will no longer be running CI on (and as a result not supporting)
 ubuntu

 Should anybody come along in the future interested in either of these things
 (and prepared to put the time in) we can pick them back up again. In fact
 the move to puppet element based images should mean we can more easily add
 in extra distros in the future.

 3. While we find our feet we should remove all tripleo-ci jobs from non
 tripleo projects, once we're confident with it we can explore adding our
 jobs back into other projects again

 Nothing has changed yet. In order to check we're all on the same page, this is a
 high-level outline of how I see things proceeding, so shout now if I got
 anything wrong or you disagree.


+1 on the plan from me.

Note that instack-undercloud is still using a few of the bash based
elements. It's mostly due to no one having a chance yet to finish the
full conversion over to the puppet driven approach, and some
additional puppet module work that's likely needed. I think it'll be
easier to clean these up and finish the conversion, so we can
deprecate the old elements once we get it covered via tripleo CI.

 Sorry for not sending this out sooner for those of you who weren't at the
 summit,
 Derek.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][magnum] Removal of Daneyon Hansen from the Core Reviewer team for Kolla

2015-07-22 Thread Sam Yaple
Daneyon,

We haven't had much overlap here in Kolla, but our interactions have always
been pleasant and informative. You are clearly a very smart and driven guy.

Good luck with Magnum. Hopefully you will see me around Magnum more in the
future as well, I expect great things!

Sam Yaple

On Wed, Jul 22, 2015 at 3:47 PM, Steven Dake (stdake) std...@cisco.com
wrote:

  Fellow Kolla developers,

  Daneyon has been instrumental in getting Kolla rolling and keeping our
 project alive.  He even found me a new job that would pay my mortgage and
 Panamera payment so I could continue performing as PTL for Kolla and get
 Magnum off the ground.  But Daneyon has conferred with me that he has a
 personal objective of getting highly involved in the Magnum project and
 leading the container networking initiative coming out of Magnum.  For a
 sample of his new personal mission:

  https://review.openstack.org/#/c/204686/

  I’m a bit sad to lose Daneyon to Magnum, but life is short and not sweet
 enough.  I personally feel people should do what makes them satisfied and
 happy professionally.  Daneyon will still be present at the Kolla midcycle
 and contribute to our talk (if selected by the community) in Tokyo.  I
 expect Daneyon will make a big impact in Magnum, just as he has with Kolla.

  In the future if Daneyon decides he wishes to re-engage with the Kolla
 project, we will welcome him with open arms because Daneyon rocks and does
 super high quality work.

  NB Typically we would vote on removal of a core reviewer, unless they
 wish to be removed to focus on on other projects.  Since that is the case
 here, there is no vote necessary.

  Please wish Daneyon well in his adventures in Magnum territory and prey
 he comes back when he finishes the job on Magnum networking :)

  Regards
 -steve



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Kuryr][Kolla] - Bringing Dockers networking to Neutron

2015-07-22 Thread Sam Yaple
Thanks for bringing this to my attention, Kevin! I might not have seen this if
my filter hadn't caught 'Kolla'.

I am very interested in this. In Kolla and related projects we containerized
all the Neutron components near the beginning of the year. We too have no
wish to get into a situation where we are re-inventing the wheel for each
different project.

Unfortunately, I am not seeing much documentation about this project. I
would love to read more about the current status and plans so we can both
contribute to each other!

Sam Yaple

On Wed, Jul 22, 2015 at 2:04 PM, Fox, Kevin M kevin@pnnl.gov wrote:

  Awesome. :)

 Dockerization of Neutron plugins is already in scope of the Kolla project.
 Might want to coordinate the effort with the Kolla team.

 Thanks,
 Kevin

 --
 *From:* Gal Sagie
 *Sent:* Wednesday, July 22, 2015 9:28:50 AM
 *To:* OpenStack Development Mailing List (not for usage questions); Eran
 Gampel; Antoni Segura Puimedon; Irena Berezovsky
 *Subject:* [openstack-dev] [Neutron][Kuryr] - Bringing Dockers networking
 to Neutron


  Hello Everyone,

 Project Kuryr is now officially part of Neutron's big tent.
 Kuryr is aimed to be used as a generic docker remote driver that connects
 docker to Neutron API's
 and provide containerised images for the common Neutron plugins.
 We also plan on providing common additional networking services API's from
 other sub projects
 in the Neutron big tent.

  We hope to get everyone on board with this project and leverage this
 joint effort for the many different solutions out there (instead of
 everyone re-inventing the wheel for each different project).

  We want to start doing a weekly IRC meeting to coordinate the different
 requirements and
 tasks, so anyone that is interested to participate please share your time
 preference
 and we will try to find the best time for the majority.

  Remember we have people in Europe, Tokyo and US, so we won't be able to
 find time that fits
 everyone.

  The currently proposed time is  *Wednesday at 16:00 UTC *

  Please reply with your suggested time/day,
 Hope to see you all, we have an interesting and important project ahead of
 us

  Thanks
 Gal.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-22 Thread Assaf Muller


- Original Message -
 
 
 The issue with the availability zone solution is that we now force
 availability zones in Nova to be constrained to network configuration. In
 the L3 ToR/no overlay configuration, this means every rack is its own
 availability zone. This is pretty annoying for users to deal with because
 they have to choose from potentially hundreds of availability zones and it
 rules out making AZs based on other things (e.g. current phase, cooling
 systems, etc).
 
 I may be misunderstanding and you could be suggesting to not expose this
 availability zone to the end user and only make it available to the
 scheduler. However, this defeats one of the purposes of availability zones
 which is to let users select different AZs to spread their instances across
 failure domains.

No, you understood me correctly. You're right that it is problematic tying
AZs to network availability. We should introduce a new Neutron API then to
expose physnet mappings: For a given network, spit out all of the hosts
that can reach that network (Internally the Neutron server persists the physnet 
mappings
we get from agent reports). That API call will serve as a Nova filter in
case a network/port_id was requested when booting a VM. If a network/port_id
was not specified, another API call will be used for the inverse: return a list
of possible networks for a given host, or the mappings between all hosts
and their networks' reachability. So, for example:
neutron list-host-networks (Mappings between hosts and their networks)
neutron show-hosts-networks (Mapping between a host and its networks)
neutron show-network-hosts (Mapping between a network and what hosts can reach it).
neutron list-networks-hosts (Mapping between networks and their hosts).

Everything else I wrote remains the same.
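
To make the filter side concrete, it would amount to something like this
(a rough sketch - nothing like this exists yet, and both the mapping format
and the filter hook are assumptions, not Nova's actual filter interface):

    # Hypothetical Nova-side filter fed by the proposed Neutron API.
    class NetworkReachabilityFilter(object):

        def __init__(self, network_hosts):
            # {network_id: set of hostnames that can reach that network},
            # i.e. the output of 'neutron show-network-hosts' above.
            self.network_hosts = network_hosts

        def host_passes(self, hostname, requested_network_id):
            return hostname in self.network_hosts.get(requested_network_id,
                                                      set())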


 On Jul 22, 2015 2:41 PM, Assaf Muller  amul...@redhat.com  wrote:
 
 
 I added a summary of my thoughts about the enhancements I think we could
 make to the Nova scheduler in order to better support the Neutron provider
 networks use case.
 
 - Original Message -
  On Tue, Jul 21, 2015 at 1:11 PM, John Belamaric  jbelama...@infoblox.com 
  wrote:
   Wow, a lot to digest in these threads. If I can summarize my
   understanding
   of the two proposals. Let me know whether I get this right. There are a
   couple problems that need to be solved:
   
   a. Scheduling based on host reachability to the segments
   b. Floating IP functionality across the segments. I am not sure I am
   clear
   on this one but it sounds like you want the routers attached to the
   segments
   to advertise routes to the specific floating IPs. Presumably then they
   would
   do NAT or the instance would assign both the fixed IP and the floating IP
   to
   its interface?
   
   In Proposal 1, (a) is solved by associating segments to the front network
   via a router - that association is used to provide a single hook into the
   existing API that limits the scope of segment selection to those
   associated
   with the front network. (b) is solved by tying the floating IP ranges to
   the
   same front network and managing the reachability with dynamic routing.
   
   In Proposal 2, (a) is solved by tagging each network with some meta-data
   that the IPAM system uses to make a selection. This implies an IP
   allocation
   request that passes something other than a network/port to the IPAM
   subsystem. This fine from the IPAM point of view but there is no
   corresponding API for this right now. To solve (b) either the IPAM system
   has to publish the routes or the higher level management has to ALSO be
   aware of the mappings (rather than just IPAM).
  
  John, from your summary above, you seem to have the best understanding
  of the whole of what I was weakly attempting to communicate. Thank
  you for summarizing.
  
   To throw some fuel on the fire, I would argue also that (a) is not
   sufficient and address availability needs to be considered as well (as
   described in [1]). Selecting a host based on reachability alone will fail
   when addresses are exhausted. Similarly, with (b) I think there needs to
   be
   consideration during association of a floating IP to the effect on
   routing.
   That is, rather than a huge number of host routes it would be ideal to
   allocate the floating IPs in blocks that can be associated with the
   backing
   networks (though we would want to be able to split these blocks as small
   as
   a /32 if necessary - but avoid it/optimize as much as possible).
  
  Yes, address availability is a factor and must be considered in either
  case. My email was getting long already and I thought that could be
  considered separately since I believe it applies regardless of the
  outcome of this thread. But, since it seems to be an essential part
  of this conversation, let me say something about it.
  
  Ultimately, we need to match up the host scheduled by Nova to the
  addresses available to that host. We could do this by delaying
  address assignment until after host binding or we could do it by
  including segment information from Neutron during scheduling.

[openstack-dev] [Tricircle]Weekly Team Meeting 2015.07.22 Roundup

2015-07-22 Thread Zhipeng Huang
Hi Team,

Thanks again for attending the meeting yesterday. We have quite a lot of AIs
left (3), and hopefully we can get most of them done by the time of the next
meeting.

The meetbot minutes are here:
http://eavesdrop.openstack.org/meetings/tricircle/2015/tricircle.2015-07-22-13.05.html,
and as usual, please find attached a noise-cancelled chatlog covering anything
that wasn't caught by the meetbot.

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Tricircle Team Meeting 2015.07.22.docx
Description: MS-Word 2007 document
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] future direction on partial upgrade support

2015-07-22 Thread Sean Dague
On 07/15/2015 01:30 PM, Auld, Will wrote:
 Sean,
 
 OK, moving thread to openstack-dev. 
 
 We'd like to help with this work if there is more to do. What are the next 
 steps and what areas need help?
 
 Thanks,
 
 Will

Thanks. Here is the current status:

Grenade has been updated to have a post-stack.sh stage, which is the
ability to call an arbitrary other script after it brings up the main
stack.sh. This provides the hook for devstack-gate to ask grenade to do
additional things. This is landed here -
https://github.com/openstack-dev/grenade/commit/eafab2b653a5133f4835b9709df0854a4de77791

There is a gate-grenade-dsvm-multinode up and running as experimental on
devstack-gate and grenade -
https://github.com/openstack-infra/project-config/blob/a987f4d81466f008e106f6f67e0d072dc0c525e2/jenkins/jobs/devstack-gate.yaml#L1896-L1926

There is a devstack-gate patch up for review -
https://review.openstack.org/#/c/199091/. This now works for nova net
and partial update by upgrading everything on the main node, and not
touching the subnode after initial post-stack.sh.

Next Steps

* the devstack-gate patch needs real review, it hasn't gotten much of
that yet.
* move this out of experimental to check on devstack-gate
* extend this approach with neutron grenade multinode jobs

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-22 Thread Carl Baldwin
On Tue, Jul 21, 2015 at 1:11 PM, John Belamaric jbelama...@infoblox.com wrote:
 Wow, a lot to digest in these threads. If I can summarize my understanding
 of the two proposals. Let me know whether I get this right. There are a
 couple problems that need to be solved:

  a. Scheduling based on host reachability to the segments
  b. Floating IP functionality across the segments. I am not sure I am clear
 on this one but it sounds like you want the routers attached to the segments
 to advertise routes to the specific floating IPs. Presumably then they would
 do NAT or the instance would assign both the fixed IP and the floating IP to
 its interface?

 In Proposal 1, (a) is solved by associating segments to the front network
 via a router - that association is used to provide a single hook into the
 existing API that limits the scope of segment selection to those associated
 with the front network. (b) is solved by tying the floating IP ranges to the
 same front network and managing the reachability with dynamic routing.

 In Proposal 2, (a) is solved by tagging each network with some meta-data
 that the IPAM system uses to make a selection. This implies an IP allocation
 request that passes something other than a network/port to the IPAM
 subsystem. This fine from the IPAM point of view but there is no
 corresponding API for this right now. To solve (b) either the IPAM system
 has to publish the routes or the higher level management has to ALSO be
 aware of the mappings (rather than just IPAM).

John, from your summary above, you seem to have the best understanding
of the whole of what I was weakly attempting to communicate.  Thank
you for summarizing.

 To throw some fuel on the fire, I would argue also that (a) is not
 sufficient and address availability needs to be considered as well (as
 described in [1]). Selecting a host based on reachability alone will fail
 when addresses are exhausted. Similarly, with (b) I think there needs to be
 consideration during association of a floating IP to the effect on routing.
 That is, rather than a huge number of host routes it would be ideal to
 allocate the floating IPs in blocks that can be associated with the backing
 networks (though we would want to be able to split these blocks as small as
 a /32 if necessary - but avoid it/optimize as much as possible).

Yes, address availability is a factor and must be considered in either
case.  My email was getting long already and I thought that could be
considered separately since I believe it applies regardless of the
outcome of this thread.  But, since it seems to be an essential part
of this conversation, let me say something about it.

Ultimately, we need to match up the host scheduled by Nova to the
addresses available to that host.  We could do this by delaying
address assignment until after host binding or we could do it by
including segment information from Neutron during scheduling.  The
latter has the advantage that we can consider IP availability during
scheduling.  That is why GoDaddy implemented it that way.
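
In pseudo-Python, that approach amounts to something like this (illustrative
only - all three inputs are hypothetical data the scheduler would get from
Neutron):

    def pick_host(candidate_hosts, segment_of_host, free_ips_in_segment):
        # segment_of_host: {hostname: segment_id}
        # free_ips_in_segment: {segment_id: count of unallocated addresses}
        for host in candidate_hosts:
            segment = segment_of_host[host]
            if free_ips_in_segment.get(segment, 0) > 0:
                return host  # reachable segment with addresses to spare
        raise LookupError('no host with available addresses')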

 In fact, I think that these proposals are more or less the same - it's just
 in #1 the meta-data used to tie the backing networks together is another
 network. This allows it to fit in neatly with the existing APIs. You would
 still need to implement something prior to IPAM or within IPAM that would
 select the appropriate backing network.

They are similar but to say they're the same is going a bit too far.
If they were the same then we'd be done with this conversation.  ;)

 As a (gulp) third alternative, we should consider that the front network
 here is in essence a layer 3 domain, and we have modeled layer 3 domains as
 address scopes in Liberty. The user is essentially saying give me an
 address that is routable in this scope - they don't care which actual
 subnet it gets allocated on. This is conceptually more in-line with [2] -
 modeling L3 domain separately from the existing Neutron concept of a network
 being a broadcast domain.

I will consider this some more.  This is an interesting thought.
Address scopes and subnet pools could play a role here.  I don't yet
see how it can all fit together but it is worth some thought.

One nit:  the neutron network might have been conceived as being just
a broadcast domain but, in practice, it is L2 and L3.  The Neutron
subnet is not really an L3 construct; it is just a cidr and doesn't
make sense on its own without considering its association with a
network and the other subnets associated with the same network.

 Fundamentally, however we associate the segments together, this comes down
 to a scheduling problem. Nova needs to be able to incorporate data from
 Neutron in its scheduling decision. Rather than solving this with a single
 piece of meta-data like network_id as described in proposal 1, it probably
 makes more sense to build out the general concept of utilizing network data
 for nova scheduling. We could still model this as in #1, or using 

[openstack-dev] [Neutron]Add dns and dhcp log into DHCP agent

2015-07-22 Thread Zhi Chang
hi, all
I have a patch which adds dns and dhcp logging to the DHCP agent. Patch: 
https://review.openstack.org/#/c/202855. My approach is to put this log into the 
dnsmasq process folder (/opt/stack/data/neutron/dhcp/network-id/dhcp_dns_log). 
What do you think of the patch? Please review it.
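
For context, the idea is to point dnsmasq's own logging at a per-network
file - roughly like this sketch (not the actual patch; see the CR above for
the real implementation):

    import os

    def dns_dhcp_log_opts(confs_dir, network_id):
        # dnsmasq options to log DNS queries and DHCP transactions
        # to a per-network file.
        log_path = os.path.join(confs_dir, network_id, 'dhcp_dns_log')
        return [
            '--log-queries',             # log DNS lookups
            '--log-dhcp',                # log DHCP option negotiation
            '--log-facility=%s' % log_path,
        ]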


thx
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-tc][deb-packaging] Updated proposal for OpenStack deb packages

2015-07-22 Thread Igor Yozhikov
Hello again to everyone. I am resending this email due to a typo in the subject of
the previous one; apologies for that.

*Introductory words:*
 I want to present a renewed proposal for packaging of OpenStack components
for deb-based Linux distributions.
In case of stackforge retirement, I believe that new repositories for deb
specs should appear under the */openstack/* name-space instead of
*/stackforge/*.
This and further steps were also advised by dhellmann, anteaya and angdraug
at the #openstack-meeting irc channel during/after the Technical Committee meeting
http://paste.openstack.org/show/399927/ yesterday, 21 of Jul 2015.
It would also be great to discuss all of this during the next TC meeting on July
28th.
Link to the previous email:
http://lists.openstack.org/pipermail/openstack-dev/2015-July/069377.html

*Renewed proposal:*
 During the Liberty summit in Vancouver, the idea of maintaining OpenStack
packages for Debian and Ubuntu on upstream infra was sparked. As part of the
OpenStack and Debian community, we still want to push for it. In case of
stackforge retirement it will be awesome to be able to work directly under
the */openstack/* name-space. Mostly, it is the server packages for Debian
which we want to see maintained there. All of the dependencies (including
Oslo libraries and python-*client) will continue to be maintained using
git.debian.org, as a shared effort between Debian and Ubuntu.

Purpose:

   - One centralized place at github.com*/openstack/* to maintain package
   build manifests/specs for the main OpenStack projects.

   - */openstack/* already has gerrit, to which additional gates for building
   and testing OpenStack packages for deb-based Linux distributions could be
   attached.

   - All changes related to the way one or another OpenStack project should
   be installed would be represented or proposed in one place. So not only
   famous package maintainers but also the entire community interested in
   packaging of OpenStack projects can work on build manifests for packages.
   This place could also become the main base for packages for all deb-based
   Linux distributions, like Debian and Ubuntu.

   - Package build manifests could be reviewed or adjusted not only by
   package maintainers but also by the python developers who are writing
   OpenStack code, and that could be very valuable due to possible changes in
   configuration, paths and so on.


Projects list:
Here I propose a list of OpenStack projects. It consists of about 25
projects, which of course will change over time as new projects appear.

Project name: possible github.com/openstack repository

ceilometer: http://github.com/openstack/deb-ceilometer
ceilometermiddleware: http://github.com/openstack/deb-ceilometermiddleware
cinder: http://github.com/openstack/deb-cinder
glance: http://github.com/openstack/deb-glance
glance_store: http://github.com/openstack/deb-glance_store
heat: http://github.com/openstack/deb-heat
horizon: http://github.com/openstack/deb-horizon
ironic: http://github.com/openstack/deb-ironic
keystone: http://github.com/openstack/deb-keystone
keystonemiddleware: http://github.com/openstack/deb-keystonemiddleware
mistral: http://github.com/openstack/deb-mistral
mistral-dashboard: http://github.com/openstack/deb-mistral-dashboard
murano: http://github.com/openstack/deb-murano
murano-dashboard: http://github.com/openstack/deb-murano-dashboard
neutron: http://github.com/openstack/deb-neutron
neutron-fwaas: http://github.com/openstack/deb-neutron-fwaas
neutron-lbaas: http://github.com/openstack/deb-neutron-lbaas
neutron-vpnaas: http://github.com/openstack/deb-neutron-vpnaas
nova: http://github.com/openstack/deb-nova
rally: http://github.com/openstack/deb-rally
sahara: http://github.com/openstack/deb-sahara
sahara-dashboard: http://github.com/openstack/deb-sahara-dashboard
swift: http://github.com/openstack/deb-swift
tempest: http://github.com/openstack/deb-tempest
trove: http://github.com/openstack/deb-trove


Thanks,
Igor Yozhikov
Senior Deployment Engineer
at Mirantis http://www.mirantis.com
skype: igor.yozhikov
cellular: +7 901 5331200
slack: iyozhikov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Doodle poll for New Alternating Week QA Meeting time

2015-07-22 Thread Matthew Treinish
On Tue, Jul 07, 2015 at 11:06:41AM -0400, Matthew Treinish wrote:
 
 Hi everyone,
 
 As a follow-up from the QA meeting from 2 weeks ago I'm starting a poll to
 reschedule the 22:00 UTC QA meeting:
 
 http://doodle.com/eykyptgi3ca3r9mk
 
 I used the yaml2ical repo (which is awesome and makes this so much easier)
 to figure out which slots were open for Thursday.
 
 To summarize the issue with the current time slot is that it's too early for
 most of APAC, and too late for parts of the Americas and Europe. This has led 
 to
 the current 22:00 UTC meeting slot be less productive because of lower overall
 attendance. The original goal of the 22:00 UTC was to have a more friendly 
 APAC
 time slot because the other 17:00 UTC time is more Europe and Americas 
 friendly.
 
 I'll keep the poll open for about a week or 2, or until there is a clear 
 winning
 time slot. We'll hopefully start using the new slot for the July 23rd meeting.
 (so the poll will definitely be closed by the 22nd)
 

Just a follow up I closed the poll for this. The timeslot that had the most
votes was 9:00 UTC.[1] I've pushed the irc-meetings patch to adjust the time for
the meeting:

https://review.openstack.org/#/c/204628/

I'll also be updating the meeting wiki page with the new time
shortly.

Thanks,

Matt Treinish

[1] http://doodle.com/eykyptgi3ca3r9mk


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc][deb-packaging] Updated proposal for OpenStack deb packages

2015-07-22 Thread Hayes, Graham
Hi, I replied previously, but it looks like I messed up the formatting
:).

Designate is missing from this list - we are currently packaged by both
Ubuntu and Debian.

We have 2 repos that would need packaging -

openstack/designate
openstack/designate-dashboard

Thanks,

Graham



On 22/07/15 15:12, Igor Yozhikov wrote:
 Hello again to everyone. I am resending this email due to a typo in the subject
 of the previous one; apologies for that.
 *Introductory words:*
  I want to present a renewed proposal for packaging of OpenStack
 components for deb-based Linux distributions.
 In case of stackforge retirement, I believe that new repositories for
 deb specs should appear under the /openstack/ name-space instead of
 /stackforge/.
 This and further steps were also advised by dhellmann, anteaya and
 angdraug at the #openstack-meeting irc channel during/after the
 Technical Committee meeting http://paste.openstack.org/show/399927/
 yesterday, 21 of Jul 2015.
 It would also be great to discuss all of this during the next TC meeting
 on July 28th.
 Link to the previous email:
 http://lists.openstack.org/pipermail/openstack-dev/2015-July/069377.html
 
 *Renewed proposal:*
  During the Liberty summit in Vancouver, the idea of maintaining OpenStack
 packages for Debian and Ubuntu on upstream infra was sparked. As part of the
 OpenStack and Debian community, we still want to push for it. In case of
 stackforge retirement it will be awesome to be able to work directly
 under the /openstack/ name-space. Mostly, it is the server packages for
 Debian which we want to see maintained there. All of the dependencies
 (including Oslo libraries and python-*client) will continue to be
 maintained using git.debian.org, as a shared
 effort between Debian and Ubuntu.
 
 Purpose:
 
   * One centralized place at github.com/openstack/ to maintain package
 build manifests/specs for the main OpenStack projects.
 
   * /openstack/ already has gerrit, to which additional gates for building
 and testing OpenStack packages for deb-based Linux distributions could be
 attached.
 
   * All changes related to the way one or another OpenStack project
 should be installed would be represented or proposed in one place. So not
 only famous package maintainers but also the entire community interested
 in packaging of OpenStack projects can work on build manifests for
 packages. This place could also become the main base for packages for all
 deb-based Linux distributions, like Debian and Ubuntu.
 
   * Package build manifests could be reviewed or adjusted not only by
 package maintainers but also by the python developers who are writing
 OpenStack code, and that could be very valuable due to possible changes
 in configuration, paths and so on.
 
 
 Projects list:
 
 Here I propose a list of OpenStack projects. It consists of about 25
 projects, which of course will change over time as new projects appear.
 
 Project name: possible github.com/openstack repository
 
 ceilometer: http://github.com/openstack/deb-ceilometer
 ceilometermiddleware: http://github.com/openstack/deb-ceilometermiddleware
 cinder: http://github.com/openstack/deb-cinder
 glance: http://github.com/openstack/deb-glance
 glance_store: http://github.com/openstack/deb-glance_store
 heat: http://github.com/openstack/deb-heat
 horizon: http://github.com/openstack/deb-horizon
 ironic: http://github.com/openstack/deb-ironic
 keystone: http://github.com/openstack/deb-keystone
 keystonemiddleware: http://github.com/openstack/deb-keystonemiddleware
 mistral: http://github.com/openstack/deb-mistral
 mistral-dashboard: http://github.com/openstack/deb-mistral-dashboard
 murano: http://github.com/openstack/deb-murano
 murano-dashboard: http://github.com/openstack/deb-murano-dashboard
 neutron: http://github.com/openstack/deb-neutron
 neutron-fwaas: http://github.com/openstack/deb-neutron-fwaas
 neutron-lbaas: http://github.com/openstack/deb-neutron-lbaas
 neutron-vpnaas: http://github.com/openstack/deb-neutron-vpnaas
 nova: http://github.com/openstack/deb-nova
 rally: http://github.com/openstack/deb-rally
 sahara: http://github.com/openstack/deb-sahara
 sahara-dashboard: http://github.com/openstack/deb-sahara-dashboard
 swift: http://github.com/openstack/deb-swift
 tempest: http://github.com/openstack/deb-tempest
 trove: http://github.com/openstack/deb-trove
 
 
 
 Thanks,
 Igor Yozhikov
 Senior Deployment Engineer
 at Mirantis http://www.mirantis.com
 skype: igor.yozhikov
 cellular: +7 901 5331200
 slack: iyozhikov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [all] CI Infrastructure down

2015-07-22 Thread Joshua Hesketh
Hi all,

The gate is slowly recovering and working through the backlog of changes.

It should now be safe to do 'recheck's on jobs that received NOT_REGISTERED
errors.

Cheers,
Josh

On Wed, Jul 22, 2015 at 11:08 PM, Andreas Jaeger a...@suse.com wrote:

 This morning we hit some problems with our CI infrastructure, the infra
 team is still investigating and trying to fix it.

 You can upload changes and comment on them but many jobs will fail with
 NOT_REGISTERED.

 Do not recheck these until the infrastructure is fixed again.

 Also, there's no sense in approving changes currently, they might be hit
 by the same NOT_REGISTERED jobs or fail in the post jobs.

 So, happy coding, reviewing - and local testing for now,

 Andreas
 --
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
HRB 21284 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Hiera nodes issue for 6.1 plugins

2015-07-22 Thread Swann Croiset
On Wed, Jul 22, 2015 at 1:46 PM, Aleksandr Didenko adide...@mirantis.com
wrote:

 Hi,

 I think we should just fix the bug to make nodes.yaml match the data
 in astute.yaml, because the 'nodes' Hiera key is also used for the
 /etc/hosts update. I've raised the bug priority to high.
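
For illustration, a rough way to spot the mismatch Alex describes - a
minimal sketch with hypothetical file locations, not part of the actual
fix:

    import yaml

    # Hypothetical paths; the real deployment layout may differ.
    ASTUTE = "/etc/astute.yaml"
    HIERA_NODES = "/etc/hiera/nodes.yaml"

    with open(ASTUTE) as f:
        astute_nodes = {n["uid"]: n
                        for n in yaml.safe_load(f).get("nodes", [])}
    with open(HIERA_NODES) as f:
        hiera_nodes = {n["uid"]: n
                       for n in yaml.safe_load(f).get("nodes", [])}

    # Nodes whose Hiera entry diverges from astute.yaml; this mismatch
    # is what also breaks the /etc/hosts update built from 'nodes'.
    for uid, node in astute_nodes.items():
        if hiera_nodes.get(uid) != node:
            print("mismatch for node uid=%s" % uid)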

 +1

 Regards,
 Alex

 On Wed, Jul 22, 2015 at 2:42 PM, Irina Povolotskaya 
 ipovolotsk...@mirantis.com wrote:

 Hi to all,

 Swann Croiset reported a bug on Hiera nodes [1].

 For the record, Samuel Bartel encountered this bug and I diagnosed it :)

 This issue affects several plugins so far.

 In 6.1, there is no workaround.
 In 7.0, there should be a new structure for networks_metadata;
 this means there will be ready-to-go puppet functions to get the data.

 Unfortunately, it impacts UX greatly.

 Thanks.


 [1] https://bugs.launchpad.net/fuel/+bug/1476957

 --
 Best regards,

 Irina

 *Business Analyst*

 *Partner Enablement*

 *skype: ira_live*







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] NoFreeConductorWorker going away with move to Futurist?

2015-07-22 Thread Dmitry Tantsur

Hi all!

Currently _spawn_worker in the conductor manager raises 
NoFreeConductorWorker if the pool is already full. That's not very user 
friendly (a potential source of retries in the client) and does not map 
well onto common async worker patterns.


My understanding is that it was done to prevent the conductor thread 
from waiting on the pool to become free. If this is true, we no longer 
need it after the switch to Futurist, as Futurist maintains an internal 
queue for its green executor, just like the thread and process executors 
in stdlib do. Instead of blocking the conductor, the request will be 
queued, and a user won't have to retry a vague (and rare!) HTTP 503 error.
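
To make the queuing behaviour concrete, here is a minimal sketch (not
the actual Ironic code; it assumes eventlet is available, since the
green executor needs it, and uses Futurist's check_and_reject hook for
the optional fail-fast variant):

    import time

    import futurist
    from futurist import rejection


    def do_work(i):
        time.sleep(0.1)
        return i

    # Default behaviour: submissions beyond max_workers are queued
    # internally instead of being rejected, so callers never see an
    # "out of workers" error.
    executor = futurist.GreenThreadPoolExecutor(max_workers=2)
    futures = [executor.submit(do_work, i) for i in range(10)]
    print([f.result() for f in futures])  # all ten run, just queued
    executor.shutdown()

    # Fail-fast variant: bound the backlog so that submit() raises
    # futurist.RejectedSubmission once 8 submissions are waiting.
    bounded = futurist.GreenThreadPoolExecutor(
        max_workers=2, check_and_reject=rejection.reject_when_reached(8))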


WDYT about me dropping this exception with the move to Futurist?

Dmitry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] SSL feature status

2015-07-22 Thread Sheena Gregson
I believe the last time we discussed this, the majority of people were in
favor of enabling SSL by default for all public endpoints, which would be
my recommendation.



As a reminder, this will mean that users will see a certificate warning the
first time they access the Fuel UI.  We should document this as a known
user experience and provide instructions for users to swap out the
self-signed certificates that are enabled by default for their own internal
CA certificates/3rd party certificates.
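
For illustration, a minimal client-side sketch of that experience
(the endpoint URL and CA bundle path are hypothetical):

    import requests

    FUEL_UI = "https://10.20.0.2:8443/"       # hypothetical master node URL
    INTERNAL_CA = "/etc/pki/internal-ca.pem"  # hypothetical CA bundle

    # With the default self-signed certificate, strict verification fails:
    try:
        requests.get(FUEL_UI)  # verify=True is the default
    except requests.exceptions.SSLError:
        print("self-signed certificate is not trusted, as expected")

    # After swapping in a certificate issued by an internal CA, clients
    # can verify against that CA bundle instead of disabling verification:
    response = requests.get(FUEL_UI, verify=INTERNAL_CA)
    print(response.status_code)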



*From:* Mike Scherbakov [mailto:mscherba...@mirantis.com]
*Sent:* Wednesday, July 22, 2015 1:12 AM
*To:* Stanislaw Bogatkin; Sheena Gregson
*Cc:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [Fuel] SSL feature status



Thanks Stas. My opinion is that it has to be enabled by default. I'd like
product management to chime in here. Sheena?





On Tue, Jul 21, 2015 at 11:06 PM Stanislaw Bogatkin sbogat...@mirantis.com
wrote:

Hi,



we have a spec that says we disable SSL by default, and it was merged that
way; no one was against that decision. So if we want to enable it by default
now, we can. It should be done as part of our usual process, I think - I'll
create a bug for it and fix it.



Current status of the feature is as follows:

1. All SSL code is merged

2. Some tests for it are being written by QA - not all of them are done yet.



I'll update the blueprints as soon as possible. Sorry for the inconvenience.



On Mon, Jul 20, 2015 at 8:44 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

Hi guys,

did we enable SSL for the Fuel Master node and OpenStack REST API endpoints
by default? If not, let's enable it by default. I don't see why we shouldn't.



Looks like we need to update the blueprints as well [1], [2], as they don't
seem to reflect the current status of the feature.



[1] https://blueprints.launchpad.net/fuel/+spec/ssl-endpoints

[2] https://blueprints.launchpad.net/fuel/+spec/fuel-ssl-endpoints



Stas, as you've been working on it, can you please provide current status?



Thanks,



-- 

Mike Scherbakov
#mihgen



-- 

Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev