Re: [openstack-dev] [Keystone][tc] Removal Plans for keystoneclient.middleware.auth_token

2015-01-08 Thread Sean Dague
On 01/08/2015 06:29 PM, Morgan Fainberg wrote:
 As of Juno all projects are using the new keystonemiddleware package for 
 auth_token middleware. Recently we’ve been running into issues with 
 maintenance of the now frozen (and deprecated) 
 keystoneclient.middleware.auth_token code. Ideally all deployments should 
 move over to the new package. In some cases this may or may not be as 
 feasible due to requirement changes when using the new middleware package on 
 particularly old deployments (Grizzly, Havana, etc).
 
 The Keystone team is looking for the best way to support our deployer 
 community. In a perfect world we would be able to convert icehouse 
 deployments to the new middleware package and instruct deployers to use 
 either an older keystoneclient or convert to keystonemiddleware if they want 
 the newest keystoneclient lib (regardless of their deployment release). For 
 releases older than Icehouse (EOLd) there is no way to communicate in the 
 repositories/tags a change to require keystonemiddleware.
 
 There are 2 viable options to get to where we only have one version of the 
 keystonemiddleware to maintain (which for a number of reasons, primarily 
 relating to security concerns is important).
 
 1) Work to update Icehouse to include the keystonemiddleware package for the 
 next stable release. Sometime after this stable release remove the auth_token 
 (and other middlewares) from keystoneclient. The biggest downside is this 
 adds new dependencies in an old release, which is poor for packaging and 
 deployers (making sure paste-ini is updated etc).
 
 2) Plan to remove auth_token from keystoneclient once icehouse hits EOL. This 
 is a better experience for our deployer base, but does not solve the issues 
 around solid testing with the auth_token middleware from keystoneclient 
 (except for the stable-icehouse devstack-gate jobs).
 
 I am looking for insight, preferences, and other options from the community 
 and the TC. I will propose this topic for the next TC meeting so that we can 
 have a clear view on how to handle this in the most appropriate way that 
 imparts the best balance between maintainability, security, and experience 
 for the OpenStack providers, deployers, and users.

So, ignoring the code a bit for a second, what are the interfaces which
are exposed that we're going to run into a breaking change here?

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The state of nova-network to neutron migration

2015-01-08 Thread Tom Fifield
On 09/01/15 08:06, Maru Newby wrote:
 
 On Jan 8, 2015, at 3:54 PM, Sean Dague s...@dague.net wrote:

 On 01/08/2015 06:41 PM, Maru Newby wrote:
 As per a recent exchange on #openstack-neutron, I’ve been asked to present 
 my views on this effort.  What follows is in no way intended to detract 
 from the hard work and dedication of those undertaking it, but I think that 
 their energy could be better spent.

 At nova’s juno mid-cycle in July, there was a discussion about deprecating 
 nova-network.  Most of the work-items on the TC’s gap analysis [1] had been 
 covered off, with one notable exception: Gap 6, the requirement to provide 
 a migration plan between nova-network and neutron, had stalled over 
 questions of implementation strategy.

 In my recollection of the conversation that followed, broad consensus was 
 reached that the costs of automating a reliable and fault-tolerant 
 migration strategy would be considerable.  The technical complexity of 
 targeting a fixed deployment scenario would be challenging enough, and 
 targeting heterogeneous scenarios would magnify that complexity many-fold.  
 Given the cost and high risks associated with implementing an automated 
 solution, everyone seemed to agree that it was not worth pursuing.  Our 
 understanding was that not pursuing an automated solution could still be in 
 keeping with the TC’s requirements for deprecation, which required that a 
 migration plan be formulated but not that it be automated.  Documentation 
 was deemed sufficient, and that was to be the path forward in covering Gap 
 6.  The documentation would allow deployers and operators to devise 
 migration strategies to suit their individual requirements.

 Then, when the Kilo summit schedule was announced, there was a session 
 scheduled in the nova track for discussing how to implement an automated 
 migration.  I only managed to catch the tail end of the session, but the 
 etherpad [2] makes no mention of the rationale for requiring an automated 
 migration in the first place.  It was like the discussion at the mid-cycle, 
 and all the talk of the risks outweighing the potential benefits of such an 
 effort, had simply not occurred.

 So, in the interests of a full and open discussion on this matter, can 
 someone please explain to me why the risks discussed at the mid-cycle were 
 suddenly deemed justifiable, seemingly against all technical rationale?  
 Criticism has been leveled at the neutron project for our supposed inaction 
 in implementing an automated solution, and I don’t think I’m the only one 
 who is concerned that this is an unreasonable requirement imposed without 
 due consideration to the risks involved.  Yes, most of us want to see 
 nova-network deprecated, but why is the lack of migration automation 
 blocking that?  An automated migration was not a requirement in the TC’s 
 original assessment of the preconditions for deprecation.  I think that if 
 neutron is deemed to be of sufficiently high quality that it can serve as 
 an effective replacement for nova-network, and we can document a migration 
 plan between them, then deprecation should proceed.


 Maru

 The crux of it comes from the fact that the operator voice (especially
 those folks with large nova-network deploys) wasn't represented there.
 Once we got back from the mid-cycle and brought it to the list, there
 was some very understandable push back on deprecating without a
 migration plan.
 
 I think it’s clear that a migration plan is required.  An automated 
 migration, not so much.
 

 I believe we landed at the need for the common case, flat multi host
 networking, to be migrated to something equivalent in neutron land
 (dvr?). And it needs to be something that Metacloud and CERN can get
 behind, as they represent 2 very large nova-network deploys (and have
 reasonably well defined down time allowances for this).

 This doesn't have to be automation for all cases, but we need to support
 a happy path from one to the other that's repeatable, reasonably
 automatic (as much as possible), and provides minimum down time for
 guests running on the environment.
 
 The fact that operators running nova-network would like the upstream 
 community to pay for implementing an automated migration solution for them is 
 hardly surprising.  It is less clear to me that implementing such a solution, 
 with all the attendant cost and risks, should take priority over efforts that 
 benefit a broader swath of the community.  Are the operators in question so 
 strapped for resources that they are not able to automate their migrations 
 themselves, provided a sufficiently detailed plan to do so?

Maru,

This effort does benefit a broad swath of the community.


Regards,


Tom


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Request Spec Freeze Exception

2015-01-08 Thread Robert Li (baoli)
Hi,

During the Kilo summit, the folks in the pci passthrough and SR-IOV groups 
discussed what we’d like to achieve in this cycle, and the result was 
documented in this Etherpad:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

To get the work going, we’ve submitted a few design specs:

Nova: Live migration with macvtap SR-IOV
https://blueprints.launchpad.net/nova/+spec/sriov-live-migration

Nova: sriov interface attach/detach
https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach

Nova: API specify vnic_type
https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type

Nova: SRIOV scheduling with stateless offloads
https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads

Nova: Add spec for VIF Driver for SR-IOV InfiniBand
https://review.openstack.org/#/c/131729/

Thanks for your kind consideration.

—Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without stopping services

2015-01-08 Thread Kevin Benton
Is there another openstack service that allows this so we can make the API
consistent between the two when this change is made?

On Thu, Jan 8, 2015 at 3:09 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 I added a link to @Jack's post to the ML to the bug report [1].  I am
 willing to support @Itsuro with reviews of the implementation and am
 willing to consult if you need and would like to ping me.

 Carl

 [1] https://bugs.launchpad.net/neutron/+bug/1408488

 On Thu, Jan 8, 2015 at 7:49 AM, McCann, Jack jack.mcc...@hp.com wrote:
  +1 on need for this feature
 
  The way I've thought about this is we need a mode that stops the
 *automatic*
  scheduling of routers/dhcp-servers to specific hosts/agents, while
 allowing
  manual assignment of routers/dhcp-servers to those hosts/agents, and
 where
  any existing routers/dhcp-servers on those hosts continue to operate as
 normal.
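
  For reference, the manual-assignment half of this already exists in the
  CLI today; a sketch of that workflow (command names as I recall them from
  python-neutronclient, so double-check the exact syntax against your release):

      neutron agent-list
      neutron router-list-on-l3-agent <l3-agent-id>
      neutron l3-agent-router-remove <l3-agent-id> <router-id>
      neutron l3-agent-router-add <other-l3-agent-id> <router-id>

  (with the dhcp-agent-network-add/remove equivalents for DHCP). The missing
  piece is the knob that switches off only the *automatic* scheduling while
  leaving the above, and already-hosted routers, untouched.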
 
  The maintenance use case was mentioned: I want to evacuate
 routers/dhcp-servers
  from a host before taking it down, and having the scheduler add new
 routers/dhcp
  while I'm evacuating the node is a) an annoyance, and b) causes a
 service blip
  when I have to right away move that new router/dhcp to another host.
 
  The other use case is adding a new host/agent into an existing
 environment.
  I want to be able to bring the new host/agent up and into the neutron
 config, but
  I don't want any of my customers' routers/dhcp-servers scheduled there
 until I've
  had a chance to assign some test routers/dhcp-servers and make sure the
 new server
  is properly configured and fully operational.
 
  - Jack
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tempest Bug triage

2015-01-08 Thread GHANSHYAM MANN
All,
As we all know, the bug triage rotation is working well in QA and we are
keeping the new bug count low.

Thanks everyone for signing up in bug triage rotation.

To continue the same strategy and keep making good progress on bugs, we need
more volunteers to sign up for the coming weeks.

If you would like to help with bug triage, feel free to put your name in
https://etherpad.openstack.org/p/qa-bug-triage-rotation

Thanks
gmann

On Fri, Sep 12, 2014 at 4:52 AM, David Kranz dkr...@redhat.com wrote:

 So we had a Bug Day this week and the results were a bit disappointing due
 to lack of participation. We went from 124 New bugs to 75. There were also
 many cases where bugs referred to logs that no longer existed. This
 suggests that we really need to keep up with bug triage in real time. Since
 bug triage should involve the Core review team, we propose to rotate the
 responsibility of triaging bugs weekly. I put up an etherpad here
 https://etherpad.openstack.org/p/qa-bug-triage-rotation and I hope the
 tempest core review team will sign up. Given our size, this should involve
 signing up once every two months or so. I took next week.

  -David

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks & Regards
Ghanshyam Mann
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] offlist: The scope of OpenStack wiki [all]

2015-01-08 Thread Anne Gentle
Hi Stef, thanks for writing this up. One aspect this proposal doesn't
address is the ungoverned content for projects that are either in
stackforge, pre-stackforge, incubating, or have no intention of any
governance but want to use the openstack wiki. What can we do if one of
those groups raises the issue? We can talk more about it tomorrow but it's
problematic. Not unsolvable but lack of governance is one reason to be on
the wiki.
Anne

On Thu, Jan 8, 2015 at 12:31 PM, Stefano Maffulli stef...@openstack.org
wrote:

 hello folks,

 TL;DR Many wiki pages and categories are now maintained elsewhere, and to
 avoid confusing newcomers we need to agree on a new scope for the wiki.
 The suggestion below is to limit its scope to content that doesn't
 need/want peer review and is not hosted elsewhere (no duplication).

 For many years the wiki served as a 'poor man's CMS', back when we
 didn't have an easy way to collaboratively create content. So the wiki
 ended up hosting pages like 'Getting started with OpenStack', demo
 videos, How to contribute and the mission; documenting our culture and
 shared understandings (4 opens, release cycle, use of blueprints, stable branch
 policy...); and maintaining the list of Programs, meetings/teams, blueprints
 and specs, plus lots of random documentation and more.

 Lots of the content originally placed on the wiki was there because
 there was no better place. Now that we have more mature content and
 processes, this content is finding its way out of the wiki, for example:

   * http://governance.openstack.org
   * http://specs.openstack.org
   * http://docs.openstack.org/infra/manual/

 Also, the Introduction to OpenStack is maintained on
 www.openstack.org/software/ together with introductory videos and other
 basic material. A redesign of openstack.org/community and the new portal
 groups.openstack.org are making even more wiki pages obsolete.

 This makes the wiki very confusing to newcomers and more likely to host
 conflicting information.

 I would propose restricting the scope of the wiki to anything that
 doesn't need or want to be peer-reviewed. Things like:

   * agendas for meetings, sprints, etc
   * list of etherpads for summits
   * quick prototypes of new programs (mentors, upstream training) before
 they find a stable home (which can still be the wiki)

 Also, documentation for contributors and users should not be on the
 wiki, but on docs.openstack.org (where it can be found more easily).

 If nobody objects, I'll start by proposing a new home page design and
 start tagging content that may be moved elsewhere.

 /stef


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The scope of OpenStack wiki [all]

2015-01-08 Thread Stefano Maffulli
On Thu, 2015-01-08 at 12:52 -0800, Sean Roberts wrote:
 Thanks for bringing this up Stef. I have found, while introducing
 OpenStack fundamentals in the user groups, that what seems logical to us,
 the fully OpenStack-immersed, is confusing to newcomers.

Yeah, I'm diving deeper into the wiki in preparation for a redesign of
docs.openstack.org/developer (which I think should be called
contributor/ in order to avoid collision with developer.openstack.org).
The How_to_contribute page on the wiki is way past its prime and needs to
be gutted, too.

 A specific example comes from the How To Contribute wiki page.
 The explanation of the corporate CLA and CCLA link was moved to the infra
 manual. It is a cleaner presentation of the information for sure. It
 also left the Google-searchable wiki links hanging, and that is the
 primary way most newcomers will look for information. It wasn't easy
 to find the information once it was brought to my attention that it was
 missing. I fixed it up pretty easily after that.
 
Indeed, this is something I've experienced myself. Moving content
around, changing processes, etc. are things we should all do with care,
allowing time and communicating constantly across multiple channels
(and taking care of proper HTTP redirects, when possible, to instruct
web spiders, too).

In order to better map the anchors we had on the wiki, I've suggested
adding a subsection to the developer guide in infra-manual:
https://review.openstack.org/145971

so that we can have something more precise for the CLA than the general link
to
http://docs.openstack.org/infra/manual/developers.html#account-setup

I've also moved most of the content from CLA-FAQ into a set of FAQ on
Ask OpenStack, since that site gets lots more hits from search engines:

https://ask.openstack.org/en/questions/scope:all/sort:activity-desc/tags:cla,faq/page:1/

/stef


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The state of nova-network to neutron migration

2015-01-08 Thread Sean Dague
On 01/08/2015 06:41 PM, Maru Newby wrote:
 As per a recent exchange on #openstack-neutron, I’ve been asked to present my 
 views on this effort.  What follows is in no way intended to detract from the 
 hard work and dedication of those undertaking it, but I think that their 
 energy could be better spent.
 
 At nova’s juno mid-cycle in July, there was a discussion about deprecating 
 nova-network.  Most of the work-items on the TC’s gap analysis [1] had been 
 covered off, with one notable exception: Gap 6, the requirement to provide a 
 migration plan between nova-network and neutron, had stalled over questions 
 of implementation strategy.
 
 In my recollection of the conversation that followed, broad consensus was 
 reached that the costs of automating a reliable and fault-tolerant migration 
 strategy would be  considerable.  The technical complexity of targeting a 
 fixed deployment scenario would be challenging enough, and targeting 
 heterogenous scenarios would magnify that complexity many-fold.  Given the 
 cost and high risks associated with implementing an automated solution, 
 everyone seemed to agree that it was not worth pursuing.  Our understanding 
 was that not pursuing an automated solution could still be in keeping with 
 the TC’s requirements for deprecation, which required that a migration plan 
 be formulated but not that it be automated.  Documentation was deemed 
 sufficient, and that was to be the path forward in covering Gap 6.  The 
 documentation would allow deployers and operators to devise migration 
 strategies to suit their individual requirements.
 
 Then, when the Kilo summit schedule was announced, there was a session 
 scheduled in the nova track for discussing how to implement an automated 
 migration.  I only managed to catch the tail end of the session, but the 
 etherpad [2] makes no mention of the rationale for requiring an automated 
 migration in the first place.  It was like the discussion at the mid-cycle, 
 and all the talk of the risks outweighing the potential benefits of such an 
 effort, had simply not occurred.
 
 So, in the interests of a full and open discussion on this matter, can 
 someone please explain to me why the risks discussed at the mid-cycle were 
 suddenly deemed justifiable, seemingly against all technical rationale?  
 Criticism has been leveled at the neutron project for our supposed inaction 
 in implementing an automated solution, and I don’t think I’m the only one who 
 is concerned that this is an unreasonable requirement imposed without due 
 consideration to the risks involved.  Yes, most of us want to see 
 nova-network deprecated, but why is the lack of migration automation blocking 
 that?  An automated migration was not a requirement in the TC’s original 
 assessment of the preconditions for deprecation.  I think that if neutron is 
 deemed to be of sufficiently high quality that it can serve as an effective 
 replacement for nova-network, and we can document a migration plan between 
 them, then deprecation should proceed.
 
 
 Maru

The crux of it comes from the fact that the operator voice (especially
those folks with large nova-network deploys) wasn't represented there.
Once we got back from the mid-cycle and brought it to the list, there
was some very understandable push back on deprecating without a
migration plan.

I believe we landed at the need for the common case, flat multi host
networking, to be migrated to something equivalent in neutron land
(dvr?). And it needs to be something that Metacloud and CERN can get
behind, as they represent 2 very large nova-network deploys (and have
reasonably well defined down time allowances for this).

This doesn't have to be automation for all cases, but we need to support
a happy path from one to the other that's repeatable, reasonably
automatic (as much as possible), and provides minimum down time for
guests running on the environment.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The state of nova-network to neutron migration

2015-01-08 Thread Maru Newby
As per a recent exchange on #openstack-neutron, I’ve been asked to present my 
views on this effort.  What follows is in no way intended to detract from the 
hard work and dedication of those undertaking it, but I think that their energy 
could be better spent.

At nova’s juno mid-cycle in July, there was a discussion about deprecating 
nova-network.  Most of the work-items on the TC’s gap analysis [1] had been 
covered off, with one notable exception: Gap 6, the requirement to provide a 
migration plan between nova-network and neutron, had stalled over questions of 
implementation strategy.

In my recollection of the conversation that followed, broad consensus was 
reached that the costs of automating a reliable and fault-tolerant migration 
strategy would be considerable.  The technical complexity of targeting a fixed 
deployment scenario would be challenging enough, and targeting heterogeneous 
scenarios would magnify that complexity many-fold.  Given the cost and high 
risks associated with implementing an automated solution, everyone seemed to 
agree that it was not worth pursuing.  Our understanding was that not pursuing 
an automated solution could still be in keeping with the TC’s requirements for 
deprecation, which required that a migration plan be formulated but not that it 
be automated.  Documentation was deemed sufficient, and that was to be the path 
forward in covering Gap 6.  The documentation would allow deployers and 
operators to devise migration strategies to suit their individual requirements.

Then, when the Kilo summit schedule was announced, there was a session 
scheduled in the nova track for discussing how to implement an automated 
migration.  I only managed to catch the tail end of the session, but the 
etherpad [2] makes no mention of the rationale for requiring an automated 
migration in the first place.  It was like the discussion at the mid-cycle, and 
all the talk of the risks outweighing the potential benefits of such an effort, 
had simply not occurred.

So, in the interests of a full and open discussion on this matter, can someone 
please explain to me why the risks discussed at the mid-cycle were suddenly 
deemed justifiable, seemingly against all technical rationale?  Criticism has 
been leveled at the neutron project for our supposed inaction in implementing 
an automated solution, and I don’t think I’m the only one who is concerned that 
this is an unreasonable requirement imposed without due consideration to the 
risks involved.  Yes, most of us want to see nova-network deprecated, but why 
is the lack of migration automation blocking that?  An automated migration was 
not a requirement in the TC’s original assessment of the preconditions for 
deprecation.  I think that if neutron is deemed to be of sufficiently high 
quality that it can serve as an effective replacement for nova-network, and we 
can document a migration plan between them, then deprecation should proceed.


Maru


1: 
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee/Neutron_Gap_Coverage
2: https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron

 On Dec 19, 2014, at 8:59 AM, Anita Kuno ante...@anteaya.info wrote:
 
 Rather than waste your time making excuses let me state where we are and
 where I would like to get to, also sharing my thoughts about how you can
 get involved if you want to see this happen as badly as I have been told
 you do.
 
 Where we are:
* a great deal of foundation work has been accomplished to achieve
 parity with nova-network and neutron to the extent that those involved
 are ready for migration plans to be formulated and be put in place
* a summit session happened with notes and intentions[0]
* people took responsibility and promptly got swamped with other
 responsibilities
* spec deadlines arose and in neutron's case have passed
* currently a neutron spec [1] is a work in progress (and it needs
 significant work still) and a nova spec is required and doesn't have a
 first draft or a champion
 
 Where I would like to get to:
* I need people in addition to Oleg Bondarev to be available to help
 come up with ideas and words to describe them to create the specs in a
 very short amount of time (Oleg is doing great work and is a fabulous
 person, yay Oleg, he just can't do this alone)
* specifically I need a contact on the nova side of this complex
 problem, similar to Oleg on the neutron side
* we need to have a way for people involved with this effort to find
 each other, talk to each other and track progress
* we need to have representation at both nova and neutron weekly
 meetings to communicate status and needs
 
 We are at K-2 and our current status is insufficient to expect this work
 will be accomplished by the end of K-3. I will be championing this work,
 in whatever state, so at least it doesn't fall off the map. If you would
 like to help this effort please get in contact. I will be thinking of
 ways to 

Re: [openstack-dev] [Manila] Driver modes, share-servers, and clustered backends

2015-01-08 Thread Li, Chen
Thanks for the explanations! 
Really helpful.

My questions are added in line.

Thanks.
-chen

-Original Message-
From: Ben Swartzlander [mailto:b...@swartzlander.org] 
Sent: Friday, January 09, 2015 6:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Manila] Driver modes, share-servers, and clustered 
backends

There has been some confusion on the topic of driver modes and share-server, 
especially as they related to storage controllers with multiple physical nodes, 
so I will try to clear up the confusion as much as I can.

Manila has had the concept of share-servers since late icehouse. This feature 
was added to solve 3 problems:
1) Multiple drivers were creating storage VMs / service VMs as a side effect of 
share creation and Manila didn't offer any way to manage or even know about 
these VMs that were created.
2) Drivers needed a way to keep track of (persist) what VMs they had created

== so, a corresponding relationship does exist between share servers and virtual 
machines.

3) We wanted to standardize across drivers what these VMs looked like to Manila 
so that the scheduler and share-manager could know about them

== Q: why do the scheduler and share-manager need to know about them?

It's important to recognize that from Manila's perspective, all a share-server 
is is a container for shares that's tied to a share network and that also has 
some network allocations. It's also important to know that each share-server 
can have zero, one, or multiple IP addresses and can exist on an arbitrarily 
large number of physical nodes, and the actual form that a share-server takes 
is completely undefined.
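
To make that concrete, you can picture a share-server record as roughly the 
following (illustrative only, not the exact model):

    share_server:
        share_network_id:    the share network it is tied to
        network_allocations: []    (zero, one or many IP allocations)
        backend_details:     {}    (opaque, driver-defined)
        shares:              [...] (the shares living in this container)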

During Juno, drivers that didn't explicitly support the concept of share-servers 
basically got a dummy share server created which acted as a giant container for 
all the shares that backend created. This worked okay, but it was informal and 
not documented, and it made some of the things we want to do in Kilo impossible.

== Q, what things are impossible? The dummy share server solution makes sense to 
me.

To solve the above problem I proposed driver modes. Initially I proposed
3 modes:
1) single_svm
2) flat_multi_svm
3) managed_multi_svm

Mode (1) was supposed to correspond to drivers that didn't deal with share 
servers, and modes (2) and (3) were for drivers that did deal with share 
servers, where the difference between those 2 modes came down to networking 
details. We realized that (2) can be implemented as a special case of (3), so we 
collapsed the modes down to 2 and that's what's merged upstream now.

== driver that didn't deal with share servers
  = https://blueprints.launchpad.net/manila/+spec/single-svm-mode-for-generic-driver
  = This is where I get totally lost.
  = Because in this mode the generic driver does not create and delete share servers and 
their related networks, but it would still use a share server (the service VM).
  = The share (the cinder volume) needs to attach to an instance no matter what 
the driver mode is.
  = I think using one is some kind of dealing with it, too.

The specific names we settled on (single_svm and multi_svm) were perhaps poorly 
chosen, because svm is not a term we've used officially (unofficially we do 
talk about storage VMs and service VMs and the svm term captured both concepts 
nicely) and as some have pointed out, even multi and single aren't completely 
accurate terms because what we mean when we say single_svm is that the driver 
doesn't create/destroy share servers, it uses something created externally.

== If we use svm instead of share server in the code, I'm OK with svm. I'd 
like the mode names and the code implementation to be consistent.

So one thing I want everyone to understand is that you can have a single_svm 
driver which is implemented by a large cluster of storage controllers, and you 
can have a multi_svm driver which is implemented by a single box with some form 
of network and service virtualization. The two concepts are orthogonal.

The other thing we need to decide (hopefully at our upcoming Jan 15
meeting) is whether to change the mode names and if so what to change them to. 
I've created the following etherpad with all of the suggestions I've heard so 
far and the my feedback on each:
https://etherpad.openstack.org/p/manila-driver-modes-discussion

-Ben Swartzlander


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] serial-console *replaces* console-log file?

2015-01-08 Thread Lingxian Kong
There is an excellent post describing this, for your information:
http://blog.oddbit.com/2014/12/22/accessing-the-serial-console-of-your-nova-servers/
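
In short, the websocket console is controlled by the [serial_console] section
of nova.conf. A minimal sketch (option names from memory, so verify against
your release's config reference):

    [serial_console]
    enabled = true
    # URL handed back to clients by the get-serial-console API
    base_url = ws://127.0.0.1:6083/
    # TCP ports the compute host may bind for guest serial devices
    port_range = 10000:20000

And as Markus notes below, once this is enabled the libvirt driver stops
writing the console.log file, which is exactly what breaks `nova console-log`.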

2015-01-07 22:38 GMT+08:00 Markus Zoeller mzoel...@de.ibm.com:
 The blueprint serial-ports introduced a serial console connection
 to an instance via websocket. I'm wondering
 * why enabling the serial console *replaces* writing into log file [1]?
 * how one is supposed to retrieve the boot messages *before* one connects?

 The replacement of the log file has impact on the os-console-output
 API [2]. The CLI command `nova console-log instance-name` shows:
 ERROR (ClientException): The server has either erred or is incapable
 of performing the requested operation. (HTTP 500)
 Horizon shows in its Log tab of an instance
 Unable to get log for instance uuid.

 Would it be good to have both, the serial console *and* the console log
 file?


 [1]
 https://review.openstack.org/#/c/113960/14/nova/virt/libvirt/driver.py,cm
 [2]
 http://developer.openstack.org/api-ref-compute-v2-ext.html#ext-os-console-output


 Regards,
 Markus Zoeller
 IRC: markus_z
 Launchpad: mzoeller


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards!
---
Lingxian Kong

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][tc] Removal Plans for keystoneclient.middleware.auth_token

2015-01-08 Thread Morgan Fainberg

 On Jan 8, 2015, at 3:56 PM, Sean Dague s...@dague.net wrote:
 
 On 01/08/2015 06:29 PM, Morgan Fainberg wrote:
 As of Juno all projects are using the new keystonemiddleware package for 
 auth_token middleware. Recently we’ve been running into issues with 
 maintenance of the now frozen (and deprecated) 
 keystoneclient.middleware.auth_token code. Ideally all deployments should 
 move over to the new package. In some cases this may or may not be as 
 feasible due to requirement changes when using the new middleware package on 
 particularly old deployments (Grizzly, Havana, etc).
 
 The Keystone team is looking for the best way to support our deployer 
 community. In a perfect world we would be able to convert icehouse 
 deployments to the new middleware package and instruct deployers to use 
 either an older keystoneclient or convert to keystonemiddleware if they want 
 the newest keystoneclient lib (regardless of their deployment release). For 
 releases older than Icehouse (EOLd) there is no way to communicate in the 
 repositories/tags a change to require keystonemiddleware.
 
 There are 2 viable options to get to where we only have one version of the 
 keystonemiddleware to maintain (which for a number of reasons, primarily 
 relating to security concerns is important).
 
 1) Work to update Icehouse to include the keystonemiddleware package for the 
 next stable release. Sometime after this stable release remove the 
 auth_token (and other middlewares) from keystoneclient. The biggest downside 
 is this adds new dependencies in an old release, which is poor for packaging 
 and deployers (making sure paste-ini is updated etc).
 
 2) Plan to remove auth_token from keystoneclient once icehouse hits EOL. 
 This is a better experience for our deployer base, but does not solve the 
 issues around solid testing with the auth_token middleware from 
 keystoneclient (except for the stable-icehouse devstack-gate jobs).
 
 I am looking for insight, preferences, and other options from the community 
 and the TC. I will propose this topic for the next TC meeting so that we can 
 have a clear view on how to handle this in the most appropriate way that 
 imparts the best balance between maintainability, security, and experience 
 for the OpenStack providers, deployers, and users.
 
 So, ignoring the code a bit for a second, what are the interfaces which
 are exposed that we're going to run into a breaking change here?
 
   -Sean
 


There are some configuration options provided by the auth_token middleware, and the 
paste-ini files load keystoneclient.middleware.auth_token to handle validation 
of tokens and then convert the token data into an auth_context passed down to the 
service.

There are no external interfaces (or there should be none) beyond the __call__ of the 
middleware and the options themselves.
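
Concretely, the deployer-visible part is a one-line change in each service's
api-paste.ini, along the lines of (sketch):

    [filter:authtoken]
    # old, deprecated location:
    # paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    # new package:
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory

plus whatever [keystone_authtoken] options the deployment already sets, which
carry over largely unchanged.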

Cheers,
—Morgan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence

2015-01-08 Thread vishnu
Steve,

Auto recovery is the plan. Engine failure should be detected by way of a
heartbeat, or a partially realised stack should be recovered on engine startup
in the single-engine scenario.

The --continue command was just an additional helper API.
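
Roughly what I have in mind for the takeover logic (a self-contained toy
sketch, not actual Heat code; all names below are made up):

    import time

    HEARTBEAT_TIMEOUT = 60  # seconds, illustrative

    # engine id -> timestamp of last heartbeat (would live in the DB)
    engines = {'engine-1': time.time() - 300, 'engine-2': time.time()}
    # stacks with an operation in progress and the engine holding the lock
    in_progress = [{'stack': 'web-tier', 'lock_holder': 'engine-1'},
                   {'stack': 'db-tier', 'lock_holder': 'engine-2'}]

    def is_dead(engine_id):
        return time.time() - engines.get(engine_id, 0) > HEARTBEAT_TIMEOUT

    for op in in_progress:
        if is_dead(op['lock_holder']):
            # a live engine steals the lock and re-issues the update,
            # working from the realized vs. desired graph
            print('stealing lock for %s and continuing' % op['stack'])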






Visnusaran Murugan
about.me/ckmvishnu



On Thu, Jan 8, 2015 at 11:29 PM, Steven Hardy sha...@redhat.com wrote:

 On Thu, Jan 08, 2015 at 09:53:02PM +0530, vishnu wrote:
 Hi Zane,
 I was wondering if we could push changes relating to backup stack
 removal
 and to not load resources as part of stack. There needs to be a
 capability
 to restart jobs left over by dead engines.
 something like heat stack-operation --continue [git rebase --continue]

 To me, it's pointless if the user has to restart the operation, they can do
 that already, e.g by triggering a stack update after a failed stack create.

 The process needs to be automatic IMO, if one engine dies, another engine
 should detect that it needs to steal the lock or whatever and continue
 whatever was in-progress.

 Had a chat with shady regarding this. IMO this would be a valuable
 enhancement. Notification based lead sharing can be taken up upon
 completion.

 I was referring to a capability for the service to transparently recover
 if, for example, a heat-engine is restarted during a service upgrade.

 Currently, users will be impacted in this situation, and making them
 manually restart failed operations doesn't seem like a super-great solution
 to me (like I said, they can already do that to some extent)

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence

2015-01-08 Thread Murugan, Visnusaran
Steve,

My reasoning for having a “--continue”-like functionality was to run it as a 
periodic task and substitute for a continuous observer for now.

A “--continue”-based command should work on the realized vs. actual graph and issue a 
stack update.

I completely agree that user action should not be needed to realize a partially 
completed stack.

Your thoughts.

From: vishnu [mailto:ckmvis...@gmail.com]
Sent: Friday, January 9, 2015 10:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence

Steve,

Auto recovery is the plan. Engine failure should be detected by way of 
heartbeat or recover partially realised stack on engine startup in case of a 
single engine scenario.

--continue command was just a additional helper api.











Visnusaran Murugan
about.me/ckmvishnu









On Thu, Jan 8, 2015 at 11:29 PM, Steven Hardy 
sha...@redhat.com wrote:
On Thu, Jan 08, 2015 at 09:53:02PM +0530, vishnu wrote:
Hi Zane,
I was wondering if we could push changes relating to backup stack removal
and to not load resources as part of stack. There needs to be a capability
to restart jobs left over by dead engines.
something like heat stack-operation --continue [git rebase --continue]

To me, it's pointless if the user has to restart the operation, they can do
that already, e.g by triggering a stack update after a failed stack create.

The process needs to be automatic IMO, if one engine dies, another engine
should detect that it needs to steal the lock or whatever and continue
whatever was in-progress.

Had a chat with shady regarding this. IMO this would be a valuable
enhancement. Notification based lead sharing can be taken up upon
completion.

I was referring to a capability for the service to transparently recover
if, for example, a heat-engine is restarted during a service upgrade.

Currently, users will be impacted in this situation, and making them
manually restart failed operations doesn't seem like a super-great solution
to me (like I said, they can already do that to some extent)

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][tc] Removal Plans for keystoneclient.middleware.auth_token

2015-01-08 Thread Morgan Fainberg


 On Jan 8, 2015, at 16:10, Sean Dague s...@dague.net wrote:
 
 On 01/08/2015 07:01 PM, Morgan Fainberg wrote:
 
 On Jan 8, 2015, at 3:56 PM, Sean Dague s...@dague.net wrote:
 
 On 01/08/2015 06:29 PM, Morgan Fainberg wrote:
 As of Juno all projects are using the new keystonemiddleware package for 
 auth_token middleware. Recently we’ve been running into issues with 
 maintenance of the now frozen (and deprecated) 
 keystoneclient.middleware.auth_token code. Ideally all deployments should 
 move over to the new package. In some cases this may or may not be as 
 feasible due to requirement changes when using the new middleware package 
 on particularly old deployments (Grizzly, Havana, etc).
 
 The Keystone team is looking for the best way to support our deployer 
 community. In a perfect world we would be able to convert icehouse 
 deployments to the new middleware package and instruct deployers to use 
 either an older keystoneclient or convert to keystonemiddleware if they 
 want the newest keystoneclient lib (regardless of their deployment 
 release). For releases older than Icehouse (EOLd) there is no way to 
 communicate in the repositories/tags a change to require 
 keystonemiddleware.
 
 There are 2 viable options to get to where we only have one version of the 
 keystonemiddleware to maintain (which for a number of reasons, primarily 
 relating to security concerns is important).
 
 1) Work to update Icehouse to include the keystonemiddleware package for 
 the next stable release. Sometime after this stable release remove the 
 auth_token (and other middlewares) from keystoneclient. The biggest 
 downside is this adds new dependencies in an old release, which is poor 
 for packaging and deployers (making sure paste-ini is updated etc).
 
 2) Plan to remove auth_token from keystoneclient once icehouse hits EOL. 
 This is a better experience for our deployer base, but does not solve the 
 issues around solid testing with the auth_token middleware from 
 keystoneclient (except for the stable-icehouse devstack-gate jobs).
 
 I am looking for insight, preferences, and other options from the 
 community and the TC. I will propose this topic for the next TC meeting so 
 that we can have a clear view on how to handle this in the most 
 appropriate way that imparts the best balance between maintainability, 
 security, and experience for the OpenStack providers, deployers, and users.
 
 So, ignoring the code a bit for a second, what are the interfaces which
 are exposed that we're going to run into a breaking change here?
 
-Sean
 
 
 There are some configuration options provided by auth_token middleware and 
 the paste-ini files load keystoneclient.middleware.auth_token to handle 
 validation of Tokens then converting the token data to an auth_context 
 passed down to the service.
 
 There are no external (should be no) interfaces beyond the __call__ of the 
 middleware and options themselves.
 
 Ok, so ... if this isn't out on the network, is the only reason this is
 an issue is that keystoneclient version is unbounded in the stable branches?
 

That is a fair assessment of the situation. 

--Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Group-based-policy] New addition to the core team

2015-01-08 Thread Sumit Naiksatam
Hi, I would like to propose Magesh GV (magesh-gv) to the Group-based
Policy (GBP) core team based on his excellent contribution to the
project. We discussed this during the weekly IRC meeting [1] and the
current core team unanimously supports this. Let us know if there are
any objections, otherwise Magesh, welcome to the core team!

Thanks,
~Sumit.
[1] 
http://eavesdrop.openstack.org/meetings/networking_policy/2015/networking_policy.2015-01-08-18.08.log.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-08 Thread Jerry Xinyu Zhao
tuskar-ui is supposed to enroll nodes into ironic.

On Thu, Jan 8, 2015 at 4:36 AM, Zhou, Zhenzan zhenzan.z...@intel.com
wrote:

 Sounds like we could add something new to automate the enrollment of new
 nodes:-)
 Collecting IPMI info into a csv file is still a trivial job...

 BR
 Zhou Zhenzan

 -Original Message-
 From: Dmitry Tantsur [mailto:dtant...@redhat.com]
 Sent: Thursday, January 8, 2015 5:19 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

 On 01/08/2015 06:48 AM, Kumar, Om (Cloud OS RD) wrote:
  My understanding of discovery was to get all details for a node and then
 register that node to ironic. i.e. Enrollment of the node to ironic. Pardon
 me if it was out of line with your understanding of discovery.
 That's why we agreed to use terms inspection/introspection :) sorry for
 not being consistent here (name 'discoverd' is pretty old and hard to
 change).

  discoverd does not enroll nodes. While possible, I'm somewhat resistant to
  making it do enrollment, mostly because I want it to be a user-controlled
  process.

 
  What I understand from the below mentioned spec is that the Node is
 registered, but the spec will help ironic discover other properties of the
 node.
 that's what discoverd does currently.

 
  -Om
 
  -Original Message-
  From: Dmitry Tantsur [mailto:dtant...@redhat.com]
  Sent: 07 January 2015 20:20
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update
 
  On 01/07/2015 03:44 PM, Matt Keenan wrote:
  On 01/07/15 14:24, Kumar, Om (Cloud OS RD) wrote:
  If it's a separate project, can it be extended to perform out of band
  discovery too..? That way there will be a single service to perform
  in-band as well as out of band discoveries.. May be it could follow
  driver framework for discovering nodes, where one driver could be
  native (in-band) and other could be iLO specific etc...
 
 
  I believe the following spec outlines plans for out-of-band discovery:
  https://review.openstack.org/#/c/100951/
  Right, so Ironic will have drivers, one of which (I hope) will be a
 driver for discoverd.
 
 
  No idea what the progress is with regard to implementation within the
  Kilo cycle though.
  For now we hope to get it merged in K.
 
 
  cheers
 
  Matt
 
  Just a thought.
 
  -Om
 
  -Original Message-
  From: Dmitry Tantsur [mailto:dtant...@redhat.com]
  Sent: 07 January 2015 14:34
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update
 
  On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:
  So is it possible to just integrate this project into ironic? I mean
  when you create an ironic node, it will start discover in the
  background. So we don't need two services?
  Well, the decision on the summit was that it's better to keep it
  separate. Please see https://review.openstack.org/#/c/135605/ for
  details on future interaction between discoverd and Ironic.
 
  Just a thought, thanks.
 
  BR
  Zhou Zhenzan
 
  -Original Message-
  From: Dmitry Tantsur [mailto:dtant...@redhat.com]
  Sent: Monday, January 5, 2015 4:49 PM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update
 
  On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:
  Hi, Dmitry
 
  I think this is a good project.
  I got one question: what is the relationship with
 ironic-python-agent?
  Thanks.
  Hi!
 
  No relationship right now, but I'm hoping to use IPA as a base for
  introspection ramdisk in the (near?) future.
 
  BR
  Zhou Zhenzan
 
  -Original Message-
  From: Dmitry Tantsur [mailto:dtant...@redhat.com]
  Sent: Thursday, December 11, 2014 10:35 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Ironic] ironic-discoverd status update
 
  Hi all!
 
  As you know I actively promote ironic-discoverd project [1] as one
  of the means to do hardware inspection for Ironic (see e.g. spec
  [2]), so I decided it's worth to give some updates to the community
  from time to time. This email is purely informative, you may safely
  skip it, if you're not interested.
 
  Background
  ==
 
  The discoverd project (I usually skip the ironic- part when
  talking about it) solves the problem of populating information
  about a node in Ironic database without help of any vendor-specific
  tool. This information usually includes Nova scheduling properties
  (CPU, RAM, disk
  size) and MAC's for ports.
 
  Introspection is done by booting a ramdisk on a node, collecting
  data there and posting it back to discoverd HTTP API. Thus actually
  discoverd consists of 2 components: the service [1] and the ramdisk
  [3]. The service handles 2 major tasks:
  * Processing data posted by the ramdisk, i.e. finding the node in
  Ironic database and updating node properties with new data.
  * Managing iptables so that the default 

Re: [openstack-dev] [Keystone][tc] Removal Plans for keystoneclient.middleware.auth_token

2015-01-08 Thread Morgan Fainberg
That was a copy paste error. The response was meant to be:

Yes, that is the issue, unbounded version on the stable branches. 

--Morgan

Sent via mobile

 On Jan 8, 2015, at 22:57, Morgan Fainberg morgan.fainb...@gmail.com wrote:
 
 
 
 On Jan 8, 2015, at 16:10, Sean Dague s...@dague.net wrote:
 
 On 01/08/2015 07:01 PM, Morgan Fainberg wrote:
 
 On Jan 8, 2015, at 3:56 PM, Sean Dague s...@dague.net wrote:
 
 On 01/08/2015 06:29 PM, Morgan Fainberg wrote:
 As of Juno all projects are using the new keystonemiddleware package for 
 auth_token middleware. Recently we’ve been running into issues with 
 maintenance of the now frozen (and deprecated) 
 keystoneclient.middleware.auth_token code. Ideally all deployments should 
 move over to the new package. In some cases this may or may not be as 
 feasible due to requirement changes when using the new middleware package 
 on particularly old deployments (Grizzly, Havana, etc).
 
 The Keystone team is looking for the best way to support our deployer 
 community. In a perfect world we would be able to convert icehouse 
 deployments to the new middleware package and instruct deployers to use 
 either an older keystoneclient or convert to keystonemiddleware if they 
 want the newest keystoneclient lib (regardless of their deployment 
 release). For releases older than Icehouse (EOLd) there is no way to 
 communicate in the repositories/tags a change to require 
 keystonemiddleware.
 
 There are 2 viable options to get to where we only have one version of 
 the keystonemiddleware to maintain (which for a number of reasons, 
 primarily relating to security concerns is important).
 
 1) Work to update Icehouse to include the keystonemiddleware package for 
 the next stable release. Sometime after this stable release remove the 
 auth_token (and other middlewares) from keystoneclient. The biggest 
 downside is this adds new dependencies in an old release, which is poor 
 for packaging and deployers (making sure paste-ini is updated etc).
 
 2) Plan to remove auth_token from keystoneclient once icehouse hits EOL. 
 This is a better experience for our deployer base, but does not solve the 
 issues around solid testing with the auth_token middleware from 
 keystoneclient (except for the stable-icehouse devstack-gate jobs).
 
 I am looking for insight, preferences, and other options from the 
 community and the TC. I will propose this topic for the next TC meeting 
 so that we can have a clear view on how to handle this in the most 
 appropriate way that imparts the best balance between maintainability, 
 security, and experience for the OpenStack providers, deployers, and 
 users.
 
 So, ignoring the code a bit for a second, what are the interfaces which
 are exposed that we're going to run into a breaking change here?
 
   -Sean
 
 
 There are some configuration options provided by auth_token middleware and 
 the paste-ini files load keystoneclient.middleware.auth_token to handle 
 validation of Tokens then converting the token data to an auth_context 
 passed down to the service.
 
 There are no external (should be no) interfaces beyond the __call__ of the 
 middleware and options themselves.
 
 Ok, so ... if this isn't out on the network, is the only reason this is
 an issue is that keystoneclient version is unbounded in the stable branches?
 
 That is a fair assessment of the situation. 
 
 --Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [devstack] Openstack installation issue.

2015-01-08 Thread liuxinguo
Hi Abhishek,

For the error in the first line:
“mkdir: cannot create directory `/logs': Permission denied”
and the error at the end:
“ln: failed to create symbolic link `/logs/screen/screen-key.log': No such file 
or directory”

The stack user does not have write permission on “/”, so it cannot create the 
directory `/logs'.

Please check the permissions.
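
Two usual ways to fix it (paths and variable names here are the common
devstack conventions, adjust to your setup):

    # either pre-create the directory with the right owner:
    sudo mkdir -p /logs
    sudo chown stack:stack /logs

    # or point devstack at a location the stack user can write, in local.conf:
    #   LOGFILE=/opt/stack/logs/stack.sh.log
    #   SCREEN_LOGDIR=/opt/stack/logs/screen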

liu

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Friday, January 9, 2015 15:26
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [devstack] Openstack installation issue.

Hi,

I'm trying to install OpenStack through devstack master on my Ubuntu 12.04 VM, 
but it is failing and generating the following error.

If anyone can help me resolve this issue, please do reply.

--
Thanks & Regards,
Abhishek
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [Heat] Precursor to Phase 1 Convergence

2015-01-08 Thread Huangtianhua


From: Angus Salkeld [mailto:asalk...@mirantis.com]
Sent: Friday, January 9, 2015 14:08
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence


On Fri, Jan 9, 2015 at 3:22 PM, Murugan, Visnusaran 
visnusaran.muru...@hp.com wrote:
Steve,

My reasoning to have a “--continue” like functionality was to run it as a 
periodic task and substitute continuous observer for now.

I am not in favor of the --continue as an API. I'd suggest responding to 
resource timeouts and if there is no response from the task, then re-start 
(continue)
the task.

-Angus


+1 Agree with Angus:)

“--continue” based command should work on realized vs. actual graph and issue a 
stack update.

I completely agree that user action should not be needed to realize a partially 
completed stack.

Your thoughts.

From: vishnu [mailto:ckmvis...@gmail.com]
Sent: Friday, January 9, 2015 10:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence

Steve,

Auto recovery is the plan. Engine failure should be detected by way of 
heartbeat or recover partially realised stack on engine startup in case of a 
single engine scenario.

--continue command was just a additional helper api.











Visnusaran Murugan
about.me/ckmvishnu









On Thu, Jan 8, 2015 at 11:29 PM, Steven Hardy 
sha...@redhat.com wrote:
On Thu, Jan 08, 2015 at 09:53:02PM +0530, vishnu wrote:
Hi Zane,
I was wondering if we could push changes relating to backup stack removal
and to not load resources as part of stack. There needs to be a capability
to restart jobs left over by dead engines.
something like heat stack-operation --continue [git rebase --continue]

To me, it's pointless if the user has to restart the operation, they can do
that already, e.g by triggering a stack update after a failed stack create.

The process needs to be automatic IMO, if one engine dies, another engine
should detect that it needs to steal the lock or whatever and continue
whatever was in-progress.

Had a chat with shady regarding this. IMO this would be a valuable
enhancement. Notification based lead sharing can be taken up upon
completion.

I was referring to a capability for the service to transparently recover
if, for example, a heat-engine is restarted during a service upgrade.

Currently, users will be impacted in this situation, and making them
manually restart failed operations doesn't seem like a super-great solution
to me (like I said, they can already do that to some extent)

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence

2015-01-08 Thread Angus Salkeld
On Fri, Jan 9, 2015 at 3:22 PM, Murugan, Visnusaran 
visnusaran.muru...@hp.com wrote:

  Steve,



 My reasoning to have a “--continue” like functionality was to run it as a
 periodic task and substitute continuous observer for now.


I am not in favor of --continue as an API. I'd suggest responding to
resource timeouts and, if there is no response from the task, re-starting
(continuing)
the task.

-Angus



 “--continue” based command should work on realized vs. actual graph and
 issue a stack update.



 I completely agree that user action should not be needed to realize a
 partially completed stack.



 Your thoughts.



 *From:* vishnu [mailto:ckmvis...@gmail.com]
 *Sent:* Friday, January 9, 2015 10:08 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence



 Steve,



 Auto recovery is the plan. Engine failure should be detected by way of
 heartbeat or recover partially realised stack on engine startup in case of
 a single engine scenario.



 --continue command was just a additional helper api.














 *Visnusaran Murugan*

 about.me/ckmvishnu









 On Thu, Jan 8, 2015 at 11:29 PM, Steven Hardy sha...@redhat.com wrote:

 On Thu, Jan 08, 2015 at 09:53:02PM +0530, vishnu wrote:
 Hi Zane,
 I was wondering if we could push changes relating to backup stack
 removal
 and to not load resources as part of stack. There needs to be a
 capability
 to restart jobs left over by dead engines.A
 something like heat stack-operation --continue [git rebase --continue]

 To me, it's pointless if the user has to restart the operation, they can do
 that already, e.g. by triggering a stack update after a failed stack create.

 The process needs to be automatic IMO, if one engine dies, another engine
 should detect that it needs to steal the lock or whatever and continue
 whatever was in-progress.

 Had a chat with shady regarding this. IMO this would be a valuable
 enhancement. Notification based lead sharing can be taken up upon
 completion.

 I was referring to a capability for the service to transparently recover
 if, for example, a heat-engine is restarted during a service upgrade.

 Currently, users will be impacted in this situation, and making them
 manually restart failed operations doesn't seem like a super-great solution
 to me (like I said, they can already do that to some extent)

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][tc] Removal Plans for keystoneclient.middleware.auth_token

2015-01-08 Thread Morgan Fainberg
As of Juno all projects are using the new keystonemiddleware package for 
auth_token middleware. Recently we’ve been running into issues with maintenance 
of the now frozen (and deprecated) keystoneclient.middleware.auth_token code. 
Ideally all deployments should move over to the new package. In some cases this 
may or may not be as feasible due to requirement changes when using the new 
middleware package on particularly old deployments (Grizzly, Havana, etc).

The Keystone team is looking for the best way to support our deployer 
community. In a perfect world we would be able to convert icehouse deployments 
to the new middleware package and instruct deployers to use either an older 
keystoneclient or convert to keystonemiddleware if they want the newest 
keystoneclient lib (regardless of their deployment release). For releases older 
than Icehouse (EOLd) there is no way to communicate in the repositories/tags a 
change to require keystonemiddleware.

There are 2 viable options to get to where we only have one version of the 
keystonemiddleware to maintain (which for a number of reasons, primarily 
relating to security concerns is important).

1) Work to update Icehouse to include the keystonemiddleware package for the 
next stable release. Sometime after this stable release remove the auth_token 
(and other middlewares) from keystoneclient. The biggest downside is this adds 
new dependencies in an old release, which is poor for packaging and deployers 
(making sure paste-ini is updated, etc.; see the sketch below).

2) Plan to remove auth_token from keystoneclient once icehouse hits EOL. This 
is a better experience for our deployer base, but does not solve the issues 
around solid testing with the auth_token middleware from keystoneclient (except 
for the stable-icehouse devstack-gate jobs).
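
For reference, the deployer-visible part of the paste-ini change mentioned in
option 1 is essentially a one-line target swap plus the new dependency; a
hedged sketch (the exact section names in real paste-ini files vary per
service):

# Deprecated, frozen location of the middleware:
from keystoneclient.middleware import auth_token as old_auth_token

# New, maintained location:
from keystonemiddleware import auth_token

# In paste-ini terms, the filter target changes from
#   paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# to
#   paste.filter_factory = keystonemiddleware.auth_token:filter_factory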

I am looking for insight, preferences, and other options from the community and 
the TC. I will propose this topic for the next TC meeting so that we can have a 
clear view on how to handle this in the most appropriate way that imparts the 
best balance between maintainability, security, and experience for the 
OpenStack providers, deployers, and users.

Cheers,
Morgan Fainberg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][tc] Removal Plans for keystoneclient.middleware.auth_token

2015-01-08 Thread Sean Dague
On 01/08/2015 07:01 PM, Morgan Fainberg wrote:
 
 On Jan 8, 2015, at 3:56 PM, Sean Dague s...@dague.net wrote:

 On 01/08/2015 06:29 PM, Morgan Fainberg wrote:
 As of Juno all projects are using the new keystonemiddleware package for 
 auth_token middleware. Recently we’ve been running into issues with 
 maintenance of the now frozen (and deprecated) 
 keystoneclient.middleware.auth_token code. Ideally all deployments should 
 move over to the new package. In some cases this may or may not be as 
 feasible due to requirement changes when using the new middleware package 
 on particularly old deployments (Grizzly, Havana, etc).

 The Keystone team is looking for the best way to support our deployer 
 community. In a perfect world we would be able to convert icehouse 
 deployments to the new middleware package and instruct deployers to use 
 either an older keystoneclient or convert to keystonemiddleware if they 
 want the newest keystoneclient lib (regardless of their deployment 
 release). For releases older than Icehouse (EOLd) there is no way to 
 communicate in the repositories/tags a change to require keystonemiddleware.

 There are 2 viable options to get to where we only have one version of the 
 keystonemiddleware to maintain (which for a number of reasons, primarily 
 relating to security concerns is important).

 1) Work to update Icehouse to include the keystonemiddleware package for 
 the next stable release. Sometime after this stable release remove the 
 auth_token (and other middlewares) from keystoneclient. The biggest 
 downside is this adds new dependencies in an old release, which is poor for 
 packaging and deployers (making sure paste-ini is updated etc).

 2) Plan to remove auth_token from keystoneclient once icehouse hits EOL. 
 This is a better experience for our deployer base, but does not solve the 
 issues around solid testing with the auth_token middleware from 
 keystoneclient (except for the stable-icehouse devstack-gate jobs).

 I am looking for insight, preferences, and other options from the community 
 and the TC. I will propose this topic for the next TC meeting so that we 
 can have a clear view on how to handle this in the most appropriate way 
 that imparts the best balance between maintainability, security, and 
 experience for the OpenStack providers, deployers, and users.

 So, ignoring the code a bit for a second, what are the interfaces which
 are exposed that we're going to run into a breaking change here?

  -Sean

 
 
 There are some configuration options provided by the auth_token middleware, 
 and the paste-ini files load keystoneclient.middleware.auth_token to handle 
 validation of tokens and then convert the token data into an auth_context 
 passed down to the service.
 
 There are (or should be) no external interfaces beyond the __call__ of the 
 middleware and the options themselves.

Ok, so ... if this isn't out on the network, is the only reason this is
an issue that the keystoneclient version is unbounded in the stable branches?

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.utils 1.2.1 released

2015-01-08 Thread Ben Nemec
The Oslo team is pleased to announce the release of
oslo.utils 1.2.1: Oslo Utility library

This is a bugfix release to address a problem found in the 1.2.0 release.

For more details, please see the git log history below and
 http://launchpad.net/oslo/+milestone/1.2.1

Please report issues through launchpad:
 http://bugs.launchpad.net/oslo



Changes in openstack/oslo.utils  1.2.0..1.2.1

208988b Return LOCALHOST if no default interface

  diffstat (except docs and test files):

 oslo_utils/netutils.py| 2 ++
 2 files changed, 11 insertions(+)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Driver modes, share-servers, and clustered backends

2015-01-08 Thread Ben Swartzlander
There has been some confusion on the topic of driver modes and 
share-server, especially as they related to storage controllers with 
multiple physical nodes, so I will try to clear up the confusion as much 
as I can.


Manila has had the concept of share-servers since late icehouse. This 
feature was added to solve 3 problems:
1) Multiple drivers were creating storage VMs / service VMs as a side 
effect of share creation and Manila didn't offer any way to manage or 
even know about these VMs that were created.

2) Drivers needed a way to keep track of (persist) what VMs they had created
3) We wanted to standardize across drivers what these VMs looked like to 
Manila so that the scheduler and share-manager could know about them


It's important to recognize that from Manila's perspective, a share-server 
is just a container for shares that is tied to a share network and has some 
network allocations. It's also important to know that each share-server can 
have zero, one, or multiple IP addresses and can exist on an arbitrarily 
large number of physical nodes, and the actual form that a share-server 
takes is completely undefined.


During Juno, drivers that didn't explicitly support the concept of 
share-servers basically got a dummy share-server created, which acted as 
a giant container for all the shares that backend created. This worked 
okay, but it was informal and not documented, and it made some of the 
things we want to do in Kilo impossible.


To solve the above problem I proposed driver modes. Initially I proposed 
3 modes:

1) single_svm
2) flat_multi_svm
3) managed_multi_svm

Mode (1) was supposed to correspond to drivers that didn't deal with 
share servers, and modes (2) and (3) were for drivers that did deal with 
share servers, where the difference between those two modes came down to 
networking details. We realized that (2) can be implemented as a special 
case of (3), so we collapsed the modes down to two, and that's what's merged 
upstream now.


The specific names we settled on (single_svm and multi_svm) were perhaps 
poorly chosen, because svm is not a term we've used officially (unofficially 
we do talk about storage VMs and service VMs, and the svm term captured both 
concepts nicely). As some have pointed out, even multi and single aren't 
completely accurate terms, because what we mean when we say single_svm is 
that the driver doesn't create/destroy share servers; it uses something 
created externally.
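
A hypothetical sketch (not Manila's actual driver interface; the class,
attribute, and method names are made up for illustration) of the distinction
the merged modes are meant to capture:

class ExternallyProvisionedDriver(object):
    """'single_svm'-style: never creates or destroys share servers itself."""
    manages_share_servers = False

    def create_share(self, context, share, share_server=None):
        # share_server is ignored; the backend was provisioned outside Manila.
        pass


class ShareServerManagingDriver(object):
    """'multi_svm'-style: creates a share server per share network."""
    manages_share_servers = True

    def setup_server(self, network_info):
        # Create whatever form of share server this backend uses
        # (VM, vserver, container, ...) on the given share network.
        pass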


So one thing I want everyone to understand is that you can have a 
single_svm driver which is implemented by a large cluster of storage 
controllers, and you can have a multi_svm driver which is implemented by 
a single box with some form of network and service virtualization. The 
two concepts are orthogonal.


The other thing we need to decide (hopefully at our upcoming Jan 15 
meeting) is whether to change the mode names and, if so, what to change 
them to. I've created the following etherpad with all of the suggestions 
I've heard so far and my feedback on each:

https://etherpad.openstack.org/p/manila-driver-modes-discussion

-Ben Swartzlander


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.serialization 1.2.0 released

2015-01-08 Thread Doug Hellmann
The Oslo team is pleased to announce the release of
oslo.serialization 1.2.0: oslo.serialization library

The primary reason for this release is to move the code
out of the oslo namespace package as part of
https://blueprints.launchpad.net/oslo-incubator/+spec/drop-namespace-packages
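
For consumers, the visible change is the import path; a small before/after
sketch (assuming both the old shim and the new package are installed):

# Old namespace-package import -- deprecated, expected to keep working for a
# while via a compatibility shim:
from oslo.serialization import jsonutils as old_jsonutils

# New import path after the move out of the namespace package:
from oslo_serialization import jsonutils

print(jsonutils.dumps({'release': '1.2.0'}))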

For more details, please see the git log history below and
 http://launchpad.net/oslo.serialization/+milestone/1.2.0

Please report issues through launchpad:
 http://bugs.launchpad.net/oslo.serialization



Changes in /home/dhellmann/repos/openstack/oslo.serialization  1.1.0..1.2.0

e8deb08 Move files out of the namespace package
437eaf8 Activate pep8 check that _ is imported
683920d Updated from global requirements
93f8876 Workflow documentation is now in infra-manual

  diffstat (except docs and test files):

 CONTRIBUTING.rst   |   7 +-
 oslo/serialization/__init__.py |  26 +++
 oslo/serialization/jsonutils.py| 224 +--
 oslo_serialization/__init__.py |   0
 oslo_serialization/jsonutils.py| 235 
 requirements.txt   |   2 +-
 setup.cfg  |   1 +
 tests/test_warning.py  |  61 +++
 tox.ini|   1 -
 12 files changed, 604 insertions(+), 230 deletions(-)

  Requirements updates:

 diff --git a/requirements.txt b/requirements.txt
 index 176ce3c..d840b54 100644
 --- a/requirements.txt
 +++ b/requirements.txt
 @@ -11 +11 @@ iso8601>=0.1.9
 -oslo.utils>=1.0.0   # Apache-2.0
 +oslo.utils>=1.1.0   # Apache-2.0

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] packaging problem production build question

2015-01-08 Thread David Lyle
Bower is not for use in production environments. There will continue to be
two environment setup procedures, as there are today. For production,
deploy Horizon and its dependencies via system packages. For development
and testing leverage bower to pull the javascript resources, much as pip is
used today and continue to use pip for python dependencies.

For those running CI environments, remote access will likely be required
for bower to work. Although, it seems something like private-bower [1]
could be utilized to leverage a local mirror where access or network
performance are issues.

David

[1] https://www.npmjs.com/package/private-bower


On Thu, Jan 8, 2015 at 2:28 PM, Matthew Farina m...@mattfarina.com wrote:

 I've been going over the packaging problem in an effort to see how we can
 move to something better. Given the current proposal around bower I'm still
 left with a production deployment question.

 For a build environment sitting in isolation, unable to download from the
 Internet including Github, how would they be able to get all the bower
 controlled packages to create a system horizon package (e.g., rpm or deb)?

 These build environments currently use mirrors and controlled packages.
 For example, someone might have a pypi mirror with copies of the xstatic
 packages. This is tightly controlled. If bower is managing packages where,
 in theory, would it get them from for an environment like this?

 I may have missed something. If this has already been answered please
 excuse me and point me in the right direction.

 Thanks,
 Matt

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The state of nova-network to neutron migration

2015-01-08 Thread Maru Newby

 On Jan 8, 2015, at 3:54 PM, Sean Dague s...@dague.net wrote:
 
 On 01/08/2015 06:41 PM, Maru Newby wrote:
 As per a recent exchange on #openstack-neutron, I’ve been asked to present 
 my views on this effort.  What follows is in no way intended to detract from 
 the hard work and dedication of those undertaking it, but I think that their 
 energy could be better spent.
 
 At nova’s juno mid-cycle in July, there was a discussion about deprecating 
 nova-network.  Most of the work-items on the TC’s gap analysis [1] had been 
 covered off, with one notable exception: Gap 6, the requirement to provide a 
 migration plan between nova-network and neutron, had stalled over questions 
 of implementation strategy.
 
 In my recollection of the conversation that followed, broad consensus was 
 reached that the costs of automating a reliable and fault-tolerant migration 
 strategy would be  considerable.  The technical complexity of targeting a 
 fixed deployment scenario would be challenging enough, and targeting 
 heterogenous scenarios would magnify that complexity many-fold.  Given the 
 cost and high risks associated with implementing an automated solution, 
 everyone seemed to agree that it was not worth pursuing.  Our understanding 
 was that not pursuing an automated solution could still be in keeping with 
 the TC’s requirements for deprecation, which required that a migration plan 
 be formulated but not that it be automated.  Documentation was deemed 
 sufficient, and that was to be the path forward in covering Gap 6.  The 
 documentation would allow deployers and operators to devise migration 
 strategies to suit their individual requirements.
 
 Then, when the Kilo summit schedule was announced, there was a session 
 scheduled in the nova track for discussing how to implement an automated 
 migration.  I only managed to catch the tail end of the session, but the 
 etherpad [2] makes no mention of the rationale for requiring an automated 
 migration in the first place.  It was like the discussion at the mid-cycle, 
 and all the talk of the risks outweighing the potential benefits of such an 
 effort, had simply not occurred.
 
 So, in the interests of a full and open discussion on this matter, can 
 someone please explain to me why the risks discussed at the mid-cycle were 
 suddenly deemed justifiable, seemingly against all technical rationale?  
 Criticism has been leveled at the neutron project for our supposed inaction 
 in implementing an automated solution, and I don’t think I’m the only one 
 who is concerned that this is an unreasonable requirement imposed without 
 due consideration to the risks involved.  Yes, most of us want to see 
 nova-network deprecated, but why is the lack of migration automation 
 blocking that?  An automated migration was not a requirement in the TC’s 
 original assessment of the preconditions for deprecation.  I think that if 
 neutron is deemed to be of sufficiently high quality that it can serve as an 
 effective replacement for nova-network, and we can document a migration plan 
 between them, then deprecation should proceed.
 
 
 Maru
 
 The crux of it comes from the fact that the operator voice (especially
 those folks with large nova-network deploys) wasn't represented there.
 Once we got back from the mid-cycle and brought it to the list, there
 was some very understandable push back on deprecating without a
 migration plan.

I think it’s clear that a migration plan is required.  An automated migration, 
not so much.

 
 I believe we landed at the need for the common case, flat multi host
 networking, to be migrated to something equivalent in neutron land
 (dvr?). And it needs to be something that Metacloud and CERN can get
 behind, as they represent 2 very large nova-network deploys (and have
 reasonably well defined down time allowances for this).
 
 This doesn't have to be automation for all cases, but we need to support
 a happy path from one to the other that's repeatable, reasonably
 automatic (as much as possible), and provides minimum down time for
 guests running on the environment.

The fact that operators running nova-network would like the upstream community 
to pay for implementing an automated migration solution for them is hardly 
surprising.  It is less clear to me that implementing such a solution, with all 
the attendant cost and risks, should take priority over efforts that benefit a 
broader swath of the community.  Are the operators in question so strapped for 
resources that they are not able to automate their migrations themselves, 
provided a sufficiently detailed plan to do so?


Maru  





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Mid-Cycle Meetup Planning

2015-01-08 Thread Adrian Otto
Team,

If you have been watching the Magnum project you know that things have really 
taken off recently. At Paris we did not contemplate a mid-cycle meet-up, but now 
that we have come this far so quickly and have such a broad base of 
participation, it makes sense to ask whether you would like to attend a 
face-to-face mid-cycle meetup. I propose the following for your consideration:

- Two full days to allow for discussion of Magnum architecture, and 
implementation of our use cases.
- Located in San Francisco.
- Open to using Los Angeles or another west coast city to drive down travel 
expenses, if that is a concern that may materially impact participation.
- Dates: February 23+24 or 25+26

If you think you can attend (with 80+% certainty) please indicate your 
availability on the proposed dates using this poll:

http://doodle.com/ddgsptuex5u3394m

Please also add a comment on the Doodle Poll indicating what Country/US City 
you will be traveling FROM in order to attend.

I will tabulate the responses, and follow up to this thread. Feel free to 
respond to this thread to discuss your thoughts about if we should meet, or if 
there are other locations or times that we should consider.

Thanks,

Adrian

PS: I do recognize that some of our contributors reside in countries that 
require Visas to travel to the US, and those take a long time to acquire. The 
reverse is also true. For those of you who can not attend in person, we will 
explore options for remote participation using teleconferencing technology, 
IRC, Etherpad, etc for limited portions of the agenda.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] packaging problem production build question

2015-01-08 Thread Jeremy Stanley
On 2015-01-08 15:11:24 -0700 (-0700), David Lyle wrote:
[...]
 For those running CI environments, remote access will likely be
 required for bower to work. Although, it seems something like
 private-bower [1] could be utilized to leverage a local mirror
 where access or network performance are issues.
[...]

There's a very good chance we'll want to do something similar for
the official OpenStack CI jobs as well. We already go to extreme
lengths to pre-cache and locally mirror things which software would
otherwise try to retrieve from random parts of the Internet during
setup for tests. If your software retrieves files from 10 random
places over the network, the chances of your job failing because of
one of them being offline is multiplied by 10. As that number grows,
so grows your lack of testability.
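
As a back-of-the-envelope illustration of that multiplication (assuming ten
independent sources that are each 99% available):

p_source_down = 0.01
n_sources = 10
p_job_fails = 1 - (1 - p_source_down) ** n_sources
print(round(p_job_fails, 3))  # ~0.096, nearly ten times the single-source rate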
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-08 Thread Richard Jones
Thanks, Radomir. How much detail from this discussion should be captured in
the blueprint? I'm afraid I'm more familiar with the Python PEP process.

On Thu Jan 08 2015 at 11:38:57 PM Radomir Dopieralski 
openst...@sheep.art.pl wrote:

 On 06/01/15 01:53, Richard Jones wrote:
  I think the only outstanding question is how developers and
  non-packagers populate the bower_components directory - that is, how is
  bower expected to be available for them?
 
  I think following the Storyboard approach is a good idea: isolate a
  known-working node/bower environment local to horizon which is managed
  by tox - so to invoke bower you run tox -e bower command. No worries
  about system installation or compatibility, and works in the gate.
 
  Horizon installation (whenever a pip install would be invoked) would
  then also have a tox -e bower install invocation.
 
  Storyboard[1] uses a thing called nodeenv[2] which is installed through
  pip / requirements.txt to control the node environment. It then has
  bower commands in tox.ini[3] (though I'd just have a single bower
 environment to implement the tox command I suggest above).
 
 
   Richard
 
  [1] https://wiki.openstack.org/wiki/StoryBoard
  [2] https://pypi.python.org/pypi/nodeenv
  [3] https://git.openstack.org/cgit/openstack-infra/storyboard-webclient/tree/tox.ini
 

 I created a blueprint for this.
 https://blueprints.launchpad.net/horizon/+spec/static-file-bower
 --
 Radomir Dopieralski


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] packaging problem production build question

2015-01-08 Thread Matthew Farina
Thanks for humoring me as I ask these questions. I'm just trying to connect
the dots.

How would system packages work in practice? For example, when it comes to
Ubuntu lucid (10.04 LTS) there is no system package meeting the jQuery
requirement, and for precise (12.04 LTS) you need precise-backports. This is
for the most popular JavaScript library. There is only an Angular package
for trusty (14.04 LTS), and its version is older than the Horizon minimum.

private-bower would be a nice way to have a private registry. But, bower
packages aren't packages in the same sense as system or pypi packages. If I
understand it correctly, when bower downloads something it doesn't get it
from the registry (bower.io or private-bower). Instead it goes to the
source (e.g., Github) to download the code. private-bower isn't a package
mirror but instead a private registry (of location). How could
private-bower be used to negate network effects if you still need to go out
to the Internet to get the packages?


On Thu, Jan 8, 2015 at 5:11 PM, David Lyle dkly...@gmail.com wrote:

 Bower is not for use in production environments. There will continue to be
 two environment setup procedures, as there are today. For production,
 deploy Horizon and its dependencies via system packages. For development
 and testing leverage bower to pull the javascript resources, much as pip is
 used today and continue to use pip for python dependencies.

 For those running CI environments, remote access will likely be required
 for bower to work. Although, it seems something like private-bower [1]
 could be utilized to leverage a local mirror where access or network
 performance are issues.

 David

 [1] https://www.npmjs.com/package/private-bower


 On Thu, Jan 8, 2015 at 2:28 PM, Matthew Farina m...@mattfarina.com
 wrote:

 I've been going over the packaging problem in an effort to see how we can
 move to something better. Given the current proposal around bower I'm still
 left with a production deployment question.

 For a build environment sitting in isolation, unable to download from the
 Internet including Github, how would they be able to get all the bower
 controlled packages to create a system horizon package (e.g., rpm or deb)?

 These build environments currently use mirrors and controlled packages.
 For example, someone might have a pypi mirror with copies of the xstatic
 packages. This is tightly controlled. If bower is managing packages where,
 in theory, would it get them from for an environment like this?

 I may have missed something. If this has already been answered please
 excuse me and point me in the right direction.

 Thanks,
 Matt

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without stopping services

2015-01-08 Thread Carl Baldwin
I added a link to @Jack's ML post to the bug report [1].  I am
willing to support @Itsuro with reviews of the implementation, and am
willing to consult if needed; feel free to ping me.

Carl

[1] https://bugs.launchpad.net/neutron/+bug/1408488

On Thu, Jan 8, 2015 at 7:49 AM, McCann, Jack jack.mcc...@hp.com wrote:
 +1 on need for this feature

 The way I've thought about this is we need a mode that stops the *automatic*
 scheduling of routers/dhcp-servers to specific hosts/agents, while allowing
 manual assignment of routers/dhcp-servers to those hosts/agents, and where
 any existing routers/dhcp-servers on those hosts continue to operate as 
 normal.

 The maintenance use case was mentioned: I want to evacuate 
 routers/dhcp-servers
 from a host before taking it down, and having the scheduler add new 
 routers/dhcp
 while I'm evacuating the node is a) an annoyance, and b) causes a service blip
 when I have to right away move that new router/dhcp to another host.

 The other use case is adding a new host/agent into an existing environment.
 I want to be able to bring the new host/agent up and into the neutron config, 
 but
 I don't want any of my customers' routers/dhcp-servers scheduled there until 
 I've
 had a chance to assign some test routers/dhcp-servers and make sure the new 
 server
 is properly configured and fully operational.

 - Jack
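
For what it's worth, the manual-assignment half of this already exists in
python-neutronclient; a hedged sketch of draining routers from one L3 agent to
another (the agent IDs and credentials are placeholders, and the missing piece
this thread asks for is a supported way to stop the scheduler from assigning
new routers to the drained agent while this runs):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

src_agent_id = 'AGENT-BEING-DRAINED'   # placeholder
dst_agent_id = 'HEALTHY-AGENT'         # placeholder

for router in neutron.list_routers_on_l3_agent(src_agent_id)['routers']:
    neutron.remove_router_from_l3_agent(src_agent_id, router['id'])
    neutron.add_router_to_l3_agent(dst_agent_id, {'router_id': router['id']})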

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-08 Thread Kevin Benton
Thanks for the insight.

On Thu, Jan 8, 2015 at 3:41 AM, Miguel Ángel Ajo majop...@redhat.com
wrote:

 Correct, that’s the problem, what Kevin said should be the ideal case, but
 distros have
 proven to fail satisfying this kind of requirements earlier.

 So at least a warning to the user may be good to have IMHO.

 Miguel Ángel Ajo

 On Thursday, 8 de January de 2015 at 12:36, Ihar Hrachyshka wrote:

  The problem is probably due to the fact that some operators may run
 neutron from git and manage their dependencies in some other way; or
 distributions may suck sometimes, so packagers may miss the release note
 and fail to upgrade dnsmasq; or distributions may have their specific
 concerns on upgrading dnsmasq version, and would just backport the needed
 fix to their 'claimed to 2.66' dnsmasq (common story in Red Hat world).

 On 01/08/2015 05:25 AM, Kevin Benton wrote:

 If the new requirement is expressed in the neutron packages for the
 distro, wouldn't it be transparent to the operators?

 On Wed, Jan 7, 2015 at 6:57 AM, Kyle Mestery mest...@mestery.com wrote:

  On Wed, Jan 7, 2015 at 8:21 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:

 Hi all,

 I've found out that dnsmasq < 2.67 does not work properly for IPv6 clients
 when it comes to MAC address matching (it fails to match, and so clients
 get 'no addresses available' response). I've requested version bump to 2.67
 in: https://review.openstack.org/145482

  Good catch, thanks for finding this Ihar!


 Now, since we've already released Juno with IPv6 DHCP stateful support,
 and DHCP agent still has minimal version set to 2.63 there, we have a
 dilemma on how to manage it from stable perspective.

 Obviously, we should communicate the revealed version dependency to
 deployers via next release notes.

 Should we also backport the minimal version bump to Juno? This will result
 in DHCP agent failing to start in case packagers don't bump dnsmasq version
 with the next Juno release. If we don't bump the version, we may leave
 deployers uninformed about the fact that their IPv6 stateful instances
 won't get any IPv6 address assigned.

 An alternative is to add a special check just for Juno that would WARN
 administrators instead of failing to start DHCP agent.

 Comments?

  Personally, I think the WARN may be the best route to go. Backporting a
 change which bumps the required dnsmasq version seems like it may be harder
 for operators to handle.

 Kyle


 /Ihar

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  --
  Kevin Benton


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-08 Thread Miguel Ángel Ajo
Now that I re-read the patch: shouldn't the version checking be converted into 
a sanity check?

Miguel Ángel Ajo


On Thursday, 8 de January de 2015 at 12:51, Kevin Benton wrote:

 Thanks for the insight.
  
 On Thu, Jan 8, 2015 at 3:41 AM, Miguel Ángel Ajo majop...@redhat.com 
 (mailto:majop...@redhat.com) wrote:
  Correct, that’s the problem, what Kevin said should be the ideal case, but 
  distros have
  proven to fail satisfying this kind of requirements earlier.
   
  So at least a warning to the user may be good to have IMHO.  
   
  Miguel Ángel Ajo
   
   
  On Thursday, 8 de January de 2015 at 12:36, Ihar Hrachyshka wrote:
   
   The problem is probably due to the fact that some operators may run 
   neutron from git and manage their dependencies in some other way; or 
   distributions may suck sometimes, so packagers may miss the release note 
   and fail to upgrade dnsmasq; or distributions may have their specific 
   concerns on upgrading dnsmasq version, and would just backport the needed 
   fix to their 'claimed to 2.66' dnsmasq (common story in Red Hat world).

   On 01/08/2015 05:25 AM, Kevin Benton wrote:
If the new requirement is expressed in the neutron packages for the 
distro, wouldn't it be transparent to the operators?  
 
On Wed, Jan 7, 2015 at 6:57 AM, Kyle Mestery mest...@mestery.com 
(mailto:mest...@mestery.com) wrote:
 On Wed, Jan 7, 2015 at 8:21 AM, Ihar Hrachyshka ihrac...@redhat.com 
 (mailto:ihrac...@redhat.com) wrote:
  Hi all,
   
   I've found out that dnsmasq < 2.67 does not work properly for IPv6 
  clients when it comes to MAC address matching (it fails to match, 
  and so clients get 'no addresses available' response). I've 
  requested version bump to 2.67 in: 
  https://review.openstack.org/145482
   
 Good catch, thanks for finding this Ihar!
   
  Now, since we've already released Juno with IPv6 DHCP stateful 
  support, and DHCP agent still has minimal version set to 2.63 
  there, we have a dilemma on how to manage it from stable 
  perspective.
   
  Obviously, we should communicate the revealed version dependency to 
  deployers via next release notes.
   
  Should we also backport the minimal version bump to Juno? This will 
  result in DHCP agent failing to start in case packagers don't bump 
  dnsmasq version with the next Juno release. If we don't bump the 
  version, we may leave deployers uninformed about the fact that 
  their IPv6 stateful instances won't get any IPv6 address assigned.
   
  An alternative is to add a special check just for Juno that would 
  WARN administrators instead of failing to start DHCP agent.
   
  Comments?
   
 Personally, I think the WARN may be the best route to go. Backporting 
 a change which bumps the required dnsmasq version seems like it may 
 be harder for operators to handle.
  
 Kyle
   
  /Ihar
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org 
  (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org 
 (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 
--  
Kevin Benton  
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org 
   (mailto:OpenStack-dev@lists.openstack.org)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



   
   
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
  
 --  
 Kevin Benton  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-08 Thread Ihar Hrachyshka

I agree, that is one thing we should not check at runtime.

In an ideal world, it wouldn't even check the version number, but 
capabilities. It's not clear though whether we can be any smarter than that 
(we would need to run dnsmasq and real DHCP clients to check actual 
capabilities).


/Ihar

On 01/08/2015 12:55 PM, Miguel Ángel Ajo wrote:

Now that I re-read the patch.
Shouldn't the version checking  need to be converted into a sanity check?

Miguel Ángel Ajo

On Thursday, 8 de January de 2015 at 12:51, Kevin Benton wrote:


Thanks for the insight.

On Thu, Jan 8, 2015 at 3:41 AM, Miguel Ángel Ajo majop...@redhat.com 
mailto:majop...@redhat.com wrote:
Correct, that’s the problem, what Kevin said should be the ideal 
case, but distros have

proven to fail satisfying this kind of requirements earlier.

So at least a warning to the user may be good to have IMHO.

Miguel Ángel Ajo

On Thursday, 8 de January de 2015 at 12:36, Ihar Hrachyshka wrote:

The problem is probably due to the fact that some operators may run 
neutron from git and manage their dependencies in some other way; 
or distributions may suck sometimes, so packagers may miss the 
release note and fail to upgrade dnsmasq; or distributions may have 
their specific concerns on upgrading dnsmasq version, and would 
just backport the needed fix to their 'claimed to 2.66' dnsmasq 
(common story in Red Hat world).


On 01/08/2015 05:25 AM, Kevin Benton wrote:
If the new requirement is expressed in the neutron packages for 
the distro, wouldn't it be transparent to the operators?


On Wed, Jan 7, 2015 at 6:57 AM, Kyle Mestery mest...@mestery.com 
mailto:mest...@mestery.com wrote:
On Wed, Jan 7, 2015 at 8:21 AM, Ihar Hrachyshka 
ihrac...@redhat.com mailto:ihrac...@redhat.com wrote:

Hi all,

 I've found out that dnsmasq < 2.67 does not work properly for 
IPv6 clients when it comes to MAC address matching (it fails to 
match, and so clients get 'no addresses available' response). 
I've requested version bump to 2.67 in: 
https://review.openstack.org/145482



Good catch, thanks for finding this Ihar!

Now, since we've already released Juno with IPv6 DHCP stateful 
support, and DHCP agent still has minimal version set to 2.63 
there, we have a dilemma on how to manage it from stable 
perspective.


Obviously, we should communicate the revealed version dependency 
to deployers via next release notes.


Should we also backport the minimal version bump to Juno? This 
will result in DHCP agent failing to start in case packagers 
don't bump dnsmasq version with the next Juno release. If we 
don't bump the version, we may leave deployers uninformed about 
the fact that their IPv6 stateful instances won't get any IPv6 
address assigned.


An alternative is to add a special check just for Juno that 
would WARN administrators instead of failing to start DHCP agent.


Comments?

Personally, I think the WARN may be the best route to go. 
Backporting a change which bumps the required dnsmasq version 
seems like it may be harder for operators to handle.


Kyle


/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 
mailto:OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 
mailto:OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Kevin Benton


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org  mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 
mailto:OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 
mailto:OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 
mailto:OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-08 Thread Ihar Hrachyshka
The problem is probably due to the fact that some operators may run 
neutron from git and manage their dependencies in some other way; or 
distributions may suck sometimes, so packagers may miss the release note 
and fail to upgrade dnsmasq; or distributions may have their specific 
concerns about upgrading the dnsmasq version, and would just backport the 
needed fix to their 'claimed-to-be 2.66' dnsmasq (a common story in the Red Hat 
world).


On 01/08/2015 05:25 AM, Kevin Benton wrote:
If the new requirement is expressed in the neutron packages for the 
distro, wouldn't it be transparent to the operators?


On Wed, Jan 7, 2015 at 6:57 AM, Kyle Mestery mest...@mestery.com 
mailto:mest...@mestery.com wrote:


On Wed, Jan 7, 2015 at 8:21 AM, Ihar Hrachyshka
ihrac...@redhat.com mailto:ihrac...@redhat.com wrote:

Hi all,

I've found out that dnsmasq < 2.67 does not work properly for
IPv6 clients when it comes to MAC address matching (it fails
to match, and so clients get 'no addresses available'
response). I've requested version bump to 2.67 in:
https://review.openstack.org/145482

Good catch, thanks for finding this Ihar!

Now, since we've already released Juno with IPv6 DHCP stateful
support, and DHCP agent still has minimal version set to 2.63
there, we have a dilemma on how to manage it from stable
perspective.

Obviously, we should communicate the revealed version
dependency to deployers via next release notes.

Should we also backport the minimal version bump to Juno? This
will result in DHCP agent failing to start in case packagers
don't bump dnsmasq version with the next Juno release. If we
don't bump the version, we may leave deployers uninformed
about the fact that their IPv6 stateful instances won't get
any IPv6 address assigned.

An alternative is to add a special check just for Juno that
would WARN administrators instead of failing to start DHCP agent.

Comments?

Personally, I think the WARN may be the best route to go.
Backporting a change which bumps the required dnsmasq version
seems like it may be harder for operators to handle.

Kyle

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kevin Benton


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Requirements for future OpenStack projects

2015-01-08 Thread Thierry Carrez
Hi everyone,

Following the adoption by the Technical Committee of the project
structure reform specification[1], I proposed a number of initial
changes[2] on the governance repository.

[1]
http://governance.openstack.org/resolutions/20141202-project-structure-reform-spec.html
[2]
https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:tag-template,n,z

I attract your attention of the 3rd change in this series[3], which
proposes the new, simpler, more objective requirements to place on
future entrants in the realm of OpenStack projects. Everyone has a
slightly different idea on the amount and nature of those, so feel free
to comment on this thread and/or the review itself.

[3] https://review.openstack.org/#/c/145740/

The current version proposed is a strawman based on the are you one of
us test that Monty originally proposed on [4] and the example list that
was included in the spec. I expect it to be heavily discussed and to
evolve a bit before it's actually voted on and adopted by the Technical
Committee.

[4] http://inaugust.com/post/108

So if you have a strong opinion on that, please join the discussion now :)

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-08 Thread Miguel Ángel Ajo
Correct, that’s the problem: what Kevin said would be the ideal case, but 
distros have proven to fail to satisfy this kind of requirement before.

So at least a warning to the user may be good to have, IMHO.

Miguel Ángel Ajo


On Thursday, 8 de January de 2015 at 12:36, Ihar Hrachyshka wrote:

 The problem is probably due to the fact that some operators may run neutron 
 from git and manage their dependencies in some other way; or distributions 
 may suck sometimes, so packagers may miss the release note and fail to 
 upgrade dnsmasq; or distributions may have their specific concerns on 
 upgrading dnsmasq version, and would just backport the needed fix to their 
 'claimed to 2.66' dnsmasq (common story in Red Hat world).
  
 On 01/08/2015 05:25 AM, Kevin Benton wrote:
  If the new requirement is expressed in the neutron packages for the distro, 
  wouldn't it be transparent to the operators?  
   
  On Wed, Jan 7, 2015 at 6:57 AM, Kyle Mestery mest...@mestery.com 
  (mailto:mest...@mestery.com) wrote:
   On Wed, Jan 7, 2015 at 8:21 AM, Ihar Hrachyshka ihrac...@redhat.com 
   (mailto:ihrac...@redhat.com) wrote:
Hi all,
 
 I've found out that dnsmasq < 2.67 does not work properly for IPv6 
clients when it comes to MAC address matching (it fails to match, and 
so clients get 'no addresses available' response). I've requested 
version bump to 2.67 in: https://review.openstack.org/145482
 
   Good catch, thanks for finding this Ihar!
 
Now, since we've already released Juno with IPv6 DHCP stateful support, 
and DHCP agent still has minimal version set to 2.63 there, we have a 
dilemma on how to manage it from stable perspective.
 
Obviously, we should communicate the revealed version dependency to 
deployers via next release notes.
 
Should we also backport the minimal version bump to Juno? This will 
result in DHCP agent failing to start in case packagers don't bump 
dnsmasq version with the next Juno release. If we don't bump the 
version, we may leave deployers uninformed about the fact that their 
IPv6 stateful instances won't get any IPv6 address assigned.
 
An alternative is to add a special check just for Juno that would WARN 
administrators instead of failing to start DHCP agent.
 
Comments?
 
   Personally, I think the WARN may be the best route to go. Backporting a 
   change which bumps the required dnsmasq version seems like it may be 
   harder for operators to handle.

   Kyle
 
/Ihar
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 
(mailto:OpenStack-dev@lists.openstack.org)
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org 
   (mailto:OpenStack-dev@lists.openstack.org)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

   
   
   
  --  
  Kevin Benton  
   
  ___ OpenStack-dev mailing list 
  OpenStack-dev@lists.openstack.org 
  (mailto:OpenStack-dev@lists.openstack.org) 
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2015-01-08 Thread Anant Patil
On 08-Jan-15 16:09, Anant Patil wrote:
 On 16-Dec-14 09:41, Zane Bitter wrote:
 On 15/12/14 09:32, Anant Patil wrote:
 On 12-Dec-14 06:29, Zane Bitter wrote:
 On 11/12/14 01:14, Anant Patil wrote:
 On 04-Dec-14 10:49, Zane Bitter wrote:
 On 01/12/14 02:02, Anant Patil wrote:
 On GitHub: https://github.com/anantpatil/heat-convergence-poc

 I'm trying to review this code at the moment, and finding some stuff I
 don't understand:

 https://github.com/anantpatil/heat-convergence-poc/blob/master/heat/engine/stack.py#L911-L916

 This appears to loop through all of the resources *prior* to kicking off
 any actual updates to check if the resource will change. This is
 impossible to do in general, since a resource may obtain a property
 value from an attribute of another resource and there is no way to know
 whether an update to said other resource would cause a change in the
 attribute value.

 In addition, no attempt to catch UpdateReplace is made. Although that
 looks like a simple fix, I'm now worried about the level to which this
 code has been tested.

 We were working on new branch and as we discussed on Skype, we have
 handled all these cases. Please have a look at our current branch:
 https://github.com/anantpatil/heat-convergence-poc/tree/graph-version

 When a new resource is taken for convergence, its children are loaded
 and the resource definition is re-parsed. The frozen resource definition
 will have all the get_attr resolved.


 I'm also trying to wrap my head around how resources are cleaned up in
 dependency order. If I understand correctly, you store in the
 ResourceGraph table the dependencies between various resource names in
 the current template (presumably there could also be some left around
 from previous templates too?). For each resource name there may be a
 number of rows in the Resource table, each with an incrementing version.
 As far as I can tell though, there's nowhere that the dependency graph
 for _previous_ templates is persisted? So if the dependency order
 changes in the template we have no way of knowing the correct order to
 clean up in any more? (There's not even a mechanism to associate a
 resource version with a particular template, which might be one avenue
 by which to recover the dependencies.)

 I think this is an important case we need to be able to handle, so I
 added a scenario to my test framework to exercise it and discovered that
 my implementation was also buggy. Here's the fix:
 https://github.com/zaneb/heat-convergence-prototype/commit/786f367210ca0acf9eb22bea78fd9d51941b0e40


 Thanks for pointing this out, Zane. We too had a buggy implementation for
 handling inverted dependencies. I had a hard look at our algorithm, where
 we were continuously merging the edges from the new template into the edges
 from previous updates. It was an optimized way of traversing the graph
 in both forward and reverse order without missing any resources. But
 when the dependencies are inverted, this wouldn't work.

 We have changed our algorithm. The changes in edges are noted down in the
 DB; only the delta of edges from the previous template is calculated and
 kept. At any given point of time, the graph table has all the edges from the
 current template and the delta from previous templates. Each edge has a
 template ID associated with it.

 The thing is, the cleanup dependencies aren't really about the template.
 The real resources really depend on other real resources. You can't
 delete a Volume before its VolumeAttachment, not because it says so in
 the template but because it will fail if you try. The template can give
 us a rough guide in advance to what those dependencies will be, but if
 that's all we keep then we are discarding information.

 There may be multiple versions of a resource corresponding to one
 template version. Even worse, the actual dependencies of a resource
 change on a smaller time scale than an entire stack update (this is the
 reason the current implementation updates the template one resource at a
 time as we go).


 Absolutely! The edges from the template are kept only for reference
 purposes. When we have a resource in the new template, its template ID will
 also be marked as the current template. At any point in time, a realized
 resource will be from the current template, even if it was found in previous
 templates. The template ID moves for a resource if it is found.

 In theory (disclaimer: I didn't implement this yet) it can change on an 
 even smaller timescale than that. The existing plugins are something of 
 a black box to us: if a failure occurs we don't necessarily know whether 
 the real-world dependency is on the old or new version of another resource.

 
 Yes, and that's why we roll back the failed resource and its dependent
 resources to older versions, provided that the older resources are not
 deleted until the update is done. It is easier with the template ID, as we
 know the previous complete template.
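
As a toy illustration of that bookkeeping (not the PoC's actual schema; the
names are made up):

current_tmpl, previous_tmpl = 't2', 't1'

graph_edges = {
    # (resource, required_resource) -> template ID that last asserted the edge
    ('server', 'network'): current_tmpl,
    ('volume_attachment', 'volume'): previous_tmpl,  # delta kept for cleanup
}


def edges_from(template_id):
    return [edge for edge, tmpl in graph_edges.items() if tmpl == template_id]


print(edges_from(previous_tmpl))  # edges still needed to clean up old resources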
 
 Given that our Resource entries in the DB are in 1:1 

Re: [openstack-dev] [nova] deleting the pylint test job

2015-01-08 Thread Daniel P. Berrange
On Mon, Nov 24, 2014 at 12:52:07PM -0500, Sean Dague wrote:
 The pylint test job has been broken for weeks, and no one seemed to care.
 While waiting for other tests to return today I looked into it and
 figured out the fix.
 
 However, because of nova objects pylint is progressively less and less
 useful. So the fact that no one else looked at it means that people
 didn't seem to care that it was provably broken. I think it's better
 that we just delete the jobs and save a node on every nova patch instead.
 
 Project Config Proposed here - https://review.openstack.org/#/c/136846/
 
 If you -1 that you own fixing it, and making nova objects patches
 sensible in pylint.

With the test job dead and buried, I figure we might as well kill the
code in Nova git too, since the pylint stuff currently spews errors
when 'run_tests.sh' populates a new virtualenv, even if you never
actually try to run pylint. To that end:

  https://review.openstack.org/#/c/145762/
  https://review.openstack.org/#/c/145763/

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-08 Thread Salvatore Orlando
I think it should be possible to have a sanity check like the following:
< 2.63 - sorry, that's not going to work
>= 2.63, < 2.67 - it kind of works but ipv6 is going to be messed up
>= 2.67 - we're all right

The runtime check on the dhcp agent is a startup check. Personally I think
agents should run sanity checks at startup, but if there are concerns
about that then let's separate runtime operations and sanity checks. In
any case the logic for the dnsmasq version checks should not be duplicated
- so if it's moved into sanity checks, either the dhcp agent runs these
checks at startup, or they would not be run at all by the agent.
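
Concretely, the tiered check could look something like this (a standalone
sketch, not the agent's actual version check; how the installed version is
obtained and parsed here is just an assumption for illustration):

    import re
    import subprocess

    MINIMUM = (2, 63)     # below this the agent should refuse to start
    IPV6_SAFE = (2, 67)   # below this, warn that DHCPv6 stateful is broken

    def installed_dnsmasq_version():
        # Assumes 'dnsmasq --version' prints "Dnsmasq version X.Y ..."
        out = subprocess.check_output(['dnsmasq', '--version'])
        match = re.search(r'version (\d+)\.(\d+)', out.decode('utf-8'))
        if not match:
            raise RuntimeError('unable to parse dnsmasq version output')
        return tuple(int(part) for part in match.groups())

    def check_dnsmasq_version():
        version = installed_dnsmasq_version()
        if version < MINIMUM:
            raise SystemExit('dnsmasq %d.%d is too old; >= 2.63 is required'
                             % version)
        if version < IPV6_SAFE:
            print('WARNING: dnsmasq < 2.67 cannot match IPv6 clients by MAC '
                  'address, so DHCPv6 stateful addressing will not work')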

Salvatore

On 8 January 2015 at 13:17, Ihar Hrachyshka ihrac...@redhat.com wrote:

  I agree, that is one thing we should not check in runtime.

 In an ideal world, it wouldn't even check the version number, but capabilities.
 It's not clear though whether we can be any smarter than that (we would need
 to run dnsmasq and real dhcp clients to check actual capabilities).

 /Ihar


 On 01/08/2015 12:55 PM, Miguel Ángel Ajo wrote:

  Now that I re-read the patch.
 Shouldn't the version check be converted into a sanity check?

  Miguel Ángel Ajo

  On Thursday, 8 de January de 2015 at 12:51, Kevin Benton wrote:

   Thanks for the insight.

 On Thu, Jan 8, 2015 at 3:41 AM, Miguel Ángel Ajo majop...@redhat.com
 wrote:

  Correct, that’s the problem: what Kevin said should be the ideal case,
 but distros have
 proven to fail to satisfy this kind of requirement before.

  So at least a warning to the user may be good to have IMHO.

  Miguel Ángel Ajo

On Thursday, 8 de January de 2015 at 12:36, Ihar Hrachyshka wrote:

   The problem is probably due to the fact that some operators may run
 neutron from git and manage their dependencies in some other way; or
 distributions may suck sometimes, so packagers may miss the release note
 and fail to upgrade dnsmasq; or distributions may have their specific
 concerns about upgrading the dnsmasq version, and would just backport the needed
 fix to their 'claimed to be 2.66' dnsmasq (a common story in the Red Hat world).

 On 01/08/2015 05:25 AM, Kevin Benton wrote:

  If the new requirement is expressed in the neutron packages for the
 distro, wouldn't it be transparent to the operators?

 On Wed, Jan 7, 2015 at 6:57 AM, Kyle Mestery mest...@mestery.com wrote:

   On Wed, Jan 7, 2015 at 8:21 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:

 Hi all,

 I've found out that dnsmasq < 2.67 does not work properly for IPv6 clients
 when it comes to MAC address matching (it fails to match, and so clients
 get 'no addresses available' response). I've requested version bump to 2.67
 in: https://review.openstack.org/145482

   Good catch, thanks for finding this Ihar!


  Now, since we've already released Juno with IPv6 DHCP stateful support,
 and DHCP agent still has minimal version set to 2.63 there, we have a
 dilemma on how to manage it from stable perspective.

 Obviously, we should communicate the revealed version dependency to
 deployers via next release notes.

 Should we also backport the minimal version bump to Juno? This will result
 in DHCP agent failing to start in case packagers don't bump dnsmasq version
 with the next Juno release. If we don't bump the version, we may leave
 deployers uninformed about the fact that their IPv6 stateful instances
 won't get any IPv6 address assigned.

 An alternative is to add a special check just for Juno that would WARN
 administrators instead of failing to start DHCP agent.

 Comments?

   Personally, I think the WARN may be the best route to go. Backporting a
 change which bumps the required dnsmasq version seems like it may be harder
 for operators to handle.

 Kyle


  /Ihar

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  --
  Kevin Benton


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  --
  Kevin Benton
   ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-08 Thread Zhou, Zhenzan
Sounds like we could add something new to automate the enrollment of new 
nodes:-)
Collecting IPMI info into a csv file is still a trivial job...

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com] 
Sent: Thursday, January 8, 2015 5:19 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/08/2015 06:48 AM, Kumar, Om (Cloud OS RD) wrote:
 My understanding of discovery was to get all details for a node and then 
 register that node to ironic. i.e. Enrollment of the node to ironic. Pardon 
 me if it was out of line with your understanding of discovery.
That's why we agreed to use terms inspection/introspection :) sorry for not 
being consistent here (name 'discoverd' is pretty old and hard to change).

discoverd does not enroll nodes. While possible, I'm somewhat resistant to making 
it do enrollment, mostly because I want it to be a user-controlled process.


 What I understand from the below mentioned spec is that the Node is 
 registered, but the spec will help ironic discover other properties of the 
 node.
that's what discoverd does currently.


 -Om

 -Original Message-
 From: Dmitry Tantsur [mailto:dtant...@redhat.com]
 Sent: 07 January 2015 20:20
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

 On 01/07/2015 03:44 PM, Matt Keenan wrote:
 On 01/07/15 14:24, Kumar, Om (Cloud OS RD) wrote:
 If it's a separate project, can it be extended to perform out of band
 discovery too..? That way there will be a single service to perform
 in-band as well as out of band discoveries.. May be it could follow
 driver framework for discovering nodes, where one driver could be
 native (in-band) and other could be iLO specific etc...


 I believe the following spec outlines plans for out-of-band discovery:
 https://review.openstack.org/#/c/100951/
 Right, so Ironic will have drivers, one of which (I hope) will be a driver 
 for discoverd.


 No idea what the progress is with regard to implementation within the
 Kilo cycle though.
 For now we hope to get it merged in K.


 cheers

 Matt

 Just a thought.

 -Om

 -Original Message-
 From: Dmitry Tantsur [mailto:dtant...@redhat.com]
 Sent: 07 January 2015 14:34
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

 On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:
 So is it possible to just integrate this project into ironic? I mean
 when you create an ironic node, it will start discovery in the
 background. So we don't need two services?
 Well, the decision on the summit was that it's better to keep it
 separate. Please see https://review.openstack.org/#/c/135605/ for
 details on future interaction between discoverd and Ironic.

 Just a thought, thanks.

 BR
 Zhou Zhenzan

 -Original Message-
 From: Dmitry Tantsur [mailto:dtant...@redhat.com]
 Sent: Monday, January 5, 2015 4:49 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

 On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:
 Hi, Dmitry

 I think this is a good project.
 I got one question: what is the relationship with ironic-python-agent?
 Thanks.
 Hi!

 No relationship right now, but I'm hoping to use IPA as a base for
 introspection ramdisk in the (near?) future.

 BR
 Zhou Zhenzan

 -Original Message-
 From: Dmitry Tantsur [mailto:dtant...@redhat.com]
 Sent: Thursday, December 11, 2014 10:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Ironic] ironic-discoverd status update

 Hi all!

 As you know I actively promote ironic-discoverd project [1] as one
 of the means to do hardware inspection for Ironic (see e.g. spec
 [2]), so I decided it's worth to give some updates to the community
 from time to time. This email is purely informative, you may safely
 skip it, if you're not interested.

 Background
 ==

 The discoverd project (I usually skip the ironic- part when
 talking about it) solves the problem of populating information
 about a node in Ironic database without help of any vendor-specific
 tool. This information usually includes Nova scheduling properties
 (CPU, RAM, disk
 size) and MAC's for ports.

 Introspection is done by booting a ramdisk on a node, collecting
 data there and posting it back to discoverd HTTP API. Thus actually
 discoverd consists of 2 components: the service [1] and the ramdisk
 [3]. The service handles 2 major tasks:
 * Processing data posted by the ramdisk, i.e. finding the node in
 Ironic database and updating node properties with new data.
 * Managing iptables so that the default PXE environment for
 introspection does not interfere with Neutron
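
 Roughly, the ramdisk side of that flow amounts to collecting a few facts and
 POSTing them to the service (a sketch for illustration only - the endpoint
 path and payload keys here are assumptions, not the documented discoverd API):

    import requests

    def collect_facts():
        # In the real ramdisk this data comes from dmidecode, /proc, lspci, etc.
        return {
            'cpus': 8,
            'memory_mb': 16384,
            'local_gb': 100,
            'interfaces': {'eth0': {'mac': 'aa:bb:cc:dd:ee:ff',
                                    'ip': '192.0.2.10'}},
        }

    def post_to_discoverd(url='http://192.0.2.1:5050/v1/continue'):
        # discoverd then looks the node up in the Ironic database by these
        # attributes and updates the node's properties and ports.
        resp = requests.post(url, json=collect_facts())
        resp.raise_for_status()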

 The project was born from a series of patches to Ironic itself
 after we discovered that this change is going to be too intrusive.
 Discoverd was actively tested as part of Instack 

Re: [openstack-dev] [TripleO] default region name

2015-01-08 Thread Zhou, Zhenzan
Thank you, Derek.
So we could also change TripleO register-endpoint/setup-endpoint to use 
RegionOne.

BR
Zhou Zhenzan
-Original Message-
From: Derek Higgins [mailto:der...@redhat.com] 
Sent: Thursday, January 8, 2015 5:53 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] default region name

On 08/01/15 05:21, Zhou, Zhenzan wrote:
 Hi,
 
 Does anyone know why TripleO uses regionOne as default region name? A 
 comment in the code says it's the default keystone uses. 
 But I cannot find any regionOne in keystone code. Devstack uses RegionOne 
 by default and I do see lots of RegionOne in keystone code.

Looks like this has been changing in various places
https://bugs.launchpad.net/keystone/+bug/1252299

I guess the default the code is referring to is in keystoneclient
http://git.openstack.org/cgit/openstack/python-keystoneclient/tree/keystoneclient/v2_0/shell.py#n509



 
 stack@u140401:~/openstack/tripleo-incubator$ grep -rn regionOne * 
 scripts/register-endpoint:26:REGION=regionOne # NB: This is the default 
 keystone uses.
 scripts/register-endpoint:45:echo -r, --region  -- Override the 
 default region 'regionOne'.
 scripts/setup-endpoints:33:echo -r, --region-- Override 
 the default region 'regionOne'.
 scripts/setup-endpoints:68:REGION=regionOne #NB: This is the keystone 
 default.
 stack@u140401:~/openstack/tripleo-incubator$ grep -rn regionOne 
 ../tripleo-heat-templates/ 
 stack@u140401:~/openstack/tripleo-incubator$  grep -rn regionOne 
 ../tripleo-image-elements/ 
 ../tripleo-image-elements/elements/tempest/os-apply-config/opt/stack/t
 empest/etc/tempest.conf:10:region = regionOne 
 ../tripleo-image-elements/elements/neutron/os-apply-config/etc/neutron
 /metadata_agent.ini:3:auth_region = regionOne 
 stack@u140401:~/openstack/keystone$ grep -rn RegionOne * | wc -l
 130
 stack@u140401:~/openstack/keystone$ grep -rn regionOne * | wc -l
 0
 
 Another question is that TripleO doesn't export OS_REGION_NAME in 
 stackrc.  So when someone sources the devstack rc file to do something and then 
 sources the TripleO rc file again, OS_REGION_NAME will be the one set by 
 the devstack rc file.
 I know this may be strange but isn't it better to use the same default value?

We should probably add that to our various rc files; not having it there is 
probably the reason we used keystoneclient's default in the first place.

 
 Thanks a lot.
 
 BR
 Zhou Zhenzan
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-08 Thread Radomir Dopieralski
On 06/01/15 01:53, Richard Jones wrote:
 I think the only outstanding question is how developers and
 non-packagers populate the bower_components directory - that is, how is
 bower expected to be available for them?
 
 I think following the Storyboard approach is a good idea: isolate a
 known-working node/bower environment local to horizon which is managed
 by tox - so to invoke bower you run tox -e bower command. No worries
 about system installation or compatibility, and works in the gate.
 
 Horizon installation (whenever a pip install would be invoked) would
 then also have a tox -e bower install invocation.
 
 Storyboard[1] uses a thing called nodeenv[2] which is installed through
 pip / requirements.txt to control the node environment. It then has
 bower commands in tox.ini[3] (though I'd just have a single bower
 environment to implement the tox command I suggest above).
 
  
  Richard
 
 [1] https://wiki.openstack.org/wiki/StoryBoard
 [2] https://pypi.python.org/pypi/nodeenv
 [3] 
 https://git.openstack.org/cgit/openstack-infra/storyboard-webclient/tree/tox.ini
 

I created a blueprint for this.
https://blueprints.launchpad.net/horizon/+spec/static-file-bower
-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting Jan 8 1800 UTC

2015-01-08 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20150108T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Mirantis Openstack 5.1 environment issues

2015-01-08 Thread Pavankumar Kulkarni (CW)
Hi Fuel Team,

 Hope you are doing good.  We are facing issues in Mirantis Openstack 
5.1 environment as below;


1)  Verify network failed with the message Expected VLAN (not received) 
untagged at the interface Eth1 of controller and compute nodes.

In our set-up Eth1 is connected to the public network, which we disconnect 
from the public network while doing the deployment operation, as Fuel itself 
works as the DHCP server. We want to know whether this is a known issue in Fuel 
or a problem on our side, as we followed this prerequisite before running the 
verify network operation.


2)  The Eth1 interface in the Fuel UI shows as down even after connecting 
the cables back to the nodes.

Before doing the OpenStack deployment from the Fuel node, we disconnected eth1 from 
the controller and compute nodes as it is connected to the public network.  Deployment 
was successful and then we reconnected the Eth1 of all controller/compute 
nodes.  We are seeing an issue where eth1 displays as down in the Fuel UI, even 
though we have reconnected the eth1 interface and are able to ping the public network.


3)   Neutron dead but pid file exists.

We are seeing this issue on the controller node after restarting the neutron 
server.  We stopped all neutron-related services and tried to restart 
neutron-server, but neutron-server is still not starting.


   We are sharing the logs (Diagnostic Snapshots) 
for above issues. Please find the drop-box link below.  Thanks.


Drop-box link:
https://www.dropbox.com/s/644dxj15nge5bo1/fuel-snapshot-2015-01-06_11-09-35.tgz?dl=0


Regards,
Pavan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project list (official un-official)

2015-01-08 Thread Thierry Carrez
Adam Lawson wrote:
I've been looking for a list of projects that folks are working on. The
official list is simple to find for those, but when talking about things
like Octavia, Libra and other non-official/non-core programs, knowing
what people are working on would be pretty interesting.
 
 Does an exhaustive list like this exist somewhere?

The canonical list of official programs (which translates into a set of
code repositories) currently lives here:

http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml

In a larger sense, the list of all openstack/ and stackforge/ code
repositories can be found here:

http://git.openstack.org/cgit/

Hope this helps,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Proposing to add Lin-Hua Cheng to horizon-stable-maint

2015-01-08 Thread Thierry Carrez
Matthias Runge wrote:
 I'd like to propose to add Lin-Hua Cheng to horizon-stable-maint.
 
 Lin has been a Horizon Core for a long time and has expressed interest
 in helping out with horizon stable reviews.
 
 I think, he'll make a great addition!

We'll need the Horizon PTL to confirm the addition, then a volunteer
from stable-maint-core to introduce Lin to the stable branch policy.
Once those checkboxes are checked we can easily add Lin to
horizon-stable-maint.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] shelved_offload_time configuration

2015-01-08 Thread Kekane, Abhishek
Hi Joe,

Thanks for update.

I am working on nova-specs to improve the performance of unshelve api 
https://review.openstack.org/135387
In this spec, I am proposing not to take a snapshot if shelved_offload_time is 
set to -2.

As of now the logic for creating the image is on the controller node, where I 
cannot decide whether to take a snapshot or not, as 
shelved_offload_time is not exposed to the controller node.
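
To make the proposed semantics concrete, the idea is only the following (a
sketch, not nova code; the -2 meaning is what the spec proposes, and the rest
is simplified for illustration):

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt('shelved_offload_time', default=0,
                   help='0: offload immediately after shelving, '
                        '-1: never offload, '
                        '-2 (proposed): never offload and skip the snapshot'),
    ])

    def shelve_instance(instance, snapshot_fn, offload_fn):
        if CONF.shelved_offload_time == -2:
            # Proposed behaviour: the instance's disk stays on the hypervisor,
            # so no image snapshot needs to be taken at shelve time.
            return
        image = snapshot_fn(instance)
        if CONF.shelved_offload_time == 0:
            offload_fn(instance, image)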

Please review the specs and let me know your suggestions on the same.

Thank You,

Abhishek Kekane

From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: 07 January 2015 22:43
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] shelved_offload_time configuration



On Mon, Dec 22, 2014 at 10:36 PM, Kekane, Abhishek 
abhishek.kek...@nttdata.commailto:abhishek.kek...@nttdata.com wrote:
Hi All,

AFAIK, for the shelve API the parameter shelved_offload_time needs to be configured 
on the compute node.
Can we configure this parameter on the controller node as well?

Not 100% sure what you are asking but hopefully this will clarify things: 
nova.conf files are read locally, so setting the value on a controller node 
doesn't affect any compute nodes.


Please suggest.

Thank You,

Abhishek Kekane

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-08 Thread Dmitry Tantsur

On 01/08/2015 06:48 AM, Kumar, Om (Cloud OS RD) wrote:

My understanding of discovery was to get all details for a node and then 
register that node to ironic. i.e. Enrollment of the node to ironic. Pardon me 
if it was out of line with your understanding of discovery.
That's why we agreed to use terms inspection/introspection :) sorry for 
not being consistent here (name 'discoverd' is pretty old and hard to 
change).


discoverd does not enroll nodes. While possible, I'm somewhat resistant 
to making it do enrollment, mostly because I want it to be a user-controlled 
process.




What I understand from the below mentioned spec is that the Node is registered, 
but the spec will help ironic discover other properties of the node.

that's what discoverd does currently.



-Om

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: 07 January 2015 20:20
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/07/2015 03:44 PM, Matt Keenan wrote:

On 01/07/15 14:24, Kumar, Om (Cloud OS RD) wrote:

If it's a separate project, can it be extended to perform out of band
discovery too..? That way there will be a single service to perform
in-band as well as out of band discoveries.. May be it could follow
driver framework for discovering nodes, where one driver could be
native (in-band) and other could be iLO specific etc...



I believe the following spec outlines plans for out-of-band discovery:
https://review.openstack.org/#/c/100951/

Right, so Ironic will have drivers, one of which (I hope) will be a driver for 
discoverd.



No idea what the progress is with regard to implementation within the
Kilo cycle though.

For now we hope to get it merged in K.



cheers

Matt


Just a thought.

-Om

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: 07 January 2015 14:34
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:

So is it possible to just integrate this project into ironic? I mean
when you create an ironic node, it will start discovery in the
background. So we don't need two services?

Well, the decision on the summit was that it's better to keep it
separate. Please see https://review.openstack.org/#/c/135605/ for
details on future interaction between discoverd and Ironic.


Just a thought, thanks.

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Monday, January 5, 2015 4:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:

Hi, Dmitry

I think this is a good project.
I got one question: what is the relationship with ironic-python-agent?
Thanks.

Hi!

No relationship right now, but I'm hoping to use IPA as a base for
introspection ramdisk in the (near?) future.


BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

Hi all!

As you know I actively promote ironic-discoverd project [1] as one
of the means to do hardware inspection for Ironic (see e.g. spec
[2]), so I decided it's worth to give some updates to the community
from time to time. This email is purely informative, you may safely
skip it, if you're not interested.

Background
==

The discoverd project (I usually skip the ironic- part when
talking about it) solves the problem of populating information
about a node in Ironic database without help of any vendor-specific
tool. This information usually includes Nova scheduling properties
(CPU, RAM, disk
size) and MAC's for ports.

Introspection is done by booting a ramdisk on a node, collecting
data there and posting it back to discoverd HTTP API. Thus actually
discoverd consists of 2 components: the service [1] and the ramdisk
[3]. The service handles 2 major tasks:
* Processing data posted by the ramdisk, i.e. finding the node in
Ironic database and updating node properties with new data.
* Managing iptables so that the default PXE environment for
introspection does not interfere with Neutron

The project was born from a series of patches to Ironic itself
after we discovered that this change is going to be too intrusive.
Discoverd was actively tested as part of Instack [4] and it's RPM
is a part of Juno RDO. After the Paris summit, we agreed on
bringing it closer to the Ironic upstream, and now discoverd is
hosted on StackForge and tracks bugs on Launchpad.

Future
==

The basic feature of discoverd - supplying Ironic with the properties
required for scheduling - is pretty much finished as of the latest stable
series, 0.2.

However, more features are planned for release 1.0.0 this January [5].
They go beyond the bare minimum of 

Re: [openstack-dev] [TripleO] Switching CI back to amd64

2015-01-08 Thread Derek Higgins
On 07/01/15 23:41, Ben Nemec wrote:
 On 01/07/2015 11:29 AM, Clint Byrum wrote:
 Excerpts from Derek Higgins's message of 2015-01-07 02:51:41 -0800:
 Hi All,
 I intended to bring this up at this mornings meeting but the train I
 was on had no power sockets (and I had no battery) so sending to the
 list instead.

 We currently run our CI on images built for i386; we took this
 decision a while back to save memory (at the time it allowed us to move
 the amount of memory required in our VMs from 4G to 2G - exactly where in
 those bands the hard requirements are I don't know).

 Since then we have had to move back to 3G for the i386 VM as 2G was no
 longer enough so the saving in memory is no longer as dramatic.

 Now that the difference isn't as dramatic, I propose we switch back to
 amd64 (with 4G vms) in order to CI on what would be closer to a
 production deployment and before making the switch wanted to throw the
 idea out there for others to digest.

 This obviously would impact our capacity as we will have to reduce the
 number of testenvs per testenv hosts. Our capacity (in RH1 and roughly
 speaking) allows us to run about 1440 ci jobs per day. I believe we can
 make the switch and still keep capacity above 1200 with a few other changes
 1. Add some more testenv hosts, we have 2 unused hosts at the moment and
 we can probably take 2 of the compute nodes from the overcloud.
 2. Kill VM's at the end of each CI test (as opposed to leaving them
 running until the next CI test kills them), allowing us to more
 successfully overcommit on RAM
 3. maybe look into adding swap on the test env hosts; they don't
 currently have any, so overcommitting RAM is a problem that the OOM
 killer is handling from time to time (I only noticed this yesterday).

 The other benefit to doing this is that if we were ever to want to CI
 images built with packages (this has come up in previous meetings) we
 wouldn't need to provide i386 packages just for CI, while the rest of
 the world uses amd64.

 +1 on all counts.

 It's also important to note that we should actually have a whole new
 rack of servers added to capacity soon (I think soon is about 6 months
 so far, but we are at least committed to it). So this would be, at worst,
 a temporary loss of 240 jobs per day.
 
 Actually it should be sooner than that - hp1 still isn't in the CI
 rotation yet, so once that infra change merges (the only thing
 preventing us from using it AFAIK) we'll be getting a bunch more
 capacity in the much nearer term.  Unless Derek is already counting that
 in his estimates above, of course.
Yes, this is correct, hp1 isn't in use at the moment; bringing it in would
approximately double those numbers.

 
 I don't feel like we've been all that capacity constrained lately
 anyway, so as I said in my other (largely unnecessary, as it turns out)
 email, I'm +1 on doing this.
Correct, we're not currently constrained on capacity at all (most days we
run fewer than 300 jobs), but once the other region is in use we'll be
hoping to add jobs to other projects.

 
 -Ben
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] volume / host coupling

2015-01-08 Thread Arne Wiebalck
Hi,

The fact that volume requests (in particular deletions) are coupled with 
certain Cinder hosts is not ideal from an operational perspective:
if the node has meanwhile disappeared, e.g. retired, the deletion gets stuck 
and can only be unblocked by changing the database. Some
people apparently use the ‘host’ option in cinder.conf to make the hosts 
indistinguishable, but this creates problems in other places.

From what I see, even for backends that would support it (such as Ceph), Cinder 
currently does not provide a means to ensure that any of
the hosts capable of performing a volume operation would be assigned the 
request in case the original/desired one is no longer available,
right?

If that is correct, how about changing the scheduling of the delete operation to 
use the same logic as create operations, that is, pick any of the
available hosts rather than the one which created the volume in the first place 
(for backends where that is possible, of course)?
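
Something along these lines, conceptually (a hypothetical sketch, not Cinder's
scheduler code; in particular, the 'backend' grouping used here is exactly the
information the scheduler would somehow need to be told):

    import datetime

    SERVICE_DOWN_TIME = 60  # seconds; just an assumption for this sketch

    def _is_up(service, now):
        return (now - service['updated_at']).total_seconds() < SERVICE_DOWN_TIME

    def pick_delete_host(volume, services, now=None):
        """Route a delete to any live host serving the same backend."""
        now = now or datetime.datetime.utcnow()
        candidates = [s for s in services
                      if s['backend'] == volume['backend'] and _is_up(s, now)]
        # Prefer the host that created the volume if it is still alive...
        for svc in candidates:
            if svc['host'] == volume['host']:
                return svc['host']
        # ...otherwise fall back to any equivalent live host for that backend.
        if candidates:
            return candidates[0]['host']
        raise RuntimeError('no cinder-volume host available for backend %s'
                           % volume['backend'])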

Thanks!
 Arne 

—
Arne Wiebalck
CERN IT
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] default region name

2015-01-08 Thread Derek Higgins
On 08/01/15 05:21, Zhou, Zhenzan wrote:
 Hi, 
 
 Does anyone know why TripleO uses regionOne as default region name? A 
 comment in the code says it's the default keystone uses. 
 But I cannot find any regionOne in keystone code. Devstack uses RegionOne 
 by default and I do see lots of RegionOne in keystone code.

Looks like this has been changing in various places
https://bugs.launchpad.net/keystone/+bug/1252299

I guess the default the code is referring to is in keystoneclient
http://git.openstack.org/cgit/openstack/python-keystoneclient/tree/keystoneclient/v2_0/shell.py#n509



 
 stack@u140401:~/openstack/tripleo-incubator$ grep -rn regionOne *
 scripts/register-endpoint:26:REGION=regionOne # NB: This is the default 
 keystone uses.
 scripts/register-endpoint:45:echo -r, --region  -- Override the 
 default region 'regionOne'.
 scripts/setup-endpoints:33:echo -r, --region-- Override 
 the default region 'regionOne'.
 scripts/setup-endpoints:68:REGION=regionOne #NB: This is the keystone 
 default.
 stack@u140401:~/openstack/tripleo-incubator$ grep -rn regionOne 
 ../tripleo-heat-templates/
 stack@u140401:~/openstack/tripleo-incubator$  grep -rn regionOne 
 ../tripleo-image-elements/
 ../tripleo-image-elements/elements/tempest/os-apply-config/opt/stack/tempest/etc/tempest.conf:10:region
  = regionOne
 ../tripleo-image-elements/elements/neutron/os-apply-config/etc/neutron/metadata_agent.ini:3:auth_region
  = regionOne
 stack@u140401:~/openstack/keystone$ grep -rn RegionOne * | wc -l
 130
 stack@u140401:~/openstack/keystone$ grep -rn regionOne * | wc -l
 0
 
 Another question is that TripleO doesn't export OS_REGION_NAME in stackrc.  
 So when someone sources the devstack rc file 
 to do something and then sources the TripleO rc file again, OS_REGION_NAME 
 will be the one set by the devstack rc file. 
 I know this may be strange but isn't it better to use the same default value?

We should probably add that to our various rc files; not having it there
is probably the reason we used keystoneclient's default in the first place.

 
 Thanks a lot.
 
 BR
 Zhou Zhenzan
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Meeting Thursday January 8th at 22:00 UTC

2015-01-08 Thread Ken'ichi Ohmichi
Hi,

Unfortunately, I cannot join tomorrow's meeting.
So I'd like to share the progress of the tempest-lib RestClient
work before the meeting.

As per the Paris summit consensus, we have a plan to move RestClient
from tempest to tempest-lib so that API tests can move to each project
in the future. We are cleaning up the RestClient code in
tempest now; the work will be complete with some patches [1].
After merging them, I will move the code to tempest-lib.

This work requires many patches/reviews, and many people have
already contributed. Thank you very much for helping with this work,
and I appreciate the continuous effort.

[1]: 
https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:rest-client,n,z

Thanks
Ken Ohmichi

---

2015-01-08 2:44 GMT+09:00 David Kranz dkr...@redhat.com:
 Hi everyone,

 Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
 tomorrow Thursday, January 8th at 22:00 UTC in the #openstack-meeting
 channel.

 The agenda for tomorrow's meeting can be found here:
 https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
 Anyone is welcome to add an item to the agenda.

 It's also worth noting that a few weeks ago we started having a regular
 dedicated Devstack topic during the meetings. So if anyone is interested in
 Devstack development please join the meetings to be a part of the
 discussion.

 To help people figure out what time 22:00 UTC is in other timezones
 tomorrow's
 meeting will be at:

 17:00 EST
 07:00 JST
 08:30 ACDT
 23:00 CET
 16:00 CST
 14:00 PST

 -David Kranz


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2015-01-08 Thread Anant Patil
On 16-Dec-14 09:41, Zane Bitter wrote:
 On 15/12/14 09:32, Anant Patil wrote:
 On 12-Dec-14 06:29, Zane Bitter wrote:
 On 11/12/14 01:14, Anant Patil wrote:
 On 04-Dec-14 10:49, Zane Bitter wrote:
 On 01/12/14 02:02, Anant Patil wrote:
 On GitHub:https://github.com/anantpatil/heat-convergence-poc

 I'm trying to review this code at the moment, and finding some stuff I
 don't understand:

 https://github.com/anantpatil/heat-convergence-poc/blob/master/heat/engine/stack.py#L911-L916

 This appears to loop through all of the resources *prior* to kicking off
 any actual updates to check if the resource will change. This is
 impossible to do in general, since a resource may obtain a property
 value from an attribute of another resource and there is no way to know
 whether an update to said other resource would cause a change in the
 attribute value.

 In addition, no attempt to catch UpdateReplace is made. Although that
 looks like a simple fix, I'm now worried about the level to which this
 code has been tested.

 We were working on new branch and as we discussed on Skype, we have
 handled all these cases. Please have a look at our current branch:
 https://github.com/anantpatil/heat-convergence-poc/tree/graph-version

 When a new resource is taken for convergence, its children are loaded
 and the resource definition is re-parsed. The frozen resource definition
 will have all the get_attr resolved.


 I'm also trying to wrap my head around how resources are cleaned up in
 dependency order. If I understand correctly, you store in the
 ResourceGraph table the dependencies between various resource names in
 the current template (presumably there could also be some left around
 from previous templates too?). For each resource name there may be a
 number of rows in the Resource table, each with an incrementing version.
 As far as I can tell though, there's nowhere that the dependency graph
 for _previous_ templates is persisted? So if the dependency order
 changes in the template we have no way of knowing the correct order to
 clean up in any more? (There's not even a mechanism to associate a
 resource version with a particular template, which might be one avenue
 by which to recover the dependencies.)

 I think this is an important case we need to be able to handle, so I
 added a scenario to my test framework to exercise it and discovered that
 my implementation was also buggy. Here's the fix:
 https://github.com/zaneb/heat-convergence-prototype/commit/786f367210ca0acf9eb22bea78fd9d51941b0e40


 Thanks for pointing this out, Zane. We too had a buggy implementation for
 handling inverted dependencies. I had a hard look at our algorithm, where
 we were continuously merging the edges from the new template into the edges
 from previous updates. It was an optimized way of traversing the graph
 in both forward and reverse order without missing any resources. But
 when the dependencies are inverted, this wouldn't work.

 We have changed our algorithm. The changes in edges are noted down in the
 DB; only the delta of edges from the previous template is calculated and
 kept. At any given point in time, the graph table has all the edges from
 the current template and the delta from previous templates. Each edge has
 a template ID associated with it.

 The thing is, the cleanup dependencies aren't really about the template.
 The real resources really depend on other real resources. You can't
 delete a Volume before its VolumeAttachment, not because it says so in
 the template but because it will fail if you try. The template can give
 us a rough guide in advance to what those dependencies will be, but if
 that's all we keep then we are discarding information.

 There may be multiple versions of a resource corresponding to one
 template version. Even worse, the actual dependencies of a resource
 change on a smaller time scale than an entire stack update (this is the
 reason the current implementation updates the template one resource at a
 time as we go).


 Absolutely! The edges from the template are kept only for reference
 purposes. When we have a resource in the new template, its template ID will
 also be set to the current template. At any point in time, a realized
 resource will be from the current template, even if it was found in previous
 templates. The template ID moves for a resource if it is found in the new template.
 
 In theory (disclaimer: I didn't implement this yet) it can change on an 
 even smaller timescale than that. The existing plugins are something of 
 a black box to us: if a failure occurs we don't necessarily know whether 
 the real-world dependency is on the old or new version of another resource.
 

Yes, and that's why we roll back the failed resource and its dependent
resources to older versions, provided that the older resources are not
deleted until the update is done. It is easier with the template ID, as we know
the previous complete template.

 Given that our Resource entries in the DB are in 1:1 correspondence with
 actual resources (we create a 

Re: [openstack-dev] Mirantis Openstack 5.1 environment issues

2015-01-08 Thread Dmitriy Shulyak
 1)  Verify network got failed with message Expected VLAN (not
 received) untagged at the interface Eth1 of controller and compute nodes.

 In our set-up Eth1 is connected to the public network, which we disconnect
 from public network while doing deployment operation as FUEL itself works
 as DHCP server. We want know that is this a known issue in Fuel or from our
 side, as we followed this prerequisite before doing verify network
 operation.

 The fact of the error is correct - no traffic was received on eth1. But what is the
expected behaviour from your point of view?

 2)  Eth1 interface in the Fuel UI is showing as down even after
 connecting back cables into the nodes.

 Before doing openstack deployment  from Fuel node, we disconnected eth1
 from controller and compute nodes as it is connected to public network.
 Deployment was successful and then we connected back the Eth1 of all
 controller/compute nodes.  We are seeing an issue that eth1 displaying as
 down in FUEL UI, even though we connect back eth1 interface and we are able
 to ping to public network.

We probably disabled interface information updates after a node is deployed,
and IMHO we need to open a bug for this issue.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][keystone] default region name

2015-01-08 Thread Steven Dake

On 01/07/2015 10:21 PM, Zhou, Zhenzan wrote:

Hi,

Does anyone know why TripleO uses regionOne as default region name? A comment 
in the code says it's the default keystone uses.


Zhenzan,

I was going to point you here:

https://bugs.launchpad.net/keystone/+bug/1400589

But I see you already had commented in that bug.

I had submitted a review request to improve the documentation for Ironic 
here:


https://review.openstack.org/#/c/139842/

As to why the keystone client uses regionOne as a default, the 
conclusion of the above review request essentially indicates Keystone is 
broken in some way.  Note I didn't root cause 1400589 so the conclusion 
of the review 139842 could be incorrect.


Regards
-steve


But I cannot find any regionOne in keystone code. Devstack uses RegionOne by default 
and I do see lots of RegionOne in keystone code.

stack@u140401:~/openstack/tripleo-incubator$ grep -rn regionOne *
scripts/register-endpoint:26:REGION=regionOne # NB: This is the default 
keystone uses.
scripts/register-endpoint:45:echo -r, --region  -- Override the default 
region 'regionOne'.
scripts/setup-endpoints:33:echo -r, --region-- Override the 
default region 'regionOne'.
scripts/setup-endpoints:68:REGION=regionOne #NB: This is the keystone default.
stack@u140401:~/openstack/tripleo-incubator$ grep -rn regionOne 
../tripleo-heat-templates/
stack@u140401:~/openstack/tripleo-incubator$  grep -rn regionOne 
../tripleo-image-elements/
./tripleo-image-elements/elements/tempest/os-apply-config/opt/stack/tempest/etc/tempest.conf:10:region
 = regionOne
./tripleo-image-elements/elements/neutron/os-apply-config/etc/neutron/metadata_agent.ini:3:auth_region
 = regionOne
stack@u140401:~/openstack/keystone$ grep -rn RegionOne * | wc -l
130
stack@u140401:~/openstack/keystone$ grep -rn regionOne * | wc -l
0

Another question is that TripleO doesn't export OS_REGION_NAME in stackrc.  So 
when someone sources the devstack rc file
to do something and then sources the TripleO rc file again, OS_REGION_NAME will 
be the one set by the devstack rc file.
I know this may be strange but isn't it better to use the same default value?

Thanks a lot.

BR
Zhou Zhenzan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] The state of nova-network to neutron migration

2015-01-08 Thread Jakub Libosvar
On 12/24/2014 10:07 AM, Oleg Bondarev wrote:
 
 
 On Mon, Dec 22, 2014 at 10:08 PM, Anita Kuno ante...@anteaya.info
 mailto:ante...@anteaya.info wrote:
 
 On 12/22/2014 01:32 PM, Joe Gordon wrote:
  On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery mest...@mestery.com
 mailto:mest...@mestery.com wrote:
 
  On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno
 ante...@anteaya.info mailto:ante...@anteaya.info wrote:
 
  Rather than waste your time making excuses let me state where we
 are and
  where I would like to get to, also sharing my thoughts about how
 you can
  get involved if you want to see this happen as badly as I have
 been told
  you do.
 
  Where we are:
  * a great deal of foundation work has been accomplished to
 achieve
  parity with nova-network and neutron to the extent that those
 involved
  are ready for migration plans to be formulated and be put in place
  * a summit session happened with notes and intentions[0]
  * people took responsibility and promptly got swamped with other
  responsibilities
  * spec deadlines arose and in neutron's case have passed
  * currently a neutron spec [1] is a work in progress (and it
 needs
  significant work still) and a nova spec is required and doesn't
 have a
  first draft or a champion
 
  Where I would like to get to:
  * I need people in addition to Oleg Bondarev to be available
 to help
  come up with ideas and words to describe them to create the
 specs in a
  very short amount of time (Oleg is doing great work and is a
 fabulous
  person, yay Oleg, he just can't do this alone)
  * specifically I need a contact on the nova side of this complex
  problem, similar to Oleg on the neutron side
  * we need to have a way for people involved with this effort
 to find
  each other, talk to each other and track progress
  * we need to have representation at both nova and neutron weekly
  meetings to communicate status and needs
 
  We are at K-2 and our current status is insufficient to expect
 this work
  will be accomplished by the end of K-3. I will be championing
 this work,
  in whatever state, so at least it doesn't fall off the map. If
 you would
  like to help this effort please get in contact. I will be
 thinking of
  ways to further this work and will be communicating to those who
  identify as affected by these decisions in the most effective
 methods of
  which I am capable.
 
  Thank you to all who have gotten us as far as well have gotten
 in this
  effort, it has been a long haul and you have all done great
 work. Let's
  keep going and finish this.
 
  Thank you,
  Anita.
 
  Thank you for volunteering to drive this effort Anita, I am very
 happy
  about this. I support you 100%.
 
  I'd like to point out that we really need a point of contact on
 the nova
  side, similar to Oleg on the Neutron side. IMHO, this is step 1
 here to
  continue moving this forward.
 
 
  At the summit the nova team marked the nova-network to neutron
 migration as
  a priority [0], so we are collectively interested in seeing this
 happen and
  want to help in any way possible.   With regard to a nova point of
 contact,
  anyone in nova-specs-core should work, that way we can cover more time
  zones.
 
  From what I can gather the first step is to finish fleshing out
 the first
  spec [1], and it sounds like it would be good to get a few nova-cores
  reviewing it as well.
 
 
 
 
  [0]
 
 
 http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html
  [1] https://review.openstack.org/#/c/142456/
 
 
 Wonderful, thank you for the support Joe.
 
 It appears that we need to have a regular weekly meeting to track
 progress in an archived manner.
 
 I know there was one meeting in November but I don't know what it was
 called, so so far I can't find the logs for it.
 
 
 It wasn't official, we just gathered together on #novamigration.
 Attaching the log here.
 
 
 So if those affected by this issue can identify what time (UTC please;
 don't tell me what time zone you are in, it is too hard to guess what UTC
 time you are available) and day of the week you are available for a
 meeting, I'll create one and we can start talking to each other.
 
 I need to avoid Monday 1500 and 2100 UTC, Tuesday 0800 UTC, 1400 UTC and
 1900 - 2200 UTC, Wednesdays 1500 - 1700 UTC, Thursdays 1400 and 2100
 UTC.
 
 
 I'm available each weekday 0700-1600 UTC, 1700-1800 UTC is also acceptable.
 
 Thanks,
 Oleg

Hi all,
I'm quite flexible, any business day 0800-2300 UTC with several
exceptions is 

Re: [openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-08 Thread Ihar Hrachyshka

On 01/07/2015 03:21 PM, Ihar Hrachyshka wrote:

Hi all,

I've found out that dnsmasq < 2.67 does not work properly for IPv6 
clients when it comes to MAC address matching (it fails to match, and 
so clients get 'no addresses available' response). I've requested 
version bump to 2.67 in: https://review.openstack.org/145482


Now, since we've already released Juno with IPv6 DHCP stateful 
support, and DHCP agent still has minimal version set to 2.63 there, 
we have a dilemma on how to manage it from stable perspective.


Obviously, we should communicate the revealed version dependency to 
deployers via next release notes.


Should we also backport the minimal version bump to Juno? This will 
result in DHCP agent failing to start in case packagers don't bump 
dnsmasq version with the next Juno release. If we don't bump the 
version, we may leave deployers uninformed about the fact that their 
IPv6 stateful instances won't get any IPv6 address assigned.


An alternative is to add a special check just for Juno that would WARN 
administrators instead of failing to start DHCP agent.


Sent Juno fix for review: https://review.openstack.org/145784



Comments?

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] volume / host coupling

2015-01-08 Thread Duncan Thomas
The problem is that the scheduler doesn't currently have enough info to
know which backends are 'equivalent' and which aren't. e.g. If you have 2
ceph clusters as cinder backends, they are indistinguishable from each
other.

On 8 January 2015 at 12:14, Arne Wiebalck arne.wieba...@cern.ch wrote:

 Hi,

 The fact that volume requests (in particular deletions) are coupled with
 certain Cinder hosts is not ideal from an operational perspective:
 if the node has meanwhile disappeared, e.g. retired, the deletion gets
 stuck and can only be unblocked by changing the database. Some
 people apparently use the ‘host’ option in cinder.conf to make the hosts
 indistinguishable, but this creates problems in other places.

 From what I see, even for backends that would support it (such as Ceph),
 Cinder currently does not provide means to ensure that any of
 the hosts capable of performing a volume operation would be assigned the
 request in case the original/desired one is no more available,
 right?

 If that is correct, how about changing the scheduling of delete operation
 to use the same logic as create operations, that is pick any of the
 available hosts, rather than the one which created a volume in the first
 place (for backends where that is possible, of course)?

 Thanks!
  Arne

 —
 Arne Wiebalck
 CERN IT
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] volume / host coupling

2015-01-08 Thread Jordan Pittier
Arne,

I imagine this has an
impact on things using the services table, such as “cinder-manage” (how
does your “cinder-manage service list” output look like? :-)
It has indeed. I have 3 cinder-volume services, but only one line output in
 “cinder-manage service list”. But it's a minor inconvenience to me.

Duncan,
There are races, e.g. do snapshot and delete at the same time, backup and
delete at the same time, etc. The race windows are pretty tight on ceph but
they are there. It is worse on some other backends
Okay, never ran into those, yet ! I cross fingers :p

Thanks, and sorry if I hijacked this thread a little.
Jordan

On Thu, Jan 8, 2015 at 5:30 PM, Arne Wiebalck arne.wieba...@cern.ch wrote:

  Hi Jordan,

  As Duncan pointed out there may be issues if you have multiple backends
 and indistinguishable nodes (which you could  probably avoid by separating
 the hosts per backend and use different “host” flags for each set).

  But also if you have only one backend: the “host” flag will enter the
 ‘services'
 table and render the host column more or less useless. I imagine this has
 an
 impact on things using the services table, such as “cinder-manage” (how
 does your “cinder-manage service list” output look like? :-), and it may
 make it
 harder to tell if the individual services are doing OK, or to control them.

  I haven’t run Cinder with identical “host” flags in production, but I
 imagine
 there may be other areas which are not happy about indistinguishable hosts.

  Arne


  On 08 Jan 2015, at 16:50, Jordan Pittier jordan.pitt...@scality.com
 wrote:

  Hi,
 Some people apparently use the ‘host’ option in cinder.conf to make the
 hosts indistinguishable, but this creates problems in other places.
 I use shared storage mounted on several cinder-volume nodes, with host
 flag set the same everywhere. Never ran into problems so far. Could you
  elaborate on "this creates problems in other places", please?

  Thanks !
 Jordan

 On Thu, Jan 8, 2015 at 3:40 PM, Arne Wiebalck arne.wieba...@cern.ch
 wrote:

  Hmm. Not sure how widespread installations with multiple Ceph backends
 are where the
 Cinder hosts have access to only one of the backends (which is what you
 assume, right?)
 But, yes, if the volume type names are also the same (is that also needed
 for this to be a
 problem?), this will be an issue ...

  So, how about providing the information the scheduler does not have by
 introducing an
 additional tag to identify ‘equivalent’ backends, similar to the way some
 people already
 use the ‘host’ option?

  Thanks!
   Arne


  On 08 Jan 2015, at 15:11, Duncan Thomas duncan.tho...@gmail.com wrote:

  The problem is that the scheduler doesn't currently have enough info to
 know which backends are 'equivalent' and which aren't. e.g. If you have 2
 ceph clusters as cinder backends, they are indistinguishable from each
 other.

 On 8 January 2015 at 12:14, Arne Wiebalck arne.wieba...@cern.ch wrote:

 Hi,

 The fact that volume requests (in particular deletions) are coupled with
 certain Cinder hosts is not ideal from an operational perspective:
 if the node has meanwhile disappeared, e.g. retired, the deletion gets
 stuck and can only be unblocked by changing the database. Some
 people apparently use the ‘host’ option in cinder.conf to make the hosts
 indistinguishable, but this creates problems in other places.

 From what I see, even for backends that would support it (such as Ceph),
 Cinder currently does not provide means to ensure that any of
 the hosts capable of performing a volume operation would be assigned the
 request in case the original/desired one is no more available,
 right?

 If that is correct, how about changing the scheduling of delete
 operation to use the same logic as create operations, that is pick any of
 the
 available hosts, rather than the one which created a volume in the first
 place (for backends where that is possible, of course)?

 Thanks!
  Arne

 —
 Arne Wiebalck
 CERN IT
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Duncan Thomas
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [heat][tripleo] Making diskimage-builder install from forked repo?

2015-01-08 Thread Steven Hardy
Hi all,

I'm trying to test a fedora-software-config image with some updated
components.  I need:

- Install latest master os-apply-config (the commit I want isn't released)
- Install os-refresh-config fork from https://review.openstack.org/#/c/145764

I can't even get the o-a-c from master part working:

export PATH=${PWD}/dib-utils/bin:$PATH
export
ELEMENTS_PATH=tripleo-image-elements/elements:heat-templates/hot/software-config/elements
export DIB_INSTALLTYPE_os_apply_config=source

diskimage-builder/bin/disk-image-create vm fedora selinux-permissive \
  os-collect-config os-refresh-config os-apply-config \
  heat-config-ansible \
  heat-config-cfn-init \
  heat-config-docker \
  heat-config-puppet \
  heat-config-salt \
  heat-config-script \
  ntp \
  -o fedora-software-config.qcow2

This is what I'm doing, both tools end up as pip installed versions AFAICS,
so I've had to resort to manually hacking the image post-DiB using
virt-copy-in.

Pretty sure there's a way to make DiB do this, but don't know what, anyone
able to share some clues?  Do I have to hack the elements, or is there a
better way?

The docs are pretty sparse, so any help would be much appreciated! :)

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo] Making diskimage-builder install from forked repo?

2015-01-08 Thread Chris Jones
Hi

 On 8 Jan 2015, at 17:37, Steven Hardy sha...@redhat.com wrote:
 Pretty sure there's a way to make DiB do this, but don't know what, anyone
 able to share some clues?  Do I have to hack the elements, or is there a
 better way?
 
 The docs are pretty sparse, so any help would be much appreciated! :)


We do have a mechanism for overriding the git sources for things, but the 
os-*-config tools don't use it at the moment; they install from either packages 
or pip. I'm not sure what the rationale was for not including a git source for 
those tools, but I think we should do it, even if it's limited to situations 
where the procedure for overriding sources is being followed.

(The procedure that should be used is the DIB_REPO* environment variables 
documented in diskimage-builder/elements/source-repositories/README.md)

So, for now I think you're going to be stuck hacking the elements, 
unfortunately.

Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo] Making diskimage-builder install from forked repo?

2015-01-08 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2015-01-08 09:37:55 -0800:
 Hi all,
 
 I'm trying to test a fedora-software-config image with some updated
 components.  I need:
 
 - Install latest master os-apply-config (the commit I want isn't released)
 - Install os-refresh-config fork from https://review.openstack.org/#/c/145764
 
 I can't even get the o-a-c from master part working:
 
 export PATH=${PWD}/dib-utils/bin:$PATH
 export
 ELEMENTS_PATH=tripleo-image-elements/elements:heat-templates/hot/software-config/elements
 export DIB_INSTALLTYPE_os_apply_config=source
 
 diskimage-builder/bin/disk-image-create vm fedora selinux-permissive \
   os-collect-config os-refresh-config os-apply-config \
   heat-config-ansible \
   heat-config-cfn-init \
   heat-config-docker \
   heat-config-puppet \
   heat-config-salt \
   heat-config-script \
   ntp \
   -o fedora-software-config.qcow2
 
 This is what I'm doing, both tools end up as pip installed versions AFAICS,
 so I've had to resort to manually hacking the image post-DiB using
 virt-copy-in.
 
 Pretty sure there's a way to make DiB do this, but don't know what, anyone
 able to share some clues?  Do I have to hack the elements, or is there a
 better way?
 
 The docs are pretty sparse, so any help would be much appreciated! :)
 

Hi Steve. The os-*-config tools represent a bit of a quandary for us,
as we want to test and run with released versions, not latest git, so
the elements just install from pypi. I believe we use devpi in testing
to test new commits to the tools themselves.

So you can probably set up a devpi instance locally, upload the
commits you want to it, and then build the image with the 'pypi' element
added and this:

PYPI_MIRROR_URL=http://localhost:3141/

See diskimage-builder/elements/pypi/README.md for more info on how to
set this up.
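
A rough sketch of that flow (untested; it assumes a devpi or any other local
PyPI instance is already serving sdists of the forked tools on port 3141):

# Point the pypi element at the local mirror (see the README above).
export PYPI_MIRROR_URL=http://localhost:3141/

# Same build as before, with the 'pypi' element added (the rest of the
# element list from the original command is unchanged).
diskimage-builder/bin/disk-image-create vm fedora selinux-permissive pypi \
  os-collect-config os-refresh-config os-apply-config \
  -o fedora-software-config.qcow2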

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence

2015-01-08 Thread Steven Hardy
On Thu, Jan 08, 2015 at 09:53:02PM +0530, vishnu wrote:
Hi Zane,
I was wondering if we could push changes relating to backup stack removal
and to not load resources as part of stack. There needs to be a capability
to restart jobs left over by dead engines.
something like heat stack-operation --continue [git rebase --continue]

To me, it's pointless if the user has to restart the operation; they can do
that already, e.g. by triggering a stack update after a failed stack create.

The process needs to be automatic IMO, if one engine dies, another engine
should detect that it needs to steal the lock or whatever and continue
whatever was in-progress.

Had a chat with shady regarding this. IMO this would be a valuable
enhancement. Notification based lead sharing can be taken up upon
completion.

I was referring to a capability for the service to transparently recover
if, for example, a heat-engine is restarted during a service upgrade.

Currently, users will be impacted in this situation, and making them
manually restart failed operations doesn't seem like a super-great solution
to me (like I said, they can already do that to some extent)

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] dropping namespace packages

2015-01-08 Thread Doug Hellmann

 On Jan 8, 2015, at 11:29 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:
 
 On 01/05/2015 04:51 PM, Doug Hellmann wrote:
 As each library is released, we will send release notes to this list, as 
 usual. At that point the Oslo liaisons should start planning patches to 
change imports in their projects from “oslo.foo” to “oslo_foo”. The old 
 imports should still work for now, but new features will not be added to the 
 old namespace, so over time it will be necessary to make the changes anyway. 
 We are likely to remove the old namespace package completely during the next 
 release cycle, but that hasn't been decided.
 
 Making the switch probably requires us to add some hacking rule that would 
 forbid old namespace based imports, right? Do we by chance have such a rule 
 implemented anywhere?

I’m not sure that’s something we need to enforce. Liaisons should be updating 
projects now as we release libraries, and then we’ll consider whether we can 
drop the namespace packages when we plan the next cycle.

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo] Making diskimage-builder install from forked repo?

2015-01-08 Thread Gregory Haynes
Excerpts from Steven Hardy's message of 2015-01-08 17:37:55 +:
 Hi all,
 
 I'm trying to test a fedora-software-config image with some updated
 components.  I need:
 
 - Install latest master os-apply-config (the commit I want isn't released)
 - Install os-refresh-config fork from https://review.openstack.org/#/c/145764
 
 I can't even get the o-a-c from master part working:
 
 export PATH=${PWD}/dib-utils/bin:$PATH
 export
 ELEMENTS_PATH=tripleo-image-elements/elements:heat-templates/hot/software-config/elements
 export DIB_INSTALLTYPE_os_apply_config=source
 
 diskimage-builder/bin/disk-image-create vm fedora selinux-permissive \
   os-collect-config os-refresh-config os-apply-config \
   heat-config-ansible \
   heat-config-cfn-init \
   heat-config-docker \
   heat-config-puppet \
   heat-config-salt \
   heat-config-script \
   ntp \
   -o fedora-software-config.qcow2
 
 This is what I'm doing, both tools end up as pip installed versions AFAICS,
 so I've had to resort to manually hacking the image post-DiB using
 virt-copy-in.
 
 Pretty sure there's a way to make DiB do this, but don't know what, anyone
 able to share some clues?  Do I have to hack the elements, or is there a
 better way?
 
 The docs are pretty sparse, so any help would be much appreciated! :)
 
 Thanks,
 
 Steve
 

Hey Steve,

source-repositories is your friend here :) (check out
dib/elements/source-repositories/README). One potential gotcha is that
because source-repositories is an element it really only applies to tools
used within images (and os-apply-config is used outside the image). To
fix this we have a shim in tripleo-incubator/scripts/pull-tools which
emulates the functionality of source-repositories.

Example usage:

* checkout os-apply-config to the ref you wish to use
* export DIB_REPOLOCATION_os_apply_config=/path/to/oac
* export DIB_REPOREF_os_refresh_config=refs/changes/64/145764/1
* start your devtesting

HTH,
Greg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] The state of nova-network to neutron migration

2015-01-08 Thread Anita Kuno
On 01/08/2015 08:30 AM, Jakub Libosvar wrote:
 On 12/24/2014 10:07 AM, Oleg Bondarev wrote:


 On Mon, Dec 22, 2014 at 10:08 PM, Anita Kuno ante...@anteaya.info wrote:

 On 12/22/2014 01:32 PM, Joe Gordon wrote:
  On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery mest...@mestery.com wrote:
 
  On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno
 ante...@anteaya.info wrote:
 
  Rather than waste your time making excuses let me state where we
 are and
  where I would like to get to, also sharing my thoughts about how
 you can
  get involved if you want to see this happen as badly as I have
 been told
  you do.
 
  Where we are:
  * a great deal of foundation work has been accomplished to
 achieve
  parity with nova-network and neutron to the extent that those
 involved
  are ready for migration plans to be formulated and be put in place
  * a summit session happened with notes and intentions[0]
  * people took responsibility and promptly got swamped with other
  responsibilities
  * spec deadlines arose and in neutron's case have passed
  * currently a neutron spec [1] is a work in progress (and it
 needs
  significant work still) and a nova spec is required and doesn't
 have a
  first draft or a champion
 
  Where I would like to get to:
  * I need people in addition to Oleg Bondarev to be available
 to help
  come up with ideas and words to describe them to create the
 specs in a
  very short amount of time (Oleg is doing great work and is a
 fabulous
  person, yay Oleg, he just can't do this alone)
  * specifically I need a contact on the nova side of this complex
  problem, similar to Oleg on the neutron side
  * we need to have a way for people involved with this effort
 to find
  each other, talk to each other and track progress
  * we need to have representation at both nova and neutron weekly
  meetings to communicate status and needs
 
  We are at K-2 and our current status is insufficient to expect
 this work
  will be accomplished by the end of K-3. I will be championing
 this work,
  in whatever state, so at least it doesn't fall off the map. If
 you would
  like to help this effort please get in contact. I will be
 thinking of
  ways to further this work and will be communicating to those who
  identify as affected by these decisions in the most effective
 methods of
  which I am capable.
 
  Thank you to all who have gotten us as far as well have gotten
 in this
  effort, it has been a long haul and you have all done great
 work. Let's
  keep going and finish this.
 
  Thank you,
  Anita.
 
  Thank you for volunteering to drive this effort Anita, I am very
 happy
  about this. I support you 100%.
 
  I'd like to point out that we really need a point of contact on
 the nova
  side, similar to Oleg on the Neutron side. IMHO, this is step 1
 here to
  continue moving this forward.
 
 
  At the summit the nova team marked the nova-network to neutron
 migration as
  a priority [0], so we are collectively interested in seeing this
 happen and
  want to help in any way possible.   With regard to a nova point of
 contact,
  anyone in nova-specs-core should work, that way we can cover more time
  zones.
 
  From what I can gather the first step is to finish fleshing out
 the first
  spec [1], and it sounds like it would be good to get a few nova-cores
  reviewing it as well.
 
 
 
 
  [0]
 
 
 http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html
  [1] https://review.openstack.org/#/c/142456/
 
 
 Wonderful, thank you for the support Joe.

 It appears that we need to have a regular weekly meeting to track
 progress in an archived manner.

 I know there was one meeting in November, but I don't know what it was
 called, so so far I can't find the logs for that.


 It wasn't official, we just gathered together on #novamigration.
 Attaching the log here.


 So if those affected by this issue can identify what time (UTC please,
 don't tell me what time zone you are in it is too hard to guess what UTC
 time you are available) and day of the week you are available for a
 meeting I'll create one and we can start talking to each other.

 I need to avoid Monday 1500 and 2100 UTC, Tuesday 0800 UTC, 1400 UTC and
 1900 - 2200 UTC, Wednesdays 1500 - 1700 UTC, Thursdays 1400 and 2100
 UTC.


 I'm available each weekday 0700-1600 UTC, 1700-1800 UTC is also acceptable.

 Thanks,
 Oleg
 
 Hi all,
 I'm quite flexible, any business day 

Re: [openstack-dev] [heat][tripleo] Making diskimage-builder install from forked repo?

2015-01-08 Thread Gregory Haynes
Excerpts from Gregory Haynes's message of 2015-01-08 18:06:16 +:
 Excerpts from Steven Hardy's message of 2015-01-08 17:37:55 +:
  Hi all,
  
  I'm trying to test a fedora-software-config image with some updated
  components.  I need:
  
  - Install latest master os-apply-config (the commit I want isn't released)
  - Install os-refresh-config fork from 
  https://review.openstack.org/#/c/145764
  
  I can't even get the o-a-c from master part working:
  
  export PATH=${PWD}/dib-utils/bin:$PATH
  export
  ELEMENTS_PATH=tripleo-image-elements/elements:heat-templates/hot/software-config/elements
  export DIB_INSTALLTYPE_os_apply_config=source
  
  diskimage-builder/bin/disk-image-create vm fedora selinux-permissive \
os-collect-config os-refresh-config os-apply-config \
heat-config-ansible \
heat-config-cfn-init \
heat-config-docker \
heat-config-puppet \
heat-config-salt \
heat-config-script \
ntp \
-o fedora-software-config.qcow2
  
  This is what I'm doing, both tools end up as pip installed versions AFAICS,
  so I've had to resort to manually hacking the image post-DiB using
  virt-copy-in.
  
  Pretty sure there's a way to make DiB do this, but don't know what, anyone
  able to share some clues?  Do I have to hack the elements, or is there a
  better way?
  
  The docs are pretty sparse, so any help would be much appreciated! :)
  
  Thanks,
  
  Steve
  
 
 Hey Steve,
 
 source-repositories is your friend here :) (check out
 dib/elements/source-repositories/README). One potential gotcha is that
 because source-repositories is an element it really only applies to tools
 used within images (and os-apply-config is used outside the image). To
 fix this we have a shim in tripleo-incubator/scripts/pull-tools which
 emulates the functionality of source-repositories.
 
 Example usage:
 
 * checkout os-apply-config to the ref you wish to use
 * export DIB_REPOLOCATION_os_apply_config=/path/to/oac
 * export DIB_REPOREF_os_refresh_config=refs/changes/64/145764/1
 * start your devtesting

Actually, Chris's response is 100% correct. Even in the source
installtype we appear to be pip installing these tools so this will not
work.

In our CI we work around this by creating a local pypi mirror and
configuring pip to fall back to an upstream mirror. We then build
sdists for anything we want to install via git and add them to our
'overlay mirror':

Code:
http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/toci_devtest.sh#n139

Obviously, this isn't the most user-friendly approach, but it's an option.
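
For reference, a rough sketch of producing the sdists that would go into such
an overlay mirror (untested, and assuming the usual git/gerrit fetch URLs):

# latest master os-apply-config
git clone https://git.openstack.org/openstack/os-apply-config
(cd os-apply-config && python setup.py sdist)    # -> dist/*.tar.gz

# os-refresh-config from the review under test
git clone https://git.openstack.org/openstack/os-refresh-config
cd os-refresh-config
git fetch https://review.openstack.org/openstack/os-refresh-config refs/changes/64/145764/1
git checkout FETCH_HEAD
python setup.py sdist                            # -> dist/*.tar.gz

The resulting tarballs are what get published on the mirror that
PYPI_MIRROR_URL points at.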

Good luck,
Greg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] dropping namespace packages

2015-01-08 Thread Jay Bryant
We talked about this in Cinder.  I am planning to create some hacking
checks for us just to be safe.   Shouldn't take a ton of effort.
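
In the meantime, a quick-and-dirty local check is possible (just a sketch, not
the eventual hacking rule; oslo.utils is used as the example since it has
already moved to the new namespace, and the path is Cinder-specific):

# flag any imports still using the old namespace package
grep -rnE '^(import oslo\.utils|from oslo\.utils import)' cinder/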

Jay
On Jan 8, 2015 12:03 PM, Doug Hellmann d...@doughellmann.com wrote:


  On Jan 8, 2015, at 11:29 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:
 
  On 01/05/2015 04:51 PM, Doug Hellmann wrote:
  As each library is released, we will send release notes to this list,
 as usual. At that point the Oslo liaisons should start planning patches to
change imports in their projects from “oslo.foo” to “oslo_foo”. The old
 imports should still work for now, but new features will not be added to
 the old namespace, so over time it will be necessary to make the changes
 anyway. We are likely to remove the old namespace package completely during
 the next release cycle, but that hasn't been decided.
 
  Making the switch probably requires us to add some hacking rule that
 would forbid old namespace based imports, right? Do we by chance have such
 a rule implemented anywhere?

 I’m not sure that’s something we need to enforce. Liaisons should be
 updating projects now as we release libraries, and then we’ll consider
 whether we can drop the namespace packages when we plan the next cycle.

 Doug


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo] Making diskimage-builder install from forked repo?

2015-01-08 Thread Chris Jones
Hi

 On 8 Jan 2015, at 17:58, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Steven Hardy's message of 2015-01-08 09:37:55 -0800:
 So you can probably setup a devpi instance locally, and upload the
 commits you want to it, and then build the image with the 'pypi' element

Given that we have a pretty good release frequency of all our tools, is this 
burden on devs/testers actually justified at this point, versus the potential 
consistency we could have with source repo flexibility in other openstack 
components?

Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Support for Amazon VPC APIs in OpenStack

2015-01-08 Thread Joe Gordon
On Tue, Jan 6, 2015 at 5:10 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 01/06/2015 12:44 AM, Saju M wrote:

 Hi,

 I seen a blueprint which implement Amazon VPC APIs and Status is Abandoned
 https://review.openstack.org/#/c/40071/


The amazon VPC effort has moved to
http://git.openstack.org/cgit/stackforge/ec2-api/.

This blueprint was abandoned due to dueling concepts of VPC: Amazon VPC and
an OpenStack native VPC concept. See
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027490.html
for more context.




 Have any plans to get it done in Kilo release ?
 How can I change the Abandoned status ?.
 Have any dependencies ?

 Please let me know, So I can rebase it.


 Hi Saju,

 We'd need to see a nova-specs submission for this work, first. You could
  take much of the content from
  https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support for the content
  of the nova-spec
 submission, though you will want to change references to Quantum to
 Neutron :)

 The nova-specs submission is covered here:

 https://wiki.openstack.org/wiki/Blueprints#Nova

 Note that due to it being late in the cycle, it will be unlikely to get a
 newly-submitted spec approved, but that does not mean you cannot work on
 code that implements the spec.

 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] dropping namespace packages

2015-01-08 Thread Ihar Hrachyshka
OK, I was going to implement something for neutron, but if you're going
to handle it quickly by the end of the week, I'll wait and steal it. ;)


On 01/08/2015 07:14 PM, Jay Bryant wrote:


We talked about this in Cinder.  I am planning to create some hacking 
checks for us just to be safe.   Shouldn't take a ton of effort.


Jay

On Jan 8, 2015 12:03 PM, Doug Hellmann d...@doughellmann.com wrote:



 On Jan 8, 2015, at 11:29 AM, Ihar Hrachyshka
ihrac...@redhat.com wrote:

 On 01/05/2015 04:51 PM, Doug Hellmann wrote:
 As each library is released, we will send release notes to this
list, as usual. At that point the Oslo liaisons should start
planning patches to change imports in their projects from
“oslo.foo” to “oslo_foo”. The old imports should still work for
now, but new features will not be added to the old namespace, so
over time it will be necessary to make the changes anyway. We are
likely to remove the old namespace package completely during the
next release cycle, but that hasn't been decided.

 Making the switch probably requires us to add some hacking rule
that would forbid old namespace based imports, right? Do we by
chance have such a rule implemented anywhere?

I’m not sure that’s something we need to enforce. Liaisons should
be updating projects now as we release libraries, and then we’ll
consider whether we can drop the namespace packages when we plan
the next cycle.

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [Openstack-stable-maint] Stable check of openstack/cinder failed

2015-01-08 Thread Jay Bryant
I have a patch out to resolve this failure:
https://review.openstack.org/145642

Jay
-- Forwarded message --
From: A mailing list for the OpenStack Stable Branch test reports. 
openstack-stable-ma...@lists.openstack.org
Date: Jan 8, 2015 1:40 AM
Subject: [Openstack-stable-maint] Stable check of openstack/cinder failed
To: openstack-stable-ma...@lists.openstack.org
Cc:

Build failed.

- periodic-cinder-docs-icehouse
http://logs.openstack.org/periodic-stableperiodic-cinder-docs-icehouse/d17b6e2/
: SUCCESS in 5m 58s
- periodic-cinder-python26-icehouse
http://logs.openstack.org/periodic-stableperiodic-cinder-python26-icehouse/5b0d5e1/
: SUCCESS in 9m 58s
- periodic-cinder-python27-icehouse
http://logs.openstack.org/periodic-stableperiodic-cinder-python27-icehouse/0f277fd/
: SUCCESS in 8m 40s
- periodic-cinder-docs-juno
http://logs.openstack.org/periodic-stableperiodic-cinder-docs-juno/8447e0d/
: SUCCESS in 4m 37s
- periodic-cinder-python26-juno
http://logs.openstack.org/periodic-stableperiodic-cinder-python26-juno/2735239/
: FAILURE in 12m 11s
- periodic-cinder-python27-juno
http://logs.openstack.org/periodic-stableperiodic-cinder-python27-juno/dbe1e66/
: FAILURE in 8m 17s

___
Openstack-stable-maint mailing list
openstack-stable-ma...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] dropping namespace packages

2015-01-08 Thread Ihar Hrachyshka

On 01/08/2015 07:03 PM, Doug Hellmann wrote:

I’m not sure that’s something we need to enforce. Liaisons should be updating 
projects now as we release libraries, and then we’ll consider whether we can 
drop the namespace packages when we plan the next cycle.


Without a hacking rule, there is a chance old namespace usage will sneak
in, and then we'll have to go back to updating imports. I would rather
avoid that and get the migration committed with enforcement in place.


/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Proposal to add Flavio Percoco to stable-maint-core

2015-01-08 Thread Jay Bryant
+2. His contributions have always been helpful.
On Jan 7, 2015 8:50 AM, Alan Pevec ape...@gmail.com wrote:

 +2 Flavio knows stable branch policies very well and will be a good
 addition to the cross-projects stable team.

 Cheers,
 Alan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] The scope of OpenStack wiki [all]

2015-01-08 Thread Stefano Maffulli
hello folks,

TL;DR Many wiki pages and categories are maintained elsewhere, and to
avoid confusion for newcomers we need to agree on a new scope for the
wiki. The suggestion below is to limit its scope to content that doesn't
need/want peer review and is not hosted elsewhere (no duplication).

The wiki served for many years the purpose of 'poor man CMS' when we
didn't have an easy way to collaboratively create content. So the wiki
ended up hosting pages like 'Getting started with OpenStack', demo
videos, How to contribute, mission, to document our culture / shared
understandings (4 opens, release cycle, use of blueprints, stable branch
policy...), to maintain the list of Programs, meetings/teams, blueprints
and specs, lots of random documentation and more.

Lots of the content originally placed on the wiki was there because
there was no better place. Now that we have more mature content and
processes, these are finding their way out of the wiki like: 

  * http://governance.openstack.org
  * http://specs.openstack.org
  * http://docs.openstack.org/infra/manual/

Also, the Introduction to OpenStack is maintained on
www.openstack.org/software/ together with introductory videos and other
basic material. A redesign of openstack.org/community and the new portal
groups.openstack.org are making even more wiki pages obsolete.

This makes the wiki very confusing to newcomers and more likely to host
conflicting information.

I would propose to restrict the scope of the wiki to anything that
doesn't need or want to be peer-reviewed. Things like:

  * agendas for meetings, sprints, etc
  * list of etherpads for summits
  * quick prototypes of new programs (mentors, upstream training) before
they find a stable home (which can still be the wiki)

Also, documentation for contributors and users should not be on the
wiki, but on docs.openstack.org (where it can be found more easily).

If nobody objects, I'll start by proposing a new home page design and
start tagging content that may be moved elsewhere. 

/stef


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] request spec freeze exception for Improving performance of unshelve-api

2015-01-08 Thread Kekane, Abhishek
Hi Devs,

I have submitted a nova-spec [1] for improving the performance of unshelve-api.

The aim of this feature is to improve the performance of unshelving an instance 
by eliminating the time spent downloading/copying the snapshot.
All instance files will be retained in the instance store backed by shared or 
non-shared storage on the compute node when an instance is shelved.

Please review this spec, and consider it for a spec freeze
exception.

[1] https://review.openstack.org/#/c/135387/

Thank You,

Abhishek Kekane

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Using DevStack for multi-node setup

2015-01-08 Thread Kashyap Chamarthy
On Mon, Jan 05, 2015 at 08:20:48AM -0500, Sean Dague wrote:
 On 01/03/2015 04:41 PM, Danny Choi (dannchoi) wrote:
  Hi,
  
  I’m using DevStack to deploy OpenStack on a multi-node setup:
  Controller, Network, Compute as 3 separate nodes
  
  Since the Controller node is stacked first, during which the Network
  node is not yet ready, it fails to create the router instance and the
  public network.
  Both have to be created manually.
  
  Is this the expected behavior?  Is there a workaround to have DevStack
  create them?
 
 The only way folks tend to run multinode devstack is Controller +
 Compute nodes. And that sequence of creating an all in one controller,
 plus additional compute nodes later, works.

Sean, I wonder if you have a pointer to an example CI gate job (assuming
there's one) for the above with Neutron networking?


-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Request Spec Freeze Exception for: Remove direct nova DB/API access in Scheduler Filters

2015-01-08 Thread Ed Leafe
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

This spec [1] has been on hold, as it was largely dependent on another
spec [2] being approved first, as it has very similar requirements.
Now that the latter has been approved, I am moving forward with the
former. It is part of the scheduler cleanup effort that has been
identified as a priority for Kilo.

Thanks for your consideration!

[1] https://review.openstack.org/#/c/138444/

[2]
https://github.com/openstack/nova-specs/blob/master/specs/kilo/approved/isolate-scheduler-db-aggregates.rst


- -- Ed Leafe
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)

iQIcBAEBAgAGBQJUrplRAAoJEKMgtcocwZqLYpMP/0qdi9jZIZUzsi8KmGlTVAr1
mDqbErF7n7wUQSckOqEEJ6IA4Nfz9JWNgKkbgciIpH5kREw0fxtO5Hu2bvMEF4Qi
JywC0YmLbdCySyP80q25rGF3eA6ECWTmPpoDlDqgnvzgbHc3nrb+DgG+6PKOxMv9
JU4rq6TKvY4hmEM7Fm9Utc5dq9lWqp0V7xWhgYJpFaHkYemTLqFGhfH87TPTwY89
LMB7mKTAqELOgRONPfhoq3MPpww6IM+5rHdlXUHz4bWMHjo5Hgnf/qkO5C1YE8QM
57nGXcp21xIsnaBiDv0/446YZbd91lQpmaaSkEpAqn+x1T+zI3V8iEvDL9TJcM3I
P6GBiqRFHzOnvJkMkTCxjkEb19O/zWo4kUGVamf72G7GwEi9I8k16JKOmeatTL9B
7Hbm6f1nTWJ+vT+bxIVFWs+nEM3wQXH2qEwJIj8XqpiMP+czCA8Dsmh5X5h5G+c8
DlRst4YXkUBnohUUU264R1irqZepsOyqUoAjS9FUVLEwy4BKiYbgthF9IEwdtDvd
B4quxqpquioRirpvzrzTg0DH6sjb/Zmk5TXNm4liVvRvq6a5EfTKkWGGMv+EAEaO
hmXVUGVmDbJnj/U9TPWyicl1Cpalw/MkNlTEoLIIH545JCJzi1Rja0eHTGbiiYX9
paa2ZblwmxLalMtQ6ksB
=M3cW
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without stopping services

2015-01-08 Thread McCann, Jack
+1 on need for this feature

The way I've thought about this is we need a mode that stops the *automatic*
scheduling of routers/dhcp-servers to specific hosts/agents, while allowing
manual assignment of routers/dhcp-servers to those hosts/agents, and where
any existing routers/dhcp-servers on those hosts continue to operate as normal.

The maintenance use case was mentioned: I want to evacuate routers/dhcp-servers
from a host before taking it down, and having the scheduler add new routers/dhcp
while I'm evacuating the node is a) an annoyance, and b) causes a service blip
when I have to right away move that new router/dhcp to another host.
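
The manual evacuation itself can already be scripted with the existing CLI (a
rough sketch; agent and router IDs are placeholders, and the dhcp-agent
equivalents work the same way):

# routers currently hosted by the agent being drained
neutron router-list-on-l3-agent <old-l3-agent-id>

# move one router to another agent
neutron l3-agent-router-remove <old-l3-agent-id> <router-id>
neutron l3-agent-router-add <new-l3-agent-id> <router-id>

But without a way to stop automatic scheduling, new routers can land on the
agent while this is in progress, which is exactly the gap described above.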

The other use case is adding a new host/agent into an existing environment.
I want to be able to bring the new host/agent up and into the neutron config, 
but
I don't want any of my customers' routers/dhcp-servers scheduled there until 
I've
had a chance to assign some test routers/dhcp-servers and make sure the new 
server
is properly configured and fully operational.

- Jack

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] Swift object-updater and container-updater

2015-01-08 Thread Minwoo Bae
Hi, to whom it may concern:


Jay Bryant and I would like to have the fixes for the Swift object-updater 
(https://review.openstack.org/#/c/125746/) and the Swift container-updater 
(
https://review.openstack.org/#/q/I7eed122bf6b663e6e7894ace136b6f4653db4985,n,z
) backported to Juno and then to Icehouse soon if possible. It's been in 
the queue for a while now, so we were wondering if we could have an 
estimated time for delivery? 

Icehouse is in security-only mode, but the container-updater issue may 
potentially be used as a fork-bomb, which presents security concerns. To 
further justify the fix, a problem of a similar nature 
(https://review.openstack.org/#/c/126371/, regarding the object-auditor) 
was successfully fixed in stable/icehouse. 

The object-updater issue may potentially have some security implications 
as well. 


Thank you very much! 

Minwoo
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without stopping services

2015-01-08 Thread Kyle Mestery
On Thu, Jan 8, 2015 at 8:49 AM, McCann, Jack jack.mcc...@hp.com wrote:

 +1 on need for this feature

 The way I've thought about this is we need a mode that stops the
 *automatic*
 scheduling of routers/dhcp-servers to specific hosts/agents, while allowing
 manual assignment of routers/dhcp-servers to those hosts/agents, and where
 any existing routers/dhcp-servers on those hosts continue to operate as
 normal.

 The maintenance use case was mentioned: I want to evacuate
 routers/dhcp-servers
 from a host before taking it down, and having the scheduler add new
 routers/dhcp
 while I'm evacuating the node is a) an annoyance, and b) causes a service
 blip
 when I have to right away move that new router/dhcp to another host.

 The other use case is adding a new host/agent into an existing environment.
 I want to be able to bring the new host/agent up and into the neutron
 config, but
 I don't want any of my customers' routers/dhcp-servers scheduled there
 until I've
 had a chance to assign some test routers/dhcp-servers and make sure the
 new server
 is properly configured and fully operational.

 These are all solid reasons for adding this, and it makes sense to me as
well. From a deployer's perspective, these would be a big win.

Given we have already filed a bug, hopefully we can get this addressed soon.

Thanks,
Kyle


 - Jack

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mirantis Openstack 5.1 environment issues

2015-01-08 Thread Mike Scherbakov
Hi Pavankumar,
this is a public mailing list for development questions regarding OpenStack
and open source stackforge projects. While Mirantis OpenStack heavily
relies on Fuel, an open source tool for installation and management of
OpenStack environments, this is not the right channel to discuss such issues,
per the policy [1].

Please address your questions to Mirantis support instead.

[1] https://wiki.openstack.org/wiki/Mailing_Lists#Future_Development

Thank you,

On Thu, Jan 8, 2015 at 4:28 PM, Dmitriy Shulyak dshul...@mirantis.com
wrote:


 1)  Verify network got failed with message Expected VLAN (not
 received) untagged at the interface Eth1 of controller and compute nodes.

  In our set-up, Eth1 is connected to the public network, which we disconnect
  while doing the deployment operation, as Fuel itself works as the DHCP
  server. We want to know whether this is a known issue in Fuel or a problem
  on our side, as we followed this prerequisite before doing the verify
  network operation.

   The error itself is correct - no traffic was received on eth1. But what is
  the expected behaviour from your point of view?

 2)  Eth1 interface in the Fuel UI is showing as down even after
 connecting back cables into the nodes.

  Before doing the OpenStack deployment from the Fuel node, we disconnected eth1
  from the controller and compute nodes as it is connected to the public network.
  The deployment was successful and we then reconnected the Eth1 of all
  controller/compute nodes. We are seeing an issue where eth1 is displayed as
  down in the Fuel UI, even though we reconnected the eth1 interface and we are
  able to ping the public network.

  We probably disabled interface information updates after a node is deployed,
  and IMHO we need to open a bug for this issue.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] request spec freeze exception for Enhance iSCSI multipath/failover support

2015-01-08 Thread Tomoki Sekiyama
Hi,

I have submitted 2 nova-specs [1][2] related to Cinder volumes iSCSI
multipath/failover improvement.

These specs are both for enabling Cinder to pass multiple iSCSI paths to
Nova. [1] is for multipath use-case, where Nova will establish iSCSI
sessions to all the given paths. [2] is for failover use-case, where Nova
will try to establish alternative path when it fails to establish main
path.

These specs were blocked to wait for corresponding cinder-specs approval
[3][4], which are now approved. Please consider these specs for spec
freeze exception.

nova-specs:
[1] https://review.openstack.org/#/c/134299/
[2] https://review.openstack.org/#/c/137468/

cinder-specs:
[3] https://review.openstack.org/#/c/136500/
[4] https://review.openstack.org/#/c/131502/

Regards,
Tomoki Sekiyama


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-08 Thread Carl Baldwin
On Wed, Jan 7, 2015 at 9:25 PM, Kevin Benton blak...@gmail.com wrote:
 If the new requirement is expressed in the neutron packages for the distro,
 wouldn't it be transparent to the operators?

I think the difficulty lies first with the distros.  If the required
new version isn't in an older version of the distro (e.g. Ubuntu
12.04), it may not be possible to update the distro packages with
the new dependency.

If the distros are unable to provide the upgrade nicely to the
operators, this is where it becomes difficult for operators, because they
would have to go out of band to upgrade.

Carl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][keystone] default region name

2015-01-08 Thread Brant Knudson
On Thu, Jan 8, 2015 at 7:28 AM, Steven Dake sd...@redhat.com wrote:

 On 01/07/2015 10:21 PM, Zhou, Zhenzan wrote:

 Hi,

 Does anyone know why TripleO uses regionOne as default region name? A
 comment in the code says it's the default keystone uses.


 Zhenzan,

 I was going to point you here:

 https://bugs.launchpad.net/keystone/+bug/1400589

 But I see you already had commented in that bug.


If we're expecting OpenStack to generally do case-sensitive comparisons,
then we shouldn't be using the default MySQL collation for char columns[1].
Maybe the text columns should be BINARY rather than CHAR, or use a
different collation.

mysql> describe region;
+------------------+--------------+------+-----+---------+-------+
| Field            | Type         | Null | Key | Default | Extra |
+------------------+--------------+------+-----+---------+-------+
| id               | varchar(255) | NO   | PRI | NULL    |       |
| description      | varchar(255) | NO   |     | NULL    |       |
| parent_region_id | varchar(255) | YES  |     | NULL    |       |
| extra            | text         | YES  |     | NULL    |       |
| url              | varchar(255) | YES  |     | NULL    |       |
+------------------+--------------+------+-----+---------+-------+
5 rows in set (0.00 sec)

mysql> select collation(id) from region;
+-----------------+
| collation(id)   |
+-----------------+
| utf8_general_ci |
+-----------------+
1 row in set (0.00 sec)

It's not easy to alter the collation when it's a foreign key.

mysql> alter table region convert to character set utf8 collate utf8_bin;
ERROR 1025 (HY000): Error on rename of './keystone/#sql-3ffc_32' to
'./keystone/region' (errno: 150)

[1] http://dev.mysql.com/doc/refman/5.7/en/case-sensitivity.html

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.utils 1.2.0 released

2015-01-08 Thread Doug Hellmann
The Oslo team is pleased to announce the release of
oslo.utils 1.2.0: Oslo Utility library

There are two big changes in this release:

1. We have moved the code out of the oslo namespace package as part of
https://blueprints.launchpad.net/oslo-incubator/+spec/drop-namespace-packages

2. The performance improvements to strutils.mask_password() to address timeout
errors in gate-tempest-dsvm-largeops-* jobs (bug #1408362).
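
A quick way to exercise the new namespace after upgrading (a local sanity
check only; the exact output shown is an assumption):

python -c 'from oslo_utils import strutils; print(strutils.mask_password("password=secret"))'
# expected to print something like: password=***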

For more details, please see the git log history below and
 http://launchpad.net/oslo.utils/+milestone/1.2.0

Please report issues through launchpad:
 http://bugs.launchpad.net/oslo.utils



Changes in /home/dhellmann/repos/openstack/oslo.utils  1.1.0..1.2.0

6e0b861 Improve performance of strutils.mask_password
ca76fdc Move files out of the namespace package
44f36e3 Add method is_valid_port in netutils
626368a Support non-lowercase uuids in is_uuid_like
45b470c Add 'secret_uuid' in _SANITIZE_KEYS for strutils
2081aa9 Imported Translations from Transifex
6741748 Workflow documentation is now in infra-manual
edfc2c7 Improve error reporting in _get_my_ipv4_address()

  diffstat (except docs and test files):

 CONTRIBUTING.rst   |   7 +-
 .../locale/de/LC_MESSAGES/oslo.utils-log-info.po   |  41 ++
 oslo.utils/locale/de/LC_MESSAGES/oslo.utils.po |  37 ++
 oslo.utils/locale/oslo.utils-log-info.pot  |  20 +-
 oslo/utils/__init__.py |  26 +
 oslo/utils/_i18n.py|  37 --
 oslo/utils/encodeutils.py  |  84 +--
 oslo/utils/excutils.py | 102 +---
 oslo/utils/importutils.py  |  62 +--
 oslo/utils/netutils.py | 257 +
 oslo/utils/reflection.py   | 197 +--
 oslo/utils/strutils.py | 249 +
 oslo/utils/timeutils.py| 199 +--
 oslo/utils/units.py|  27 +-
 oslo/utils/uuidutils.py|  33 +-
 oslo_utils/__init__.py |   0
 oslo_utils/_i18n.py|  37 ++
 oslo_utils/encodeutils.py  |  95 
 oslo_utils/excutils.py | 113 
 oslo_utils/importutils.py  |  73 +++
 oslo_utils/netutils.py | 286 ++
 oslo_utils/reflection.py   | 208 
 oslo_utils/strutils.py | 266 +
 oslo_utils/timeutils.py| 210 
 oslo_utils/units.py|  38 ++
 oslo_utils/uuidutils.py|  45 ++
 setup.cfg  |   1 +
 tests/fake/__init__.py |  23 -
 tests/test_importutils.py  |  27 +-
 tests/test_netutils.py |  34 +-
 tests/test_strutils.py |   8 +
 tests/test_utils.py|  28 -
 tests/test_uuidutils.py|   3 +
 tests/test_warning.py  |  61 +++
 tools/perf_test_mask_password.py   |  52 ++
 tox.ini|   2 +-
 55 files changed, 3673 insertions(+), 1335 deletions(-)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] volume / host coupling

2015-01-08 Thread Arne Wiebalck
Hmm. Not sure how widespread installations with multiple Ceph backends are 
where the
Cinder hosts have access to only one of the backends (which is what you assume, 
right?)
But, yes, if the volume type names are also the same (is that also needed for 
this to be a
problem?), this will be an issue ...

So, how about providing the information the scheduler does not have by 
introducing an
additional tag to identify ‘equivalent’ backends, similar to the way some 
people already
use the ‘host’ option?

Thanks!
 Arne


On 08 Jan 2015, at 15:11, Duncan Thomas 
duncan.tho...@gmail.com wrote:

The problem is that the scheduler doesn't currently have enough info to know 
which backends are 'equivalent' and which aren't. e.g. If you have 2 ceph 
clusters as cinder backends, they are indistinguishable from each other.

On 8 January 2015 at 12:14, Arne Wiebalck 
arne.wieba...@cern.ch wrote:
Hi,

The fact that volume requests (in particular deletions) are coupled with 
certain Cinder hosts is not ideal from an operational perspective:
if the node has meanwhile disappeared, e.g. retired, the deletion gets stuck 
and can only be unblocked by changing the database. Some
people apparently use the ‘host’ option in cinder.conf to make the hosts 
indistinguishable, but this creates problems in other places.

From what I see, even for backends that would support it (such as Ceph), Cinder 
currently does not provide means to ensure that any of
the hosts capable of performing a volume operation would be assigned the 
request in case the original/desired one is no more available,
right?

If that is correct, how about changing the scheduling of delete operation to 
use the same logic as create operations, that is pick any of the
available hosts, rather than the one which created a volume in the first place 
(for backends where that is possible, of course)?

Thanks!
 Arne

—
Arne Wiebalck
CERN IT
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Duncan Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? Jan 8 2015

2015-01-08 Thread Anne Gentle
Welcome to 2015, all. Here's the latest in the docs world.

The web design for the landing page is nearly done. It needs testing, so please
take a look at the output [0] in your favorite browser and let us know about
any issues on the review patch itself [1].

The web design plan [2] is to convert the End User Guide next. See
https://review.openstack.org/#/c/142437/ for the in-progress work converting
DocBook to RST for the End User Guide. We still need a Sphinx theme for the
new web design.

Whoop whoop, the upgrade guide for Icehouse to Juno is complete! [3] Thanks
Matt for all that testing. Thanks also to Darren for the excellent editing.

The clouddocs-maven-plugin project should be splitting out the transforms
and branding for OpenStack from that for Rackspace. Look for a patch soon.

Tom Fifield is recruiting application developers who can write, for a code
sprint aimed at producing a First App tutorial for OpenStack. See his post [4]
on the user-committee mailing list as well as the Fostering OpenStack Users
thread on the openstack-operators mailing list [5].

I want to talk about the install guides at the next doc team meeting, but
need to have Thomas Goirand there. Paging zigo. :)

Thanks all for the great reviews these last few weeks; no slowdown for
docs over the hols.

Onward to 2015!

Anne

0.
http://docs-draft.openstack.org/69/142369/40/check/gate-openstack-manuals-tox-doc-publish-checkbuild/e9b3b46//publish-docs/www/
1. https://review.openstack.org/#/c/142369/
2. https://review.openstack.org/#/c/139154/
3. http://docs.openstack.org/openstack-ops/content/ch_ops_upgrades.html
4.
http://lists.openstack.org/pipermail/user-committee/2014-December/000340.html
5.
http://lists.openstack.org/pipermail/openstack-operators/2014-December/005786.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] volume / host coupling

2015-01-08 Thread Jordan Pittier
Hi,
Some people apparently use the ‘host’ option in cinder.conf to make the
hosts indistinguishable, but this creates problems in other places.
I use shared storage mounted on several cinder-volume nodes, with host
flag set the same everywhere. Never ran into problems so far. Could you
elaborate on "this creates problems in other places", please?
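
For context, the setup is just the following on every cinder-volume node that
mounts the shared storage (a minimal sketch; crudini is used only for brevity,
and the value is an arbitrary shared identifier, not a real hostname):

crudini --set /etc/cinder/cinder.conf DEFAULT host cinder-volumes-shared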

Thanks !
Jordan

On Thu, Jan 8, 2015 at 3:40 PM, Arne Wiebalck arne.wieba...@cern.ch wrote:

  Hmm. Not sure how widespread installations with multiple Ceph backends
 are where the
 Cinder hosts have access to only one of the backends (which is what you
 assume, right?)
 But, yes, if the volume type names are also the same (is that also needed
 for this to be a
 problem?), this will be an issue ...

  So, how about providing the information the scheduler does not have by
 introducing an
 additional tag to identify ‘equivalent’ backends, similar to the way some
 people already
 use the ‘host’ option?

  Thanks!
  Arne


  On 08 Jan 2015, at 15:11, Duncan Thomas duncan.tho...@gmail.com wrote:

  The problem is that the scheduler doesn't currently have enough info to
 know which backends are 'equivalent' and which aren't. e.g. If you have 2
 ceph clusters as cinder backends, they are indistinguishable from each
 other.

 On 8 January 2015 at 12:14, Arne Wiebalck arne.wieba...@cern.ch wrote:

 Hi,

 The fact that volume requests (in particular deletions) are coupled with
 certain Cinder hosts is not ideal from an operational perspective:
 if the node has meanwhile disappeared, e.g. retired, the deletion gets
 stuck and can only be unblocked by changing the database. Some
 people apparently use the ‘host’ option in cinder.conf to make the hosts
 indistinguishable, but this creates problems in other places.

 From what I see, even for backends that would support it (such as Ceph),
 Cinder currently does not provide means to ensure that any of
 the hosts capable of performing a volume operation would be assigned the
 request in case the original/desired one is no more available,
 right?

 If that is correct, how about changing the scheduling of delete operation
 to use the same logic as create operations, that is pick any of the
 available hosts, rather than the one which created a volume in the first
 place (for backends where that is possible, of course)?

 Thanks!
  Arne

 —
 Arne Wiebalck
 CERN IT
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Duncan Thomas
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Precursor to Phase 1 Convergence

2015-01-08 Thread vishnu
Hi Zane,

I was wondering if we could push changes relating to backup stack removal
and to not load resources as part of stack. There needs to be a capability
to restart jobs left over by dead engines.

something like heat stack-operation --continue [git rebase --continue]

Had a chat with shady regarding this. IMO this would be a valuable
enhancement. Notification based lead sharing can be taken up upon
completion.

Your thoughts.


-Vishnu
irc: ckmvishnu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] default region name

2015-01-08 Thread Ben Nemec
On 01/08/2015 06:38 AM, Zhou, Zhenzan wrote:
 Thank you, Derek.
 So we could also change TripleO register-endpoint/setup-endpoint to use 
 RegionOne.

Our policy in TripleO is to use project defaults whenever possible, in
the interest of making sure our project defaults are sane for all users.
 If we find one that's bad we want to push for a fix in the project
rather than fixing it downstream in TripleO.

That said, this is a tricky situation.  As noted in the Keystone review,
changing the default for the region has the potential to break existing
users of the client.  If we're going to change the default we need a
deprecation cycle to give people notice that their stuff is going to
break.  I do think it's something we should do though, because the
mismatch in defaults between the Keystone client and other clients is
also a problem, and arguably a bigger one since it will cause issues for
all new users, who are least likely to be able to figure out why they
can't talk to their cloud.

So I would prefer not to change this in TripleO and instead proceed on
rationalizing the defaults in the clients.  Step one seems to be picking
a common default and adding deprecation warnings to any client not
currently using that default.

-Ben

 
 BR
 Zhou Zhenzan
 -Original Message-
 From: Derek Higgins [mailto:der...@redhat.com] 
 Sent: Thursday, January 8, 2015 5:53 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [TripleO] default region name
 
 On 08/01/15 05:21, Zhou, Zhenzan wrote:
 Hi,

 Does anyone know why TripleO uses regionOne as default region name? A 
 comment in the code says it's the default keystone uses. 
 But I cannot find any regionOne in keystone code. Devstack uses 
 RegionOne by default and I do see lots of RegionOne in keystone code.
 
 Looks like this has been changing in various places
 https://bugs.launchpad.net/keystone/+bug/1252299
 
 I guess the default the code is referring to is in keystoneclient
 http://git.openstack.org/cgit/openstack/python-keystoneclient/tree/keystoneclient/v2_0/shell.py#n509
 
 
 

 stack@u140401:~/openstack/tripleo-incubator$ grep -rn regionOne * 
 scripts/register-endpoint:26:REGION=regionOne # NB: This is the default 
 keystone uses.
 scripts/register-endpoint:45:echo -r, --region  -- Override the 
 default region 'regionOne'.
 scripts/setup-endpoints:33:echo -r, --region-- Override 
 the default region 'regionOne'.
 scripts/setup-endpoints:68:REGION=regionOne #NB: This is the keystone 
 default.
 stack@u140401:~/openstack/tripleo-incubator$ grep -rn regionOne 
 ../tripleo-heat-templates/ 
 stack@u140401:~/openstack/tripleo-incubator$  grep -rn regionOne 
 ../tripleo-image-elements/ 
 ../tripleo-image-elements/elements/tempest/os-apply-config/opt/stack/t
 empest/etc/tempest.conf:10:region = regionOne 
 ../tripleo-image-elements/elements/neutron/os-apply-config/etc/neutron
 /metadata_agent.ini:3:auth_region = regionOne 
 stack@u140401:~/openstack/keystone$ grep -rn RegionOne * | wc -l
 130
 stack@u140401:~/openstack/keystone$ grep -rn regionOne * | wc -l
 0

 Another question is that TripleO doesn't export OS_REGION_NAME in 
 stackrc.  So when someone sources the devstack rc file to do something and then 
 sources the TripleO rc file again, OS_REGION_NAME will still be the one set by 
 the devstack rc file.
 I know this may be strange, but isn't it better to use the same default value?
 
 We should probably add that to our various rc files; not having it there is 
 probably the reason we used keystoneclient's default in the first place.
 

 Thanks a lot.

 BR
 Zhou Zhenzan

 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] dropping namespace packages

2015-01-08 Thread Ihar Hrachyshka

On 01/05/2015 04:51 PM, Doug Hellmann wrote:

As each library is released, we will send release notes to this list, as usual. At that point 
the Oslo liaisons should start planning patches to change imports in their projects from 
“oslo.foo” to “oslo_foo”. The old imports should still work for now, but new 
features will not be added to the old namespace, so over time it will be necessary to make the 
changes anyway. We are likely to remove the old namespace package completely during the next 
release cycle, but that hasn't been decided.


Making the switch probably requires us to add a hacking rule that 
would forbid the old namespace-based imports, right? Do we by chance have 
such a rule implemented anywhere?
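
For reference, a hacking-style check for this could be as small as the sketch
below; the check number, message, and regex are made up rather than an
existing rule:

    import re

    _OLD_NAMESPACE_RE = re.compile(
        r"(from\s+oslo\.\w+\s+import)|(import\s+oslo\.\w+)|"
        r"(from\s+oslo\s+import\s+)")

    def check_oslo_namespace_imports(logical_line):
        # Flag 'oslo.*' namespace imports and point at the 'oslo_*' form.
        if _OLD_NAMESPACE_RE.match(logical_line):
            yield (0, "Hxxx: use 'oslo_foo' style imports instead of the "
                      "deprecated 'oslo.foo' namespace package")

Projects would then register it like any other local hacking check and run it
through flake8.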


/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] volume / host coupling

2015-01-08 Thread Duncan Thomas
There are races, e.g. doing a snapshot and a delete at the same time, or a
backup and a delete at the same time. The race windows are pretty tight on
Ceph, but they are there, and it is worse on some other backends.

On 8 January 2015 at 17:50, Jordan Pittier jordan.pitt...@scality.com
wrote:

 Hi,
 Some people apparently use the ‘host’ option in cinder.conf to make the
 hosts indistinguishable, but this creates problems in other places.
 I use shared storage mounted on several cinder-volume nodes, with the host
 flag set the same everywhere, and have never run into problems so far. Could
 you elaborate on "this creates problems in other places", please?

 Thanks !
 Jordan

 On Thu, Jan 8, 2015 at 3:40 PM, Arne Wiebalck arne.wieba...@cern.ch
 wrote:

  Hmm. Not sure how widespread installations with multiple Ceph backends
 are where the
 Cinder hosts have access to only one of the backends (which is what you
 assume, right?)
 But, yes, if the volume type names are also the same (is that also needed
 for this to be a
 problem?), this will be an issue ...

  So, how about providing the information the scheduler does not have by
 introducing an
 additional tag to identify ‘equivalent’ backends, similar to the way some
 people already
 use the ‘host’ option?

  Thanks!
  Arne


  On 08 Jan 2015, at 15:11, Duncan Thomas duncan.tho...@gmail.com wrote:

  The problem is that the scheduler doesn't currently have enough info to
 know which backends are 'equivalent' and which aren't, e.g. if you have 2
 Ceph clusters as Cinder backends, they are indistinguishable from each
 other.

 On 8 January 2015 at 12:14, Arne Wiebalck arne.wieba...@cern.ch wrote:

 Hi,

 The fact that volume requests (in particular deletions) are coupled with
 certain Cinder hosts is not ideal from an operational perspective:
 if the node has meanwhile disappeared, e.g. retired, the deletion gets
 stuck and can only be unblocked by changing the database. Some
 people apparently use the ‘host’ option in cinder.conf to make the hosts
 indistinguishable, but this creates problems in other places.

 From what I see, even for backends that would support it (such as Ceph),
 Cinder currently does not provide a means to ensure that any of
 the hosts capable of performing a volume operation would be assigned the
 request in case the original/desired one is no longer available,
 right?

 If that is correct, how about changing the scheduling of delete
 operations to use the same logic as create operations, that is, pick any
 of the available hosts rather than the one which created the volume in the
 first place (for backends where that is possible, of course)?

 Thanks!
  Arne

 —
 Arne Wiebalck
 CERN IT




-- 
Duncan Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] volume / host coupling

2015-01-08 Thread Arne Wiebalck
Hi Jordan,

As Duncan pointed out, there may be issues if you have multiple backends
and indistinguishable nodes (which you could probably avoid by separating
the hosts per backend and using a different “host” flag for each set).

But also if you have only one backend: the “host” flag will enter the ‘services’
table and render the host column more or less useless. I imagine this has an
impact on things using the services table, such as “cinder-manage” (what
does your “cinder-manage service list” output look like? :-), and it may make it
harder to tell whether the individual services are doing OK, or to control them.

I haven’t run Cinder with identical “host” flags in production, but I imagine
there may be other areas which are not happy about indistinguishable hosts.

Arne
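
To make the 'equivalent backends' idea upthread a little more concrete: a
delete could in principle be handed to any live service that reports the same
equivalence tag. The following is purely a sketch under that assumption (the
field names and the tag itself are invented, this is not existing Cinder
code):

    def pick_host_for_delete(backend_tag, services):
        # 'services' is assumed to look like rows from the services table,
        # e.g. {'host': 'cinder-1@rbd', 'backend_tag': 'ceph-cluster-a',
        # 'up': True}, where 'backend_tag' is the hypothetical equivalence
        # marker discussed above.
        candidates = [s for s in services
                      if s['backend_tag'] == backend_tag and s['up']]
        if not candidates:
            raise RuntimeError("no live host can reach backend %s"
                               % backend_tag)
        # Any equivalent host will do; pick the first live one.
        return candidates[0]['host']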


On 08 Jan 2015, at 16:50, Jordan Pittier 
jordan.pitt...@scality.com wrote:

Hi,
Some people apparently use the ‘host’ option in cinder.conf to make the hosts 
indistinguishable, but this creates problems in other places.
I use shared storage mounted on several cinder-volume nodes, with host flag 
set the same everywhere. Never ran into problems so far. Could you elaborate on 
this creates problems in other places please ?

Thanks !
Jordan

On Thu, Jan 8, 2015 at 3:40 PM, Arne Wiebalck 
arne.wieba...@cern.ch wrote:
Hmm. Not sure how widespread installations with multiple Ceph backends are 
where the
Cinder hosts have access to only one of the backends (which is what you assume, 
right?)
But, yes, if the volume type names are also the same (is that also needed for 
this to be a
problem?), this will be an issue ...

So, how about providing the information the scheduler does not have by 
introducing an
additional tag to identify ‘equivalent’ backends, similar to the way some 
people already
use the ‘host’ option?

Thanks!
 Arne


On 08 Jan 2015, at 15:11, Duncan Thomas 
duncan.tho...@gmail.com wrote:

The problem is that the scheduler doesn't currently have enough info to know 
which backends are 'equivalent' and which aren't. e.g. If you have 2 ceph 
clusters as cinder backends, they are indistinguishable from each other.

On 8 January 2015 at 12:14, Arne Wiebalck 
arne.wieba...@cern.ch wrote:
Hi,

The fact that volume requests (in particular deletions) are coupled with 
certain Cinder hosts is not ideal from an operational perspective:
if the node has meanwhile disappeared, e.g. retired, the deletion gets stuck 
and can only be unblocked by changing the database. Some
people apparently use the ‘host’ option in cinder.conf to make the hosts 
indistinguishable, but this creates problems in other places.

From what I see, even for backends that would support it (such as Ceph), Cinder 
currently does not provide means to ensure that any of
the hosts capable of performing a volume operation would be assigned the 
request in case the original/desired one is no more available,
right?

If that is correct, how about changing the scheduling of delete operation to 
use the same logic as create operations, that is pick any of the
available hosts, rather than the one which created a volume in the first place 
(for backends where that is possible, of course)?

Thanks!
 Arne

—
Arne Wiebalck
CERN IT

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo] Making diskimage-builder install from forked repo?

2015-01-08 Thread Clint Byrum
Excerpts from Chris Jones's message of 2015-01-08 10:16:14 -0800:
 Hi
 
  On 8 Jan 2015, at 17:58, Clint Byrum cl...@fewbar.com wrote:
  
  Excerpts from Steven Hardy's message of 2015-01-08 09:37:55 -0800:
  So you can probably setup a devpi instance locally, and upload the
  commits you want to it, and then build the image with the 'pypi' element
 
 Given that we have a pretty good release frequency of all our tools, is this 
 burden on devs/testers actually justified at this point, versus the potential 
 consistency we could have with source repo flexibility in other openstack 
 components?
 

We've been discussing this in #tripleo and I think you're right. I think we
can solve it by just switching to source-repositories and providing a
relatively simple tool to set release tags when people only want released
versions.
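
For what it's worth, the source-repositories element already honours
per-element DIB_REPOLOCATION_<name> / DIB_REPOREF_<name> overrides, so a small
build wrapper can pin an element to a fork or branch today. A rough sketch;
the element name, fork URL, branch, and element list are purely illustrative:

    import os
    import subprocess

    env = dict(os.environ)
    # 'heat' must match the name used in the element's source-repository-*
    # file; the fork and branch below are examples only.
    env['DIB_REPOLOCATION_heat'] = 'https://git.example.org/me/heat.git'
    env['DIB_REPOREF_heat'] = 'my-feature-branch'

    subprocess.check_call(
        ['disk-image-create', '-o', 'my-test-image',
         'ubuntu', 'source-repositories', 'heat'],
        env=env)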

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Host health monitoring

2015-01-08 Thread Joe Gordon
On Sun, Jan 4, 2015 at 7:08 PM, Andrew Beekhof abeek...@redhat.com wrote:


  On 9 Dec 2014, at 1:20 am, Roman Dobosz roman.dob...@intel.com wrote:
 
  On Wed, 3 Dec 2014 08:44:57 +0100
  Roman Dobosz roman.dob...@intel.com wrote:
 
  I've just started to work on the topic of detecting whether a host is alive
  or not: https://blueprints.launchpad.net/nova/+spec/host-health-monitoring
 
  I'll appreciate any comments :)
 
  I've submitted another blueprint, which is closely bound to the previous
 one:
 
 https://blueprints.launchpad.net/nova/+spec/pacemaker-servicegroup-driver
 
  The idea behind those two blueprints is to enable Nova to be aware of
 host status, not only of the services that run on it. Bringing Pacemaker in
 as a servicegroup driver will provide us with two things: fencing and
 reliable information about host state, so we can avoid situations where
 some actions misinterpret information like service state as host state.
 
  Comments?


I would rather move the servicegroup concept to use tooz and put things
like Pacemaker in there (https://review.openstack.org/138607)
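
For anyone not familiar with tooz: its group membership API covers roughly
what a servicegroup driver needs (join a group, heartbeat, list live
members). A rough sketch follows; the backend URL and ids are illustrative,
and the exact calls should be double-checked against the tooz docs:

    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'memcached://127.0.0.1:11211',   # deployment-specific backend
        b'compute-host-1')               # this host's member id
    coordinator.start()

    group = b'nova-compute'
    try:
        coordinator.create_group(group).get()
    except coordination.GroupAlreadyExist:
        pass
    coordinator.join_group(group).get()

    # Called periodically so other members keep seeing this host as alive.
    coordinator.heartbeat()

    # Any member (e.g. a scheduler) can ask which hosts are currently up.
    print(coordinator.get_members(group).get())

    coordinator.leave_group(group).get()
    coordinator.stop()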



 Sounds like an excellent idea. Is there code for these blueprints? If so,
 how do I get to see it?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Kilo Release Status - just passed spec freeze

2015-01-08 Thread John Garbutt
Hi all,

With the release of kilo-1 we have frozen the approval of new specs for kilo.

This is to make sure we can focus on our agreed kilo priorities:
http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html

As always, there are exceptions, here is how:

1) email the ML with [nova] Request Spec Freeze Exception in the subject
2) nova-drivers will review the spec in gerrit, as normal
3) either the spec gets a -2 for kilo, or the spec gets approved, in
the usual way

Hard deadline for spec approval is 22nd January
i.e. two weeks before kilo-2
https://wiki.openstack.org/wiki/Kilo_Release_Schedule

nova-drivers are encouraged to give priority to specs on the priorities
list, and to try to ensure there is code that looks ready for review for
any non-priority specs that are approved.

For more context, see the meeting logs from today's meeting:
http://eavesdrop.openstack.org/meetings/nova/2015/nova.2015-01-08-14.00.log.html


As a reminder, some future deadlines for kilo:

22nd Jan

All blueprints that are not NeedsCodeReview (i.e. with all patches ready
for review) will be deferred to the next release, unless they are
related to the kilo priorities.


5th Feb (kilo-2)

Feature Freeze for all non-priority blueprints. There will be exceptions
for a small number of blueprints, assuming there is review bandwidth.

Bugs will continue to be merged, as normal.


5th March

FeatureProposalFreeze for all code, at which point we fall back in line
with the usual release pattern.


Hopefully that helps clear things up. Catch me on IRC for questions.

Thanks,
johnthetubaguy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

