[openstack-dev] [sahara] client release 0.7.7

2015-02-13 Thread Sergey Lukjanov
Hi folks,

we have a new python-saharaclient release with SSL and indirect access
support, as well as a bunch of bug fixes.

https://launchpad.net/python-saharaclient/+milestone/0.7.7

Bump in global requirements: https://review.openstack.org/#/c/155428/

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


[openstack-dev] [Fuel][Plugins] Fuel plugin builder tagging and pypi publishing

2015-02-13 Thread Evgeniy L
Hi,

Since fuel plugins are going to be moved [1] from the fuel-plugins
repository [2], the only project left there will be the fuel plugin
builder, together with the example plugins used for fuel plugin builder
testing.

Currently the fuel plugin builder has its own release cycle, but we don't
have tags for these versions. The suggestion is that, after the plugins are
removed from the repository, we push tags for the previous fpb releases [3].

Tags can also help with another problem: currently I publish new versions
on pypi manually; with tags in place, we could automatically build and
publish a new version whenever a tag is pushed (sketch below).
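
A rough sketch of the proposed flow; the version number, commit hash and
remote name below are hypothetical, not actual fpb releases:

    # tag a previous fpb release retroactively at the commit that released it
    git tag -a 1.2.0 abc1234 -m "fuel-plugin-builder 1.2.0"
    git push gerrit 1.2.0
    # with tag-triggered CI, publishing then reduces to something like:
    python setup.py sdist bdist_wheel
    twine upload dist/*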

What do you think about that?

Thanks,

[1] https://review.openstack.org/#/c/151143/
[2] https://github.com/stackforge/fuel-plugins
[3]
https://github.com/stackforge/fuel-plugins/blob/master/fuel_plugin_builder/CHANGELOG.md


Re: [openstack-dev] [Fuel][Plugins] Fuel plugin builder tagging and pypi publishing

2015-02-13 Thread Sebastian Kalinowski
+1 for the whole idea; I've been waiting for this since the first release
of fuel-plugin-builder.

Without tags it's hard to say which commit is included in a PyPI release.
Automating the release process is also a really nice thing and makes it
more transparent.

2015-02-13 9:59 GMT+01:00 Evgeniy L e...@mirantis.com:

 Hi,

 Since fuel plugins are going to be moved [1] from the fuel-plugins
 repository [2], the only project left there will be the fuel plugin
 builder, together with the example plugins used for fuel plugin builder
 testing.

 Currently the fuel plugin builder has its own release cycle, but we don't
 have tags for these versions. The suggestion is that, after the plugins
 are removed from the repository, we push tags for the previous fpb
 releases [3].

 Tags can also help with another problem: currently I publish new versions
 on pypi manually; with tags in place, we could automatically build and
 publish a new version whenever a tag is pushed.

 What do you think about that?

 Thanks,

 [1] https://review.openstack.org/#/c/151143/
 [2] https://github.com/stackforge/fuel-plugins
 [3]
 https://github.com/stackforge/fuel-plugins/blob/master/fuel_plugin_builder/CHANGELOG.md





[openstack-dev] [horizon] Stepping down as a Horizon core reviewer

2015-02-13 Thread Julie Pichon
Hi folks,

In the spirit of stepping down considerately [1], I'd like to ask to be
removed from the core and drivers team for Horizon and associated
projects. I'm embarking on some fun adventures far far away and won't
have any time to spare for OpenStack for a while.

I removed my name from places in the wiki where I was a point of
contact. Some activities I was involved with are left in good hands
already, but that does leave a couple more Cross-Project Liaisons [2]
spots now empty for Horizon. If you're interested in helping out as the
Horizon docs liaison (answer questions from the docs team, bonus points
for following the horizon tag in the openstack-manuals tracker) or QA
liaison (particular interest in integration testing and seeing
where/how our test suite might fit in with Tempest someday?), please
step up! Don't let all the tasks fall back onto the PTL :-)

Working on OpenStack and Horizon has been enlightening and
humbling. Thank you for the ride everyone, and wishing y'all all the
best! :)

Julie

[1] http://www.openstack.org/legal/community-code-of-conduct/
[2] https://wiki.openstack.org/wiki/CrossProjectLiaisons



Re: [openstack-dev] [Fuel][Plugins] Versioning, branching, tagging

2015-02-13 Thread Evgeniy L
Hi Andrey,

I agree that it's useful to know the compatibility between releases and
previous versions of plugins, but I'm not 100% sure that tag comments are
the best place to keep such information. Does it make sense to use a
Changelog.txt file for it instead?

Regarding versioning itself, Semantic Versioning is going to be mandatory
in the upcoming releases because of the plugins patching story [1].

Thanks,

[1] https://review.openstack.org/#/c/151256/

On Thu, Feb 12, 2015 at 7:31 PM, Andrey Epifanov aepifa...@mirantis.com
wrote:


 Hello!

 I would like to discuss which policy we want to recommend for versioning,
 branching and tagging of FUEL plugins, and how it should correlate with
 FUEL/MOS versions/releases. According to the Fuel Plug-in Guide
 (http://docs.mirantis.com/openstack/fuel/fuel-6.0/plugin-dev.html#metadata-yaml)
 we recommend using Semantic Versioning 2.0.0 (http://semver.org/) for
 plugin versioning. IMO, this is a good idea, because it gives us a
 development process and release cycle independent of FUEL/MOS. But very
 often we will need to find out which versions of FUEL/MOS are supported
 by each version of a plugin, and vice versa. Of course, we can check out
 each version and look into metadata.yaml, which contains a list of
 supported FUEL/MOS releases, but that is not very convenient. For this
 purpose we can store this list in the commit message for each tag. That
 allows us to get the dependencies between plugin versions and FUEL/MOS
 releases quickly and easily, using a simple command: git tag -n10. It
 would also be very convenient for the CI process; a sketch follows.
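
 A hypothetical example of the proposed approach (plugin name, versions and
 supported releases are made up for illustration):

     git tag -a 1.0.1 -m 'fuel-plugin-x 1.0.1; supports: Fuel 6.0, 6.1'
     git tag -n10    # lists each tag together with its compatibility note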

 What do you think about it?



 Thanks and Best Regards, Andrey.






Re: [openstack-dev] [cinder] Etherpad for volume replication created ...

2015-02-13 Thread Danny Al-Gaaf
Hi Jay,

do you have a link to the etherpad?

Danny

Am 13.02.2015 um 05:54 schrieb Jay S. Bryant:
 All,
 
 Several members of the Cinder team and I were discussing the
 current state of volume replication while trying to figure out the
 best way to resolve bug 1383524 [1].  The outcome of the discussion
 was a decision to hold off on integrating volume replication
 support for additional drivers.
 
 I took notes from the discussion and have put them in the etherpad.
 We can use that, first thing in L, as a starting point to rework
 and fix replication support.
 
 Please let me know if you have any questions and feel free to
 update the etherpad with additional thoughts.
 
 Thanks! Jay
 
 
 [1] https://bugs.launchpad.net/cinder/+bug/1383524 -- Periodic update
 replication status causing issues
 




Re: [openstack-dev] [neutron][security][rootwrap] Proposal to replace rootwrap/sudo with privsep helper process (for neutron, but others too)

2015-02-13 Thread Angus Lees
On Fri Feb 13 2015 at 5:45:36 PM Eric Windisch e...@windisch.us wrote:



 from neutron.agent.privileged.commands import ip_lib as priv_ip

 def foo():
     # Need to create a new veth interface pair - that usually
     # requires root/NET_ADMIN
     priv_ip.CreateLink('veth', 'veth0', peer='veth1')

 Because we now have elevated privileges directly (on the privileged
 daemon side) without having to shell out through sudo, we can do all sorts
 of nicer things like just using netlink directly to configure networking.
 This avoids the overhead of executing subcommands, the ugliness (and
 danger) of generating command lines and regex-parsing output, and makes us
 less reliant on specific versions of command line tools (since the kernel
 API should be very stable).


 One of the advantages of spawning a new process is being able to use flags
 to clone(2) and to set capabilities. This basically amounts to creating
 containers, by some definition. Anything you have in a privileged daemon
 or privileged process ideally should reduce its privilege set for any
 operation it performs. That might mean it clones itself and executes
 Python, or it may execvp an executable, but either way, the new process
 would have less than full privilege.

 For instance, writing a file might require root access, but does not need
 the ability to load kernel modules. Changing network interfaces does not
 need access to the filesystem, any more than changes to the filesystem need
 access to the network. The capabilities and namespaces mechanisms resolve
 these security conundrums and simplify applying the principle of least
 privilege.


Agreed wholeheartedly, and I'd appreciate your thoughts on how I'm using
capabilities in this change.  The privsep daemon limits itself to a
particular set of capabilities (and drops root). The assumption is that
most OpenStack services commonly need the same small set of capabilities to
perform their duties (neutron - net_admin+sys_admin for example), so it
makes sense to reuse the same privileged process.

If we have a single service that somehow needs to frequently use a broad
swathe of capabilities then we might want to break it up further somehow
between the different internal aspects (multiple privsep helpers?) - but is
there such a case?  There are also no problems with mixing privsep for
frequent operations with the existing sudo/rootwrap approach for
rare/awkward cases.

 - Gus
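
For readers wanting to see the shape of the pattern: a minimal, hypothetical
sketch, not the proposed implementation; the whitelist and operation names
are made up, a real daemon would drop to a limited capability set rather than
just fork, and it would speak netlink directly instead of shelling out:

    import json
    import os
    import socket

    def run_helper(sock):
        # privileged side: dispatch only whitelisted operations
        allowed = {
            'create_veth': lambda a, b: os.system(
                'ip link add %s type veth peer name %s' % (a, b)),
        }
        while True:
            data = sock.recv(4096)
            if not data:
                return
            req = json.loads(data.decode())
            func = allowed.get(req['op'])
            result = func(*req['args']) if func else 'denied'
            sock.sendall(json.dumps({'result': result}).encode())

    def start():
        parent, child = socket.socketpair()
        if os.fork() == 0:       # child keeps the elevated privileges
            parent.close()
            run_helper(child)
            os._exit(0)
        child.close()
        return parent            # unprivileged side keeps this end

    conn = start()
    conn.sendall(json.dumps({'op': 'create_veth',
                             'args': ['veth0', 'veth1']}).encode())
    print(json.loads(conn.recv(4096).decode()))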


Re: [openstack-dev] [Fuel][Plugins] Fuel plugin builder tagging and pypi publishing

2015-02-13 Thread Igor Kalnitsky
+1.

On Fri, Feb 13, 2015 at 11:41 AM, Sebastian Kalinowski
skalinow...@mirantis.com wrote:
 +1 for the whole idea; I've been waiting for this since the first release
 of fuel-plugin-builder.

 Without tags it's hard to say which commit is included in a PyPI release.
 Automating the release process is also a really nice thing and makes it
 more transparent.

 2015-02-13 9:59 GMT+01:00 Evgeniy L e...@mirantis.com:

 Hi,

 Since fuel plugins are going to be moved [1] from the fuel-plugins
 repository [2], the only project left there will be the fuel plugin
 builder, together with the example plugins used for fuel plugin builder
 testing.

 Currently the fuel plugin builder has its own release cycle, but we don't
 have tags for these versions. The suggestion is that, after the plugins
 are removed from the repository, we push tags for the previous fpb
 releases [3].

 Tags can also help with another problem: currently I publish new versions
 on pypi manually; with tags in place, we could automatically build and
 publish a new version whenever a tag is pushed.

 What do you think about that?

 Thanks,

 [1] https://review.openstack.org/#/c/151143/
 [2] https://github.com/stackforge/fuel-plugins
 [3]
 https://github.com/stackforge/fuel-plugins/blob/master/fuel_plugin_builder/CHANGELOG.md








[openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread Kuvaja, Erno
Hi all,

We have reviews that are almost a year old (since their last update) still
in the queue for glance. A discussion was initiated at yesterday's meeting
about adopting an abandon policy for stale changes.

The documentation can be found at
https://etherpad.openstack.org/p/glance-cleanout-of-inactive-PS and any
input would be appreciated. For your convenience, the current state is below:

Glance - Cleanout of inactive change proposals from review


We should start cleaning out our review list to keep the focus on changes
that have momentum. Nova is currently abandoning change proposals that have
been inactive for 4 weeks.

Proposed action (if all of the following are true, abandon the PS):

  1.  The PS has a -1/-2 (including Jenkins)

  2.  The change is proposed to glance, glance_store or python-glanceclient;
specs should not be abandoned as their workflow is much slower

  3.  No activity for 28 days from the Author/Owner after the -1/-2

  4.  A query has been made to the owner to update the patch between 5 and
10 days before abandoning (comment on the PS/bug or something similar)


  *   Let's be smart on this. Flexibility is good during holiday seasons,
feature freeze, etc.
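
A sketch of how the candidate set could be listed with Gerrit's ssh query
interface; the operators are assumed from Gerrit 2.x and should be
double-checked against the server's version before relying on them:

    ssh -p 29418 review.openstack.org gerrit query --format=JSON \
        status:open age:4w \
        '(label:Code-Review=-1 OR label:Code-Review=-2 OR label:Verified=-1)' \
        '(project:openstack/glance OR project:openstack/glance_store OR project:openstack/python-glanceclient)'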




-  Erno


Re: [openstack-dev] What should openstack-specs review approval rules be ?

2015-02-13 Thread Flavio Percoco

On 28/01/15 14:25 +0100, Thierry Carrez wrote:

Hi everyone,

When we first introduced the cross-project specs (specs for things that
may potentially affect all OpenStack projects, or where more convergence
is desirable), we defaulted to rather simple rules for approval:

- discuss the spec in a cross-project meeting
- let everyone +1/-1 and seek consensus



I don't mind having the TC leading the direction of cross-project
specs. However, I think giving +1/-1 to the rest of the community has
some extra value.

Besides letting people express an opinion easily, it makes clear at first
glance what the general feeling about a spec is. Furthermore, in very
active reviews, there's a chance that some comments could be,
unintentionally, skipped due to the activity on the spec.

Having an explicit, but not definitive, vote for the community is
important and also inclusive.

Flavio


- wait for the affected PTLs to vote
- wait even more
- tally the votes (and agree a consensus is reached) during a TC meeting
- give +2/Workflow+1 to all TC members to let them push the Go button

However, the recent approval of the Log guidelines
(https://review.openstack.org/#/c/132552/) revealed that those may not
be the rules we are looking for.

Sean suggested that only the TC chair should be able to workflow+1 to
avoid accidental approval.

Doug suggested that we should use the TC voting rules (7 YES, or at
least 5 YES and more YES than NO) on those.

In yesterday's meeting, Sean suggested that TC members should still have
a -2-like veto (if there is no TC consensus on the fact that community
consensus is reached, there probably is no real consensus).

There was little time to discuss this more in yesterday's TC meeting, so
I took the action to push that discussion to the ML.

So what is it we actually want for that repository? In a world where
Gerrit can do anything, what would you like to have?

Personally, I want our technical community in general, and our PTLs/CPLs
in particular, to be able to record their opinion on the proposed
cross-project spec. Then, if consensus is reached, the spec should be
approved.

This /could/ be implemented in Gerrit by giving +1/-1 to everyone to
express technical opinion and +2/-2 to TC members to evaluate consensus
(with Workflow+1 to the TC chair to mark when all votes are collected
and consensus is indeed reached).

Any other personal opinions on how you'd like reviews in this repository
to be run?

--
Thierry Carrez (ttx)



--
@flaper87
Flavio Percoco




Re: [openstack-dev] [neutron][security][rootwrap] Proposal to replace rootwrap/sudo with privsep helper process (for neutron, but others too)

2015-02-13 Thread Miguel Ángel Ajo
We have an ongoing effort in neutron to move to rootwrap-daemon:

https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/rootwrap-daemon-mode,n,z

This speeds up repeated system calls and lets us spawn daemons inside
namespaces.

I still have to read up on the good and bad points of privsep.

The advantage of rootwrap-daemon is that we don’t need to change all our
networking libraries across neutron, and we kill the sudo/rootwrap spawn for
every call, yet keep the rootwrap permission granularity.

Miguel Ángel Ajo


On Friday, 13 de February de 2015 at 10:54, Angus Lees wrote:

 On Fri Feb 13 2015 at 5:45:36 PM Eric Windisch e...@windisch.us 
 (mailto:e...@windisch.us) wrote:

   from neutron.agent.privileged.commands import ip_lib as priv_ip

   def foo():
       # Need to create a new veth interface pair - that usually
       # requires root/NET_ADMIN
       priv_ip.CreateLink('veth', 'veth0', peer='veth1')

   Because we now have elevated privileges directly (on the privileged 
   daemon side) without having to shell out through sudo, we can do all 
   sorts of nicer things like just using netlink directly to configure 
   networking.  This avoids the overhead of executing subcommands, the
   ugliness (and danger) of generating command lines and regex-parsing
   output, and makes us less reliant on specific versions of command line
   tools (since the kernel API should be very stable).
   
  One of the advantages of spawning a new process is being able to use flags
  to clone(2) and to set capabilities. This basically amounts to creating
  containers, by some definition. Anything you have in a privileged daemon
  or privileged process ideally should reduce its privilege set for any
  operation it performs. That might mean it clones itself and executes
  Python, or it may execvp an executable, but either way, the new process
  would have less than full privilege.

  For instance, writing a file might require root access, but does not need
  the ability to load kernel modules. Changing network interfaces does not
  need access to the filesystem, any more than changes to the filesystem
  need access to the network. The capabilities and namespaces mechanisms
  resolve these security conundrums and simplify applying the principle of
  least privilege.
  
 Agreed wholeheartedly, and I'd appreciate your thoughts on how I'm using 
 capabilities in this change.  The privsep daemon limits itself to a 
 particular set of capabilities (and drops root). The assumption is that most 
 OpenStack services commonly need the same small set of capabilities to 
 perform their duties (neutron - net_admin+sys_admin for example), so it 
 makes sense to reuse the same privileged process.
  
 If we have a single service that somehow needs to frequently use a broad 
 swathe of capabilities then we might want to break it up further somehow 
 between the different internal aspects (multiple privsep helpers?) - but is 
 there such a case?  There are also no problems with mixing privsep for
 frequent operations with the existing sudo/rootwrap approach for
 rare/awkward cases.
  
  - Gus  
  
  




Re: [openstack-dev] [Neutron] Update on DB IPAM driver

2015-02-13 Thread Rossella Sblendido


On 02/12/2015 02:36 PM, Salvatore Orlando wrote:
 - I promised a non blocking algorithm for IP allocation. The one I was
 developing was based on specifying the primary key on the ip_requests
 table in a way that it would prevent two concurrent requests from
 getting the same address, and would just retry getting an address until
 the primary key constraint was satisfied. However, recent information
 that emerged on MySQL galera's (*) data set certification [2] clarified that
 this kind of algorithm would still result in a deadlock error from
 failed data set certification. It is worth noting that in this case a
 solution based on traditional compare-and-swap is not possible because
 concurrent requests would be inserting data at the same time. I am now
 working on an alternative solution, and I would like to first implement
 a PoC for it (so that I can prove it works).

Would something like the following work: insert the data in the DB and, if
any error occurs, open a new transaction and try again? enikanorov proposed
a retry mechanism here [1]. Can't wait for your PoC! I played with this for
a while in the past, trying to remove the locking from IP allocation; it's
hard!

cheers,

Rossella


[1] https://review.openstack.org/#/c/149261/



Re: [openstack-dev] [glance] File-backed glance scrubber queue

2015-02-13 Thread Flavio Percoco

On 12/02/15 09:34 -0800, Chris St. Pierre wrote:

Yeah, that commit definitely disables the file-backed queue -- it certainly
*looks* like we want to be rid of it, but all of the code is left in place and
even updated to support the new format. So my confusion remains. Hopefully Zhi
Yan can clarify.

Link added. Thanks.



Hi Chris,

I touched base with Zhi Yan and my understanding is right. Since Juno, we
have used a database-backed queue instead of the file-backed one, and the
file queue is considered redundant and on its way to being deprecated.

I'll also reply on the review,
Thanks for bringing this up,
Flavio



On Thu, Feb 12, 2015 at 12:59 AM, Flavio Percoco fla...@redhat.com wrote:

   On 11/02/15 13:42 -0800, Chris St. Pierre wrote:

   I recently proposed a change to glance to turn the file-backed scrubber
   queue
   files into JSON: https://review.openstack.org/#/c/145223/

    As I looked into it more, though, it turns out that the file-backed
    queue is no longer usable; it was killed by the implementation of this
    blueprint:
    https://blueprints.launchpad.net/glance/+spec/image-location-status

   But what's not clear is if the implementation of that blueprint should
   have
   killed the file-backed scrubber queue, or if that was even intended.
   Two things
   contribute to the lack of clarity:

   1. The file-backed scrubber code was left in, even though it is
   unreachable.

    2. The ordering of the commits is strange. Namely, commit 66d24bb
    (https://review.openstack.org/#/c/67115/) killed the file-backed queue,
    and then, *after* that change, 70e0a24
    (https://review.openstack.org/#/c/67122/) updated the queue file format.
    So it's not clear why the queue file format would be updated if it was
    intended that the file-backed queue was no longer usable.

   Can someone clarify what was intended here? If killing the file-backed
   scrubber
   queue was deliberate, then let's finish the job and excise that code.
   If not,
   then let's make sure that code is reachable again, and I'll resurrect
   my
   blueprint to make the queue files suck less.

   Either way I'm happy to make the changes, I'm just not sure what the
   goal of
   these changes was, and how to properly proceed.

   Thanks for any clarification anyone can offer.


   I believe the commit you're looking for is this one:
   f338a5c870a36e493f8c818fa783942d1e0565a4

   There the scrubber queue was switched on purpose, which leads to the
   conclusion that we're moving away from it. I've not participated in
   discussions around the change related to the scrubber queue so I'll
   let Zhi Yan weight in here.

   Thanks for bringing this up,
   Flavio

   P.S: Would you mind putting a link to this discussion on the spec
   review?





   --
   Chris St. Pierre


   



   --
   @flaper87
   Flavio Percoco
  





--
Chris St. Pierre






--
@flaper87
Flavio Percoco




Re: [openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread Boris Pavlovic
Hi,

I believe that keeping the review queue clean is a great idea,
but I am not sure this set of rules is enough to justify abandoning patches.

Recently I wrote a blog post about making the OpenStack community more
user-friendly:
http://boris-42.me/thoughts-on-making-openstack-community-more-user-friendly/

tl;dr:

Patches under review are a great source of information about what is missing
in a project. Removing them from the queue means losing this essential
information. The result of such actions is that the project doesn't address
user requirements, which is quite bad...

What if the project team instead continued work on all abandoned patches
that cover valid use cases and finished them?

Best regards,
Boris Pavlovic



On Fri, Feb 13, 2015 at 3:52 PM, Flavio Percoco fla...@redhat.com wrote:

 On 13/02/15 11:06 +, Kuvaja, Erno wrote:

 Hi all,

 We have reviews that are almost a year old (since their last update) still
 in the queue for glance. A discussion was initiated at yesterday's meeting
 about adopting an abandon policy for stale changes.

 The documentation can be found at
 https://etherpad.openstack.org/p/glance-cleanout-of-inactive-PS and any
 input would be appreciated. For your convenience, the current state is
 below:


 Thanks for putting this together. I missed the meeting yday and this
 is important.

  Glance - Cleanout of inactive change proposals from review


 We should start cleaning out our review list to keep the focus on changes
 that have momentum. Nova is currently abandoning change proposals that
 have been inactive for 4 weeks.



 Proposed action (if all of the following is True, abandon the PS):

 1. The PS has -1/-2 (including Jenkins)


 I assume you're talking about voting -1/-2 and not Workflow, right?
 (You said Jenkins after all, but just for the sake of clarity.)

  2. The change is proposed to glance, glance_store or python-glanceclient;
specs should not be abandoned as their workflow is much slower

 3. No activity for 28 days from Author/Owner after the -1/-2


 I'd reword this as no activity, including comments, feedback,
 discussions, and/or other committers taking over a patch.

  4. A query has been made to the owner to update the patch between 5 and
 10 days before abandoning (comment on the PS/bug or something similar)

  ● Let's be smart on this. Flexibility is good during holiday seasons,
 feature freeze, etc.


 +2 to the above, I like it.

 Thanks again,
 Flavio

 --
 @flaper87
 Flavio Percoco





Re: [openstack-dev] [neutron] moving openvswitch ports between namespaces considered harmful

2015-02-13 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 02/13/2015 01:42 PM, Miguel Ángel Ajo wrote:
 Hi, Ihar  Jiri, thank you for pointing this out.
 
 I’m working on the following items:
 
 1) Doing Openflow traffic filtering (stateful firewall) based on the
 OVS+CT [1] patch, which may eventually merge. Here I want to build a good
 amount of benchmarks to be able to compare the current network iptables+LB
 solution to just openflow.
 
  Openflow filtering should be fast, as it’s quite smart at using hashes
 to match OF rules in the kernel megaflows (thanks to Jiri and T. Graf for
 explaining this to me).

  The only bad part is that we would have to dynamically change more rules
 based on security group changes (now we do that via ipsets without
 reloading all the rules).
 
   To do this properly, we may want to make the OVS plugin a real OF
 controller, able to push OF rules to the bridge without calling ovs-ofctl
 on the command line all the time.

I am actually a bit worried about this project being bound so tightly to
openvswitch. There are common linux alternatives in the wild that do not
require it at all (tc, eBPF). Of course, openvswitch could be utilized in
some exotic cases, like controlling HyperV instances in a Windows
environment, but for the usual Linux platform, utilizing something like
openvswitch seems to be overkill, given that we don't use any of its
controller separation features and propagate updates via AMQP to each
controlled node.

 
 2) Using OVS+OF to do QoS
 
 other interesting stuff to look at:
 
 3) Doing routing in OF too, thanks to the NAT capabilities of having OVS+CT 
 
 4) The namespace problem: what kinds of statistics get broken by moving
 ports into namespaces now?

I think some of those are already mentioned in the ovs tracker bug:
- - https://bugzilla.redhat.com/show_bug.cgi?id=1160340

Specifically, mac_in_use that is used to track mac addresses attached to
ports; or ifindex that is used for SNMP.

 the short-term fix could be using veths, but “namespaceable” OVS ports
 would be perfect, yet I understand the change is a big feature.
 
 If we had 1 and 3, maybe 4 wouldn’t be a problem anymore.

I don't believe so. We still need namespaces, e.g. for DHCP, or in any
other case where we need to run a service that is not capable of binding
to a specific interface.

 
 [1] https://github.com/justinpettit/ovs/tree/conntrack
 
 Miguel Ángel Ajo
 
 On Friday, 13 de February de 2015 at 13:14, Ihar Hrachyshka wrote:
 
 Hi neutroners,
 
 we** had several conversations recently with our Red Hat fellows who
 work on openvswitch (Jiri Benc and Jiri Pirko) regarding the way
 neutron utilizes their software. Those were beneficial to both sides
 to understand what we do right and wrong. I was asked to share some of
 the points from those discussions with broader community.
 
 ===
 
 One of the issues that came up during discussions is the way neutron
 connects ovs ports to namespaces. The short story is that openvswitch
 is not designed with namespaces in mind, and the fact that moving its
 ports into a different namespace works for neutron is mere
 coincidence, and is actually considered as a bug by openvswitch guys.
 
 It's not just broken in theory from software design standpoint, but
 also in practice. Specifically,
 
 1. ovsdb dump does not work properly for namespaces:
 - https://bugzilla.redhat.com/show_bug.cgi?id=1160340
 
 This results in wrong statistics and other data collected for these ports;
 
 2. We suspect that the following kernel crash is triggered because of
 our usage of the feature that is actually a bug:
 - https://bugs.launchpad.net/neutron/+bug/1418097
 
 Quoting Jiri Benc,
 
 The problem is openvswitch does not support its ports to be moved to
 a different name space. The fact that it's possible to do that is a
 bug - such operation should have been denied. Unfortunately, due to a
 missing check, it's not been denied. Such setup does not work
 reliably, though, and leads to various issues from incorrect resource
 accounting to kernel crashes.
 
 We're aware of the bug but we don't have any easy way to fix it. The
 most obvious option, disabling moving of ovs ports to different name
 spaces, would be easy to do but it would break Neutron. The other
 option, making ovs to work across net name spaces, is hard and will
 require addition of different kernel APIs and large changes in ovs
 user space daemons. This constitutes tremendous amount of work.
 
 The tracker bug on openvswitch side is:
 - https://bugzilla.redhat.com/show_bug.cgi?id=1160340
 
 So in the best case, we may expect openvswitch to properly support the
 feature in the long term, but in the short term it won't work, especially
 while neutron expects other features implemented in openvswitch for it
 (like NAT, or connection tracking, or ipv6 tunnel endpoints, to name a
 few).
 
 We could try to handle the issue neutron side. We can fix it by
 utilizing veth pairs to 

Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-13 Thread Flavio Percoco

On 13/02/15 16:01 +0100, Jordan Pittier wrote:

What is the difference between just calling the Glance API to upload an image,

versus adding add() functionality to the HTTP image store?
You mean using glance image-create --location http://server1/myLinuxImage [..]
? If so, I guess adding the add() functionality would save the user from
having to find the right POST curl/wget command to properly upload their
image.


I believe it's more complex than this. Having an `add` method for the
HTTP store implies:

- Figuring out the http method the server expects (POST/PUT)
- Adding support for at least few HTTP auth methods
- Having a suffixed URL where we're sure glance will have proper
 permissions to upload data.
- Handling HTTP responses from the server w.r.t the status of the data
 upload. For example: What happens if the remote http server runs out
 of space? What's the response status going to be like? How can we
 make glance agnostic to these discrepancies across HTTP servers so
 that it's consistent in its responses to glance users?
- How can we handle quota?

I'm not fully opposed, although it doesn't sound worth it code-wise,
maintenance-wise or performance-wise. The user would have to run just one
command, but at the cost of all of the above.

Do the points listed above make sense to you?

Cheers,
Flavio



On Fri, Feb 13, 2015 at 3:55 PM, Jay Pipes jaypi...@gmail.com wrote:

   On 02/13/2015 09:47 AM, Jordan Pittier wrote:
  
   Hi list,


   I would like to add the 'add' capability to the HTTP glance store.

   Let's say I (as an operator or cloud admin) provide an HTTP server
   where
   (authenticated/trusted) users/clients can make the following HTTP
    request:

   POST http://server1/myLinuxImage HTTP/1.1
   Host: server1
   Content-Length: 25600
   Content-Type: application/octet-stream

   mybinarydata[..]

   Then the HTTP server will store the binary data, somewhere (for
   instance
   locally), some how (for instance in a plain file), so that the data is
   later on accessible by a simple GET http://server1/myLinuxImage

    In that case, this HTTP server could easily be a fully fledged Glance
    store.

   Questions :
   1) Has this been already discussed/proposed ? If so, could someone give
   me a pointer to this work ?
    2) Can I start working on this? (The two main work items are: 'add an
    add method to glance_store._drivers.http.Store' and 'add a delete
    method to glance_store._drivers.http.Store (HTTP DELETE method)'.)


   What is the difference between just calling the Glance API to upload an
   image, versus adding add() functionality to the HTTP image store?

   Best,
   -jay









--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Congress][Delegation] Google doc for working notes

2015-02-13 Thread Debojyoti Dutta
Tim

Wanted to clarify a bit. As I have mentioned before, solver scheduler is
work done before this (Datalog-to-constraints) work, but we kept it very
generic so it could be integrated with something like Congress. In fact
Ramki (who was one of the members of the original thread when you reached
out to us) joined us in a talk in Atlanta where we described some of the
same use cases using PuLP; Congress was still ramping up then. We were not
aware of the Datalog-to-constraints work you were doing, or we would have
joined hands before.

The question is this: going forward, how do we build this cool stuff
together in the community? I am hoping the scheduler folks will be very
excited too!

debo

On Thu, Feb 12, 2015 at 11:27 AM, Yathiraj Udupi (yudupi) yud...@cisco.com
wrote:

  Hi Tim,

  Thanks for your response.  Excited too to extend the collaboration and
 ensure there is no need to duplicate effort in the open source community.
  My responses inline.

   1)  Choice of LP solver.

  I see solver-scheduler uses PuLP, which was on the Congress short list
 as well.  So we’re highly aligned on the choice of underlying solver.


  YATHI - This makes me wonder why we can't simply adapt the
 solver-scheduler to your needs, rather than duplicating the effort!
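
 (An illustrative aside: since both projects converge on PuLP, the kind of
 model under discussion looks roughly like this toy placement LP; all
 numbers and names are made up:)

     from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

     vms = {'vm1': 2, 'vm2': 4, 'vm3': 2}    # RAM demand (GB)
     hosts = {'host1': 8, 'host2': 4}        # RAM capacity (GB)

     x = {(v, h): LpVariable('x_%s_%s' % (v, h), cat=LpBinary)
          for v in vms for h in hosts}
     used = {h: LpVariable('used_%s' % h, cat=LpBinary) for h in hosts}

     prob = LpProblem('vm_placement', LpMinimize)
     prob += lpSum(used.values())            # objective: fewest hosts used
     for v in vms:                           # each VM placed exactly once
         prob += lpSum(x[v, h] for h in hosts) == 1
     for h, cap in hosts.items():            # capacity, tied to 'used'
         prob += lpSum(vms[v] * x[v, h] for v in vms) <= cap * used[h]

     prob.solve()
     for (v, h), var in sorted(x.items()):
         if var.value() == 1:
             print('%s -> %s' % (v, h))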


  2) User control over VM-placement.


  To choose the criteria for VM-placement, the solver-scheduler user picks
 from a list of predefined options, e.g. ActiveHostConstraint,
 MaxRamAllocationPerHostConstraint.

  We’re investigating a slightly different approach, where the user
 defines the criteria for VM-placement by writing any policy they like in
 Datalog.  Under the hood we then convert that Datalog to an LP problem.
 From the developer’s perspective, with the Congress approach we don’t
 attempt to anticipate the different policies the user might want and write
 code for each policy; instead, we as developers write a translator from
 Datalog to LP.  From the user’s perspective, the difference is that if the
 option they want isn’t on the solver-scheduler's list, they’re out of luck
 or need to write the code themselves.  But with the Congress approach, they
 can write any VM-placement policy they like.

  What I’d like to see is the best of both worlds.  Users write Datalog
 policies describing whatever VM-placement policy they want.  If the policy
 they’ve written is on the solver-scheduler’s list of options, we use the
 hard-coded implementation, but if the policy isn’t on that list we
 translate directly to LP.  This approach gives us the ability to write
 custom code to handle common cases while at the same time letting users
 write whatever policy they like.


  YATHI -  The idea of providing some default constraint classes in Solver
 Scheduler was to enable easy pluggability for various placement policy
 scenarios.  We can easily add a custom constraint class in solver
 scheduler, that enables adding additional constraints at runtime (PulP
 model or any other models we can use and support).  It will just take in
 any external policy (say Datalog in Congress example), and it can easily
 add those set of resulting translated constraints via this custom
 constraint builder class.  This is something we can definitely add value to
 solver scheduler by implementing and adding here.


  3) API and architecture.

  Today the solver-scheduler's VM-placement policy is defined at
 config-time (i.e. not run-time).  Am I correct that this limitation is only
 because there’s no API call to set the solver-scheduler’s policy?  Or is
 there some other reason the policy is set at config-time?

  Congress policies change at runtime, so we’ll definitely need a
 VM-placement engine whose policy can be changed at run-time as well.

   YATHI - We have working code to set VM placement policies at run-time,
 dynamically selecting the constraint or cost classes to use. It has yet
 to be upstreamed to the solver scheduler stackforge repo, but will be
 soon. But yes, I agree with you, this is definitely needed for any
 policy-driven VM placement engine, as the policies are dynamic. Short
 answer: yes, solver scheduler has the ability to support this.


  If we focus on just migration (and not provisioning), we can build a
 VM-placement engine that sits outside of Nova that has an API call that
 allows us to set policy at runtime.  We can also set up that engine to get
 data updates that influence the policy.  We were planning on creating this
 kind of VM-placement engine within Congress as a node on the DSE (our
 message bus).  This is convenient because all nodes on the DSE run in their
 own thread, any node on the DSE can subscribe to any data from any other
 node (e.g. ceilometer’s data), and the algorithms for translating Datalog
 to LP look to be quite similar to the algorithms we’re using in our
 domain-agnostic policy engine.


  YATHI – The entire scheduling community in Nova is planning on an
 external scheduler (Gantt), and we are pitching solver scheduler also 

Re: [openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread Flavio Percoco

On 13/02/15 11:06 +, Kuvaja, Erno wrote:

Hi all,

We have reviews that are almost a year old (since their last update) still
in the queue for glance. A discussion was initiated at yesterday's meeting
about adopting an abandon policy for stale changes.

The documentation can be found at
https://etherpad.openstack.org/p/glance-cleanout-of-inactive-PS and any
input would be appreciated. For your convenience, the current state is
below:



Thanks for putting this together. I missed the meeting yday and this
is important.


Glance - Cleanout of inactive change proposals from review


We should start cleaning out our review list to keep the focus on changes
that have momentum. Nova is currently abandoning change proposals that have
been inactive for 4 weeks.



Proposed action (if all of the following is True, abandon the PS):

1. The PS has -1/-2 (including Jenkins)


I assume you're talking about voting -1/-2 and not Workflow, right?
(You said Jenkins after all, but just for the sake of clarity.)


2. The change is proposed to glance, glance_store or python-glanceclient;
   specs should not be abandoned as their workflow is much slower

3. No activity for 28 days from Author/Owner after the -1/-2


I'd reword this as no activity, including comments, feedback,
discussions, and/or other committers taking over a patch.


4. A query has been made to the owner to update the patch between 5 and
   10 days before abandoning (comment on the PS/bug or something similar)

 ● Let's be smart on this. Flexibility is good during holiday seasons,
   feature freeze, etc.


+2 to the above, I like it.

Thanks again,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Neutron] Update on DB IPAM driver

2015-02-13 Thread Salvatore Orlando
On 13 February 2015 at 12:40, Rossella Sblendido rsblend...@suse.com
wrote:



 On 02/12/2015 02:36 PM, Salvatore Orlando wrote:
  - I promised a non blocking algorithm for IP allocation. The one I was
  developing was based on specifying the primary key on the ip_requests
  table in a way that it would prevent two concurrent requests from
  getting the same address, and would just retry getting an address until
  the primary key constraint was satisfied. However, recent information
  that emerged on MySQL galera's (*) data set certification [2] clarified
  that
  this kind of algorithm would still result in a deadlock error from
  failed data set certification. It is worth noting that in this case a
  solution based on traditional compare-and-swap is not possible because
  concurrent requests would be inserting data at the same time. I am now
  working on an alternative solution, and I would like to first implement
  a PoC for it (so that I can prove it works).

 Would something like the following work: insert the data in the DB and,
 if any error occurs, open a new transaction and try again? enikanorov
 proposed a retry mechanism here [1]. Can't wait for your PoC! I played
 with this for a while in the past, trying to remove the locking from IP
 allocation; it's hard!


Retry is always an option, however the mechanism with the separate driver
would be a bit different, since we need to identify conflicts before
storing the IP allocation entry.
As a matter of fact, we're indeed splitting the IPAM transaction from the
transaction which creates or updates the port.
The patch you linked is a mechanism for retrying a database transaction
which can be adopted in any case, and is perhaps worth adopting also in the
IPAM case - if nothing else to avoid code duplication.
However, what I am aiming for is a lock-free and wait-free algorithm that
does not make assumptions about the isolation level. If that does not prove
practical, there are several alternatives to be considered:
- Leveraging unique constraint violations, trying different IPs until an
insert no longer violates the constraint. This is apparently easy, but
availability ranges can be quite tricky once you remove locking.
- Checking whether it would be possible to do some sort of compare-and-swap
on the IP availability ranges themselves. I think you were already
developing something like that in [1].
- Considering an alternative to availability ranges. Pre-generation of IP
entries is impractical (think IPv6), so that's not an option. Unfortunately,
I have not yet explored this route in detail.
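
For the first option, a minimal sketch of the shape of the algorithm
(hypothetical table and column names, not Neutron's actual schema; note
that on Galera the conflict may surface as a deadlock error rather than an
IntegrityError, as discussed above):

    import sqlalchemy as sa
    from sqlalchemy import exc

    metadata = sa.MetaData()
    ip_allocations = sa.Table(
        'ip_allocations', metadata,
        sa.Column('subnet_id', sa.String(36), primary_key=True),
        sa.Column('ip_address', sa.String(64), primary_key=True))

    def allocate(engine, subnet_id, candidate_ips):
        # try candidate addresses until one insert wins the race
        for ip in candidate_ips:
            try:
                with engine.begin() as conn:
                    conn.execute(ip_allocations.insert().values(
                        subnet_id=subnet_id, ip_address=ip))
                return ip          # committed: this address is ours
            except exc.IntegrityError:
                continue           # a concurrent request got it; try next
        raise RuntimeError('no addresses available')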

Salvatore


[1]
https://review.openstack.org/#/c/100963/32/neutron/db/db_base_plugin_v2.py


 cheers,

 Rossella


 [1] https://review.openstack.org/#/c/149261/




Re: [openstack-dev] [cinder] Etherpad for volume replication created ...

2015-02-13 Thread Steven Kaufer

Erlon Cruz sombra...@gmail.com wrote on 02/13/2015 07:51:34 AM:

 From: Erlon Cruz sombra...@gmail.com
 To: Danny Al-Gaaf danny.al-g...@bisect.de, OpenStack Development
 Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
 Date: 02/13/2015 07:53 AM
 Subject: Re: [openstack-dev] [cinder] Etherpad for volume
 replication created ...

 Do you have the log of the discussion as well?

Erlon,

The discussion is logged at [1], starting at 2015-02-12T18:32:40 and ending
around 2015-02-12T19:53:20.

[1]:
http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2015-02-12.log

Thanks,
Steven Kaufer


 On Fri, Feb 13, 2015 at 7:09 AM, Danny Al-Gaaf danny.al-g...@bisect.de
  wrote:
 Hi Jay,

 do you have a link to the etherpad?

 Danny

 Am 13.02.2015 um 05:54 schrieb Jay S. Bryant:
  All,
 
  Several members of the Cinder team and I were discussing the
  current state of volume replication while trying to figure out the
  best way to resolve bug 1383524 [1].  The outcome of the discussion
  was a decision to hold off on integrating volume replication
  support for additional drivers.
 
  I took notes from the discussion and have put them in the etherpad.
  We can use that, first thing in L, as a starting point to rework
  and fix replication support.
 
  Please let me know if you have any questions and feel free to
  update the etherpad with additional thoughts.
 
  Thanks! Jay
 
 
  [1] https://bugs.launchpad.net/cinder/+bug/1383524 -- Periodic update
  replication status causing issues
 
 


Re: [openstack-dev] [neutron][security][rootwrap] Proposal to replace rootwrap/sudo with privsep helper process (for neutron, but others too)

2015-02-13 Thread Thierry Carrez
Angus Lees wrote:
 So inspired by the Rootwrap on root-intensive nodes thread, I went and
 wrote a proof-of-concept privsep daemon for
 neutron: https://review.openstack.org/#/c/155631

Nice work! Trying to check where the security model is actually weaker
than the one provided by rootwrap here...

If I read this correctly, one of the drawbacks compared to the rootwrap
approach is that all the functions are accessible on any node running
the privsep daemon.

When properly packaged, the rootwrap system only has the relevant filter
definition files present, which dramatically reduces what you allow to
run as root there. For example, nova-metadata nodes can only run
iptables to add a firewall rule, since that's the only filter definition
file shipped on those nodes. That's a benefit of shipping allowed
commands in the configuration, rather than in the code. With the
proposed system, the nova-metadata node running with nova-privsep would
have access to any function defined there, including the rather
permissive file manipulations that nova-compute needs.

Maybe one way to solve that would be to have several different privsep
daemons in nova to run on the various types of nodes, each with their
own set of allowed functions...

Additionally, rootwrap was extremely lean in terms of dependencies --
basically only allowing Python stdlib module imports to reduce the
envelope of code you need to trust to run things as root. Here you are
reusing the Neutron agent framework, which has a pretty long list of
module imports, which are all imported before the rights are dropped.
That means you end up blindly trusting a much larger envelope of modules.

My suggestion would be to not run it as an agent but as a standalone
daemon with an aggressively-limited set of module imports (ideally only
stdlib).

That's all I could spot with a cursory look :)

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [API] Do we need to specify follow the HTTP RFCs?

2015-02-13 Thread michael mccune

On 02/12/2015 02:20 PM, Ryan Brown wrote:

+1 I think the way to go would be:

We suggest (pretty please) that you comply with RFCs 7230-5 and if you
have any questions ask us. Also here are some examples of usage that
is/isn't RFC compliant for clarity



+1, I like the idea of pointing readers towards the RFCs while also
providing examples.


mike
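
One concrete example of the kind of guidance that helps (per RFC 7231
section 6.5.5, a 405 response must carry an Allow header listing the
resource's supported methods):

    HTTP/1.1 405 Method Not Allowed
    Allow: GET, PUT, DELETE

Omitting the Allow header here is a common non-compliant pattern.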



Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-13 Thread Jay Pipes

On 02/13/2015 09:47 AM, Jordan Pittier wrote:

Hi list,

I would like to add the 'add' capability to the HTTP glance store.

Let's say I (as an operator or cloud admin) provide an HTTP server where
(authenticated/trusted) users/clients can make the following HTTP request:

POST http://server1/myLinuxImage HTTP/1.1
Host: server1
Content-Length: 25600
Content-Type: application/octet-stream

mybinarydata[..]

Then the HTTP server will store the binary data, somewhere (for instance
locally), some how (for instance in a plain file), so that the data is
later on accessible by a simple GET http://server1/myLinuxImage

In that case, this HTTP server could easily be a fully fledged Glance store.

Questions :
1) Has this been already discussed/proposed ? If so, could someone give
me a pointer to this work ?
2) Can I start working on this? (The two main work items are: 'add an
add method to glance_store._drivers.http.Store' and 'add a delete
method to glance_store._drivers.http.Store (HTTP DELETE method)'.)


What is the difference between just calling the Glance API to upload an 
image, versus adding add() functionality to the HTTP image store?


Best,
-jay
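
For concreteness, a rough sketch of what Jordan describes driver-side,
assuming a driver interface of add(image_id, image_file, image_size); the
real glance_store signatures, the auth story, and POST-versus-PUT are
exactly the open questions raised elsewhere in this thread:

    import requests

    class Store(object):
        def __init__(self, endpoint):
            self.endpoint = endpoint.rstrip('/')

        def add(self, image_id, image_file, image_size):
            url = '%s/%s' % (self.endpoint, image_id)
            resp = requests.put(
                url, data=image_file,
                headers={'Content-Type': 'application/octet-stream'})
            resp.raise_for_status()   # surface server-side failures
            return (url, image_size)

        def delete(self, image_id):
            url = '%s/%s' % (self.endpoint, image_id)
            requests.delete(url).raise_for_status()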



Re: [openstack-dev] [neutron] moving openvswitch ports between namespaces considered harmful

2015-02-13 Thread Guo, Ruijing
In the short term, we can use veth pairs with namespaces to fix the issue,
if performance is not impacted (hopefully :)).

If performance degrades too much, we may consider the following:

1) DHCP agent: use veth pairs with namespaces, since it is not on the
critical path.

2) L3 agent: don't create the port in OVS. Connect the L3 agent without Open
vSwitch:

VM --- Linux bridge --- Open vSwitch --- (int-br-eth, phy-br-eth) ---
physical switch --- vlan interface with namespace --- L3 agent

In the long term, we may implement namespace support and remove the Linux
bridge.

-Original Message-
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com] 
Sent: Friday, February 13, 2015 8:15 PM
To: openstack-dev
Cc: Jiri Benc; jpi...@redhat.com
Subject: [openstack-dev] [neutron] moving openvswitch ports between namespaces 
considered harmful


Hi neutroners,

we** had several conversations recently with our Red Hat fellows who work on 
openvswitch (Jiri Benc and Jiri Pirko) regarding the way neutron utilizes their 
software. Those were beneficial to both sides to understand what we do right 
and wrong. I was asked to share some of the points from those discussions with 
broader community.

===

One of the issues that came up during discussions is the way neutron connects 
ovs ports to namespaces. The short story is that openvswitch is not designed 
with namespaces in mind, and the fact that moving its ports into a different 
namespace works for neutron is mere coincidence, and is actually considered as 
a bug by openvswitch guys.

It's not just broken in theory from software design standpoint, but also in 
practice. Specifically,

1. ovsdb dump does not work properly for namespaces:
- - https://bugzilla.redhat.com/show_bug.cgi?id=1160340

This results in wrong statistics and other data collected for these ports;

2. We suspect that the following kernel crash is triggered because of our usage 
of the feature that is actually a bug:
- - https://bugs.launchpad.net/neutron/+bug/1418097

Quoting Jiri Benc,

The problem is openvswitch does not support its ports to be moved to a 
different name space. The fact that it's possible to do that is a bug - such 
operation should have been denied. Unfortunately, due to a missing check, it's 
not been denied. Such setup does not work reliably, though, and leads to 
various issues from incorrect resource accounting to kernel crashes.

We're aware of the bug but we don't have any easy way to fix it. The most 
obvious option, disabling moving of ovs ports to different name spaces, would 
be easy to do but it would break Neutron. The other option, making ovs to work 
across net name spaces, is hard and will require addition of different kernel 
APIs and large changes in ovs user space daemons. This constitutes tremendous 
amount of work.

The tracker bug on openvswitch side is:
- - https://bugzilla.redhat.com/show_bug.cgi?id=1160340

So in the best case, we may expect openvswitch to properly support the feature 
in long term, but short term it won't work, especially while neutron expects 
other features implemented in openvswitch for it (like NAT, or connection 
tracking, or ipv6 tunnel endpoints, to name a few).

We could try to handle the issue on the neutron side. We can fix it by
utilizing veth pairs to get into namespaces, but it may mean worse performance,
and will definitely require proper benchmarking to see whether we can live with
the performance drop.
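
For the record, the veth variant amounts to something like the following -
a sketch with made-up names, not the actual agent code. Only the OVS end of
the pair stays in the root namespace, which is the setup openvswitch
actually supports:

import subprocess

def run(cmd):
    subprocess.check_call(cmd.split())

# the veth pair bridges br-int and the namespace; ovsdb only ever sees
# the end that stays in the root namespace.
run("ip link add tap-ovs type veth peer name tap-ns")
run("ovs-vsctl add-port br-int tap-ovs")
run("ip link set tap-ns netns qrouter-1234")  # namespace name made up
run("ip link set tap-ovs up")
run("ip netns exec qrouter-1234 ip link set tap-ns up")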

===

There were other suggestions on how we can enhance the way we use openvswitch.
Among those: getting rid of the linux bridge used for security groups, with
special attention on getting rid of ebtables (sic!) for they are a lot slower
than iptables; and getting rid of the veth pair for instance ports.

===

I really encourage everyone to check the following video from devconf.cz 2015 
on all that and more at:

- - https://www.youtube.com/watch?v=BlLD-qh9EBQ

Among other things, you will find a presentation of the plotnetcfg tool to
create nice graphs of openstack state.

If you're lazy enough and want to switch directly to the analysis of neutron
problems, skip to ~18:00.

I also encourage you to check out the video around 30:00 on the way out of
openvswitch for neutron (tc/eBPF). As crazy as it is encouraging.

===

While at it, I encourage everyone interested in controlling switches the open
source way to check out the presentation of the new kernel subsystem for that
(I guess vaporware atm):

- - https://www.youtube.com/watch?v=awekaJ7xWuw

===

So, that's what I have for you now. I really want to hear from everyone what
our plan is to solve the problem on the neutron side.

Comments?

/Ihar

**Jakub Libosvar and me

Re: [openstack-dev] [horizon] Stepping down as a Horizon core reviewer

2015-02-13 Thread David Lyle
On Fri, Feb 13, 2015 at 6:14 AM, Thierry Carrez thie...@openstack.org
wrote:

 Julie Pichon wrote:
  In the spirit of stepping down considerately [1], I'd like to ask to be
  removed from the core and drivers team for Horizon and associated
  projects. I'm embarking on some fun adventures far far away and won't
  have any time to spare for OpenStack for a while.

 Aw. Sad to hear that. Please come back to us when said adventures start
 to become unfun!

 --
 Thierry Carrez (ttx)


Thank you Julie for all of your contributions. You've been an integral part
of the Horizon team. We will miss you.

We'll always have room for you, if you ever want to take us back.

Best wishes on your next adventures.

David
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Shell Action, Re: Running HBase Jobs (was: About Sahara Oozie plan)

2015-02-13 Thread michael mccune

On 02/12/2015 05:15 PM, Trevor McKay wrote:

Hi folks,

Here is another way to do this.  Lu had mentioned Oozie shell actions
previously.
Sahara doesn't support them, but I played with it from the Oozie command
line
to verify that it solves our hbase problem, too.

We can potentially create a blueprint to build a simple Shell action
around a
user-supplied script and supporting files.  The script and files would
all be stored
in Sahara as job binaries (Swift or internal db) and referenced the same
way. The exec
target must be on the path at runtime, or included in the working dir.

To do this, I simply put workflow.xml, doit.sh, and the test jar into
a directory in hdfs.  Then I ran it with the Oozie cli using the job.xml
config file
configured to point at the hdfs dir.  Nothing special here, just
standard Oozie
job execution.



very cool Trevor, i wonder if there is a greater pattern here we could 
identify. namely, the idea that a user could upload multiple job 
binaries and create some sort of nesting structure for the way they are 
interpreted. maybe there could be a standard substitution method for 
script-like files. in this respect a user could create a standard 
wrapper script and allow different binary names to be substituted into 
the script. this may be too complicated but it occurred to me while 
reading your results.


regardless of the greater pattern, this is a good window into more ways 
for us to control the command arguments.


mike


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-13 Thread Jay Pipes

On 02/12/2015 09:59 PM, Robert Collins wrote:

On 5 February 2015 at 13:20, Rochelle Grober rochelle.gro...@huawei.com wrote:

Duncan Thomas [mailto:duncan.tho...@gmail.com] on Wednesday, February 04,
2015 8:34 AM wrote:



The downside of numbers rather than camel-case text is that they are less
likely to stick in the memory of regular users. Not a huge thing, but a
reduction in usability, I think. On the other hand they might lead to less
guessing about the error with insufficient info, I suppose.

To make the global registry easier, we can just use a per-service prefix,
and then keep the error catalogue in the service code repo, pulling them
into some sort of release periodically



[Rockyg]  In discussions at the summit about assigning error codes, we
determined it would be pretty straightforward to build a tool that could be
called when a new code was needed; it would both assign an unused code
and insert the error summary for the code into the DB it keeps to ensure
uniqueness.  If you didn’t provide a summary, it wouldn’t spit out an error
code;-)  Simple little tool that could be in oslo, or some cross-project
code location.
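
As a strawman, such a tool needs little more than a uniqueness constraint
and a refusal to work without a summary; a sketch using sqlite, with
made-up names:

import sqlite3

def next_error_code(db_path, prefix, summary):
    # No summary, no error code - and the PRIMARY KEY on
    # (prefix, code) keeps the registry free of duplicates.
    if not summary:
        raise ValueError("no summary provided, no error code assigned")
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS errors "
                 "(prefix TEXT, code INTEGER, summary TEXT, "
                 "PRIMARY KEY (prefix, code))")
    row = conn.execute("SELECT MAX(code) FROM errors WHERE prefix = ?",
                       (prefix,)).fetchone()
    code = (row[0] or 0) + 1
    conn.execute("INSERT INTO errors VALUES (?, ?, ?)",
                 (prefix, code, summary))
    conn.commit()
    return "%s-%04d" % (prefix, code)

print(next_error_code("errors.db", "COMPUTE", "invalid flavor reference"))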


Apropos of logging, has https://tools.ietf.org/html/rfc5424 been
considered? Combined with https://tools.ietf.org/html/rfc5426 we'd
have a standards-based (and thus already supported by logging and
analysis tools) framework. Aka, we seem to be on the verge of
inventing a thing that's already been invented.


In what way do either of the above RFCs provide guidance for our conversation 
about how to properly codify error codes in our HTTP APIs?


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] moving openvswitch ports between namespaces considered harmful

2015-02-13 Thread Ihar Hrachyshka

On 02/13/2015 01:47 PM, Miguel Ángel Ajo wrote:
 Sorry, I forgot about
 
 5)  If we put all our OVS/OF bridge logic in just one bridge
 (instead of N: br-tun, br-int, br-ex, br-xxx), the performance
 should be even higher, since, as far as I understood, flow rule
 lookup could be better optimized in the kernel megaflows without
 forwarding and re-starting evaluation due to patch ports. (Please
 correct me here where I’m wrong, I just have a very high level view
 of this).

Indeed, that was also mentioned by Jiri in our private talks. That
said, I'm as unaware of details here as you probably are (or more).

 
 Best, Miguel Ángel Ajo
 
 On Friday, 13 February 2015 at 13:42, Miguel Ángel Ajo
 wrote:
 
 Hi, Ihar & Jiri, thank you for pointing this out.
 
 I’m working on the following items:
 
 1) Doing Openflow traffic filtering (stateful firewall) based on 
 OVS+CT[1] patch, which may eventually merge. Here I want to build
 a good amount of benchmarks to be able to compare the current
 network iptables+LB solution to just openflow.
 
 Openflow filtering should be fast, as it’s quite smart at using
 hashes to match OF rules in the kernel megaflows (thanks Jiri &
 T.Graf for explaining this to me)
 
 The only bad part is that we would have to dynamically change 
 more rules based on security group changes (now we do that via ip
 sets without reloading all the rules).
 
 To do this properly, we may want to make the OVS plugin a real OF
 controller to be able to push OF rules to the bridge without the
 need of calling ovs-ofctl on the command line all the time.
 
 2) Using OVS+OF to do QoS
 
 other interesting stuff to look at:
 
 3) Doing routing in OF too, thanks to the NAT capabilities of
 having OVS+CT
 
 4) The namespace problem, what kinds of statistics get broken by
 moving ports into namespaces now? The short-term fix could be
 using veths, but “namespaceable” OVS ports would be perfect, yet I
 understand the change is a big feature.
 
 If we had 1 & 3, maybe 4 wouldn’t be a problem anymore.
 
 [1] https://github.com/justinpettit/ovs/tree/conntrack
 
 Miguel Ángel Ajo
 
 On Friday, 13 February 2015 at 13:14, Ihar Hrachyshka
 wrote:
 
 Hi neutroners,
 
 we** had several conversations recently with our Red Hat fellows
 who work on openvswitch (Jiri Benc and Jiri Pirko) regarding the
 way neutron utilizes their software. Those were beneficial to both
 sides to understand what we do right and wrong. I was asked to
 share some of the points from those discussions with broader
 community.
 
 ===
 
 One of the issues that came up during discussions is the way
 neutron connects ovs ports to namespaces. The short story is that
 openvswitch is not designed with namespaces in mind, and the fact
 that moving its ports into a different namespace works for neutron
 is mere coincidence, and is actually considered as a bug by
 openvswitch guys.
 
 It's not just broken in theory from software design standpoint,
 but also in practice. Specifically,
 
 1. ovsdb dump does not work properly for namespaces: -
 https://bugzilla.redhat.com/show_bug.cgi?id=1160340
 
 This results in wrong statistics and other data collected for
 these ports;
 
 2. We suspect that the following kernel crash is triggered because
 of our usage of the feature that is actually a bug: -
 https://bugs.launchpad.net/neutron/+bug/1418097
 
 Quoting Jiri Benc,
 
 The problem is openvswitch does not support its ports to be moved
 to a different name space. The fact that it's possible to do that
 is a bug - such operation should have been denied. Unfortunately,
 due to a missing check, it's not been denied. Such setup does not
 work reliably, though, and leads to various issues from incorrect
 resource accounting to kernel crashes.
 
 We're aware of the bug but we don't have any easy way to fix it.
 The most obvious option, disabling moving of ovs ports to different
 name spaces, would be easy to do but it would break Neutron. The
 other option, making ovs to work across net name spaces, is hard
 and will require addition of different kernel APIs and large
 changes in ovs user space daemons. This constitutes tremendous
 amount of work.
 
 The tracker bug on openvswitch side is: -
 https://bugzilla.redhat.com/show_bug.cgi?id=1160340
 
 So in the best case, we may expect openvswitch to properly support
 the feature in long term, but short term it won't work, especially
 while neutron expects other features implemented in openvswitch for
 it (like NAT, or connection tracking, or ipv6 tunnel endpoints, to
 name a few).
 
 We could try to handle the issue neutron side. We can fix it by 
 utilizing veth pairs to get into namespaces, but it may mean worse 
 performance, and will definitely require proper benchmarking to
 see whether we can live with performance drop.
 
 ===
 
 There were other suggestions on how we can enhance our way of usage
 of openvswitch. Among those, getting rid of linux bridge used for 
 

Re: [openstack-dev] [Neutron] Update on DB IPAM driver

2015-02-13 Thread Salvatore Orlando
On 12 February 2015 at 19:57, John Belamaric jbelama...@infoblox.com
wrote:



   From: Salvatore Orlando sorla...@nicira.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, February 12, 2015 at 8:36 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron] Update on DB IPAM driver

   Hi,

  I have updated the patch; albeit not complete yet, it is getting closer to
 being an allocator decent enough to replace the built-in logic.

  I will be unable to attend today's L3/IPAM meeting due to a conflict, so
 here are some highlights from me on which your feedback is more than
 welcome:

  - I agree with Carl that the IPAM driver should not have explicit code
 paths for autoaddress subnets, such as DHCPv6 stateless ones. In that case,
 the consumer of the driver will generate the address and then pass it to the
 IPAM driver, for which that would just be allocation of a specific address.
 However, I have the impression the driver still needs to be aware of whether
 the subnet has an automatic address mode or not - since in this case 'any'
 address allocation won't be possible. There are already comments about this
 in the review [1]


  I think the auto-generated case should be a base class as you described
 in [1], but each subclass would implement the specific auto-generation. See
 the discussion at line 468 in [2] and see what you think. Of course for
 addresses that come from RA there would be no IPAM.


I think this makes sense.



  [1] https://review.openstack.org/#/c/150485/
  [2]
 https://review.openstack.org/#/c/153236/2/neutron/db/db_base_plugin_v2.py,unified


  - We had a discussion last week on whether the IPAM driver and neutron
 should 'share' database tables. I went back and forth a lot, but
 now to me it seems the best thing to do is to have the IPAM driver maintain
 an 'ip_requests' table, where it stores allocation info. This table
 partially duplicates data in IPAllocation, but on the plus side it makes
 the IPAM driver self-sufficient. The next step would be to decide whether
 we want to go a step further and also assume the driver should not access
 Neutron's DB at all, but I would defer that discussion to the next
 iteration (for both the driver and the IPAM interface)

  - I promised a non-blocking algorithm for IP allocation. The one I was
 developing was based on specifying the primary key on the ip_requests table
 in a way that it would prevent two concurrent requests from getting the
 same address, and would just retry getting an address until the primary key
 constraint was satisfied (a sketch of this insert-and-retry approach follows
 at the end of these notes). However, recent information that emerged on MySQL
 galera's (*) data set certification [2] clarified that this kind of
 algorithm would still result in a deadlock error from failed data set
 certification. It is worth noting that in this case a solution based on
 traditional compare-and-swap is not possible because concurrent requests
 would be inserting data at the same time. I am now working on an
 alternative solution, and I would like to first implement a PoC for it (so
 that I can prove it works).

  - The db base refactoring being performed by Pavel is under way [3]. It
 is worth noting that this is a non-negligible change to some of Neutron's
 basic and more critical workflows. We should expect pushback from the
 community regarding the introduction of this change in the 3rd milestone.
 At this stage I would suggest either:
 A) consider a strategy for running pluggable IPAM as optional
 B) consider delaying to Liberty.
 (and that's where I get virtually jeered and pelted with rotten tomatoes)
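
As mentioned in the allocation notes above, here is roughly what the
discarded insert-and-retry allocator looked like. The sketch uses sqlite
and made-up names to stay self-contained (the real thing would go through
SQLAlchemy); it behaves fine on a single server, but on Galera the
conflicting INSERT surfaces as a deadlock error from failed certification
rather than a clean duplicate-key error:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ip_requests "
             "(subnet_id TEXT, address TEXT, "
             "PRIMARY KEY (subnet_id, address))")

def allocate(subnet_id, candidates):
    # Try candidate addresses until an INSERT slips past the primary
    # key; a concurrent allocator that grabbed the same address makes
    # our INSERT fail, and we just move on to the next candidate.
    for address in candidates:
        try:
            with conn:
                conn.execute("INSERT INTO ip_requests VALUES (?, ?)",
                             (subnet_id, address))
            return address
        except sqlite3.IntegrityError:
            continue
    raise RuntimeError("no addresses left in subnet")

print(allocate("subnet-1", ["10.0.0.%d" % i for i in range(2, 255)]))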


  I wish I had some old tomatoes! Seriously, I think A is a reasonable
 approach. To make this really explicit we may want to basically replace the
 DB plugin class with a shim that delegates to either the current
 implementation or the new implementation, depending on the flag.


The fact that the current implementation is pretty much a bunch of private
methods in the db base plugin class executed within a transaction for
creating a port makes the situation a wee bit more complex. I'm not sure we
can replace the db plugin class with a shim so easily, because we need to
consider the impact on plugins which inherit from this base class. For
instance some plugins override methods from the base class, and this would
be a problem. For those plugins we must ensure old-style IPAM is performed.
A transitory solution might be to have, for the relevant methods, 2 versions
- one would be the current one, and the other one would be the one
leveraging pluggable IPAM. During plugin initialisation, the plugin itself
will decide whether or not to use the latter methods. This might be tuneable
with a configuration parameter too. The downside of this approach is that
it will not allow us to remove old baked-in IPAM code, and will have an
impact on code maintainability which ultimately will result in accumulating
even

[openstack-dev] GSoC2015: Its time for potential mentors and participants!

2015-02-13 Thread Debojyoti Dutta
Hello Everyone

It is time for us to apply for slots for the annual Google Summer of Code
event https://developers.google.com/open-source/soc/?csw=1

Last year, we got a bunch of slots and had awesome projects
https://wiki.openstack.org/wiki/GSoC2014
We are hoping this year we will get even more slots, uber cool projects etc.

If you are interested - either as a mentor or as a participant - please feel
free to add your name and project ideas to the wiki page for this year:
https://wiki.openstack.org/wiki/GSoC2015

-- 
OpenStack GSoC team
(Debo~, Dims, Victoria)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread Alexander Tivelkov
Hi!

Important changesets are supposed to have bugs (or blueprints) assigned
to them, so, even if the CS is abandoned, its description still
remains on Launchpad in one form or another, and we will not lose it
from the general project backlog. And if the changeset didn't have a
bug/blueprint specified, then either it does not represent a real use
case at all, or its owner didn't bother to document it anyway, so
keeping the changeset most probably will not help to understand the
use case.

So I like the proposal in general.

However, it has a little issue: imagine a patchset which receives a -1
from some random reviewer. The owner may reply to that -1 with a
reasonable counterargument, and in this situation it is up to the
initial reviewer to either agree with that counterargument and revoke
the -1 or to continue the discussion and persuade the owner to change
the code. However, I've seen situations where the reviewers do not
react to such replies and the changesets remain idle with a single -1
that is in fact addressed but not revoked. It would be a bad
practice if we abandoned such commits just because their initial
reviewers have lost interest in continuing the discussion with
their owners.

Probably we should keep such situations in mind when defining inactive.

--
Regards,
Alexander Tivelkov


On Fri, Feb 13, 2015 at 5:17 PM, Kuvaja, Erno kuv...@hp.com wrote:
 Hi Boris,



 Thanks for your input. I do like the idea of picking up the changes that
 have not been active. Do you have resources in mind to dedicate for this?



  My personal take is that if some piece of work has not been touched for a
  month, it’s probably not that important after all and the community should
  use the resources to do some work that has actual momentum. The changes
  themselves will not disappear; the owner is still able to revive one if it's
  felt that the time is right to continue it. The cleanup will just make it
  easier for people to focus on things that are actually moving. It will also
  make bug tracking a bit easier, since one will see on the bug report that
  the patch got abandoned due to inactivity, which indicates that the owner of
  that bug might not be working on it after all.



 -  Erno



 From: Boris Pavlovic [mailto:bpavlo...@mirantis.com]
 Sent: Friday, February 13, 2015 1:25 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [glance] Cleanout of inactive change proposals
 from review



 Hi,



  I believe that keeping the review queue clean is a great idea.

  But I am not sure that this set of rules is enough to abandon patches.



  Recently I wrote a blogpost related to making the OpenStack community more
  user friendly:

 http://boris-42.me/thoughts-on-making-openstack-community-more-user-friendly/



 tl;dr;



  Patches on review are a great source of information about what is missing in
  a project. Removing them from the queue means losing this essential
  information. The result of such actions is that the project doesn't face
  users' requirements, which is quite bad...



  What if the project team continued work on all abandoned patches that are
  covering valid use cases and finished them?



 Best regards,

 Boris Pavlovic







 On Fri, Feb 13, 2015 at 3:52 PM, Flavio Percoco fla...@redhat.com wrote:

 On 13/02/15 11:06 +, Kuvaja, Erno wrote:

 Hi all,

  We have reviews almost a year old (from last update) still in the queue for
  glance. The discussion was initiated in yesterday’s meeting about adopting
  an abandon policy for stale changes.

  The documentation can be found at
  https://etherpad.openstack.org/p/glance-cleanout-of-inactive-PS
  and any input would be appreciated. For your convenience, the current state
  is below:


 Thanks for putting this together. I missed the meeting yday and this
 is important.

 Glance - Cleanout of inactive change proposals from review


  We should start cleaning out our review list to keep the focus on changes
  that have momentum. Nova is currently abandoning change proposals that have
  been inactive for 4 weeks.



 Proposed action (if all of the following is True, abandon the PS):

 1. The PS has -1/-2 (including Jenkins)


  I assume you're talking about voting -1/-2 and not Workflow, right?
  (you said jenkins after all, but just for the sake of clarity).

 2. The change is proposed to glance, glance_store or python-glanceclient;
specs should not be abandoned as their workflow is much slower

 3. No activity for 28 days from Author/Owner after the -1/-2


  I'd reword this to No activity. This includes comments, feedback,
  discussions and/or other committers taking over a patch.

  4. There has been a query made to the owner to update the patch between 5 and
 10 days before abandoning (comment on PS/Bug or something similar)

  ● Let's be smart on this. Flexibility is good on holiday seasons, during
feature freeze, etc.


 +2 to the above, I like it.

 Thanks again,
 Flavio

 --
 @flaper87
 Flavio Percoco

 

Re: [openstack-dev] [Openstack-operators] RFC: Increasing min libvirt to 1.0.6 for LXC driver ?

2015-02-13 Thread Jay Pipes

On 02/13/2015 09:20 AM, Daniel P. Berrange wrote:

On Fri, Feb 13, 2015 at 08:49:26AM -0500, Jay Pipes wrote:

On 02/13/2015 07:04 AM, Daniel P. Berrange wrote:

Historically Nova has had a bunch of code which mounted images on the
host OS using qemu-nbd before passing them to libvirt to set up the
LXC container. Since 1.0.6, libvirt is able to do this itself and it
would simplify the codepaths in Nova if we can rely on that.

In general, without use of user namespaces, LXC can't really be
considered secure in OpenStack, and this already requires libvirt
version 1.1.1 and Nova Juno release.

As such I'd be surprised if anyone is running OpenStack with libvirt
LXC in production on libvirt < 1.1.1, as it would be pretty insecure,
but stranger things have happened.

The general libvirt min requirement for LXC, QEMU and KVM currently
is 0.9.11. We're *not* proposing to change the QEMU/KVM min libvirt,
but feel it is worth increasing the LXC min libvirt to 1.0.6

So would anyone object if we increased min libvirt to 1.0.6 when
running the LXC driver ?


Why not 1.1.1?


Well I was only going for what's the technical bare minimum to get
the functionality wrt disk image mounting.

If we wish to declare use of user namespace is mandatory with the
libvirt LXC driver, then picking 1.1.1 would be fine too.


Personally, I'd be +1 on 1.1.1. :)

-jay
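
For reference, the gate itself is cheap, since libvirt's python binding
encodes the version as a single integer. A sketch, not the actual nova
code:

import libvirt

MIN_LIBVIRT_LXC_VERSION = (1, 0, 6)

def meets_min_version(conn, minimum):
    # getLibVersion() packs the version as
    # major * 1000000 + minor * 1000 + release.
    v = conn.getLibVersion()
    return (v // 1000000, v // 1000 % 1000, v % 1000) >= minimum

conn = libvirt.open("lxc:///")
if not meets_min_version(conn, MIN_LIBVIRT_LXC_VERSION):
    raise SystemExit("the LXC driver requires a newer libvirt")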

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-13 Thread Jordan Pittier
What is the difference between just calling the Glance API to upload an
image, versus adding add() functionality to the HTTP image store?
You mean using glance image-create --location http://server1/myLinuxImage
[..] ? If so, I guess adding the add() functionality will save the user
from having to find the right POST curl/wget command to properly upload his
image.

On Fri, Feb 13, 2015 at 3:55 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/13/2015 09:47 AM, Jordan Pittier wrote:

 Hi list,

 I would like to add the 'add' capability to the HTTP glance store.

 Let's say I (as an operator or cloud admin) provide an HTTP server where
 (authenticated/trusted) users/clients can make the following HTTP request
 :

 POST http://server1/myLinuxImage HTTP/1.1
 Host: server1
 Content-Length: 25600
 Content-Type: application/octet-stream

 mybinarydata[..]

 Then the HTTP server will store the binary data somewhere (for instance
 locally), somehow (for instance in a plain file), so that the data is
 later on accessible by a simple GET http://server1/myLinuxImage

 In that case, this HTTP server could easily be a full-fledged Glance
 store.

 Questions:
 1) Has this been already discussed/proposed? If so, could someone give
 me a pointer to this work?
 2) Can I start working on this? (the 2 main work items are: 'add an
 add method to glance_store._drivers.http.Store' and 'add a delete
 method to glance_store._drivers.http.Store (HTTP DELETE method)')


 What is the difference between just calling the Glance API to upload an
 image, versus adding add() functionality to the HTTP image store?

 Best,
 -jay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [security][neutron] SDN Security in OpenStack

2015-02-13 Thread Thierry Carrez
Adding tags on the subject line to attract the attention of the OSSG
(OpenStack Security group) which regroups people working on improving
the state of security in OpenStack in general.

Patrick Lismore wrote:
 Hi all,
 
 I am a software developer working at HP,  I do not work with OpenStack
 @HP though, only worked with it privately.  
 
 I am finishing up a Masters and for my dissertation research I am
 focusing on SDN Security. 
 
 I wanted to align my research to current SDN uses in the industry and
 was interested in researching the security of SDN in OpenStack.
 
 To help focus and narrow down my research topic it would be helpful to
 hear from the developers directly working on the Neutron project where
 they see areas of improvement from a security point of view, or whether there
 are areas that need more research that would be helpful to the project.
 
 As part of the dissertation code will be written, tests will be run and
 the findings published.  If the topic is aligned to industry then that
 code may be useful.
 
 People working close to the project will have a better view of the code
 and capabilities or lack thereof and may see an opportunity here to have
 some new functionality researched and contributed over the next 12 weeks.  
 
 I tried asking the question on Ask OpenStack but it got swiftly closed.
 
 https://ask.openstack.org/en/question/60777/what-are-the-biggest-security-challenges-in-openstack-neutron/
  
 
 If anyone has any comments, thoughts or feedback you can drop me a few
 lines to patricklism...@gmail.com mailto:patricklism...@gmail.com 
 
 best regards
 
 Patrick 


-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [defcore] Proposal for new openstack/defcore repository

2015-02-13 Thread Thierry Carrez
Chris Hoge wrote:
 We're proposing to host the repository at openstack/defcore, as this is work 
 being done by a board-backed committee with cross cutting concerns for all 
 OpenStack projects. All projects are owned by some parent organization within 
the OpenStack community. One possibility for ownership that we considered 
 was the Technical Committee, with precedent set by the ownership of the API 
 Working Group repository[2]. However, we felt that there is a need to allow 
 for projects that are owned by the Board, and are also proposing a new Board 
 ownership group.

Right. On the governance side, the following change would, I think,
straighten things out and properly account for that repository:

https://review.openstack.org/#/c/155738/

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Stepping down as a Horizon core reviewer

2015-02-13 Thread Thierry Carrez
Julie Pichon wrote:
 In the spirit of stepping down considerately [1], I'd like to ask to be
 removed from the core and drivers team for Horizon and associated
 projects. I'm embarking on some fun adventures far far away and won't
 have any time to spare for OpenStack for a while.

Aw. Sad to hear that. Please come back to us when said adventures start
to become unfun!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Etherpad for volume replication created ...

2015-02-13 Thread Erlon Cruz
Do you have the log of the discussion as well?

On Fri, Feb 13, 2015 at 7:09 AM, Danny Al-Gaaf danny.al-g...@bisect.de
wrote:

 Hi Jay,

 do you have a link to the etherpad?

 Danny

 Am 13.02.2015 um 05:54 schrieb Jay S. Bryant:
  All,
 
  Several members of the Cinder team and I were discussing the
  current state of volume replication while trying to figure out the
  best way to resolve bug 1383524 [1].  The outcome of the discussion
  was a decision to hold off on integrating volume replication
  support for additional drivers.
 
  I took notes from the discussion and have put them in the etherpad.
  We can use that, first thing in L, as a starting point to rework
  and fix replication support.
 
  Please let me know if you have any questions and feel free to
   update the etherpad with additional thoughts.
 
  Thanks! Jay
 
 
   [1] https://bugs.launchpad.net/cinder/+bug/1383524 --  Periodic
   update replication status causing issues
 
 
 __
 
 
 OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Ryu CI scheduled outage

2015-02-13 Thread Anita Kuno
On 02/13/2015 01:56 AM, YAMAMOTO Takashi wrote:
 Ryu/ofagent CI will be offline during this weekend.
 sorry for inconvenience.
 
 YAMAMOTO Takashi
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Again, the mailing list is not the right place for this kind of information.

Please update your third party systems wikipage.

The mailing list is busy enough as it is, please don't add spam.

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread Kuvaja, Erno
Hi Boris,

Thanks for your input. I do like the idea of picking up the changes that have 
not been active. Do you have resources in mind to dedicate for this?

My personal take is that if some piece of work has not been touched for a
month, it’s probably not that important after all and the community should use
the resources to do some work that has actual momentum. The changes themselves
will not disappear; the owner is still able to revive one if it's felt that the
time is right to continue it. The cleanup will just make it easier for people
to focus on things that are actually moving. It will also make bug tracking a
bit easier, since one will see on the bug report that the patch got abandoned
due to inactivity, which indicates that the owner of that bug might not be
working on it after all.


-  Erno

From: Boris Pavlovic [mailto:bpavlo...@mirantis.com]
Sent: Friday, February 13, 2015 1:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Cleanout of inactive change proposals 
from review

Hi,

I believe that keeping the review queue clean is a great idea.
But I am not sure that this set of rules is enough to abandon patches.

Recently I wrote a blogpost related to making the OpenStack community more
user friendly:
http://boris-42.me/thoughts-on-making-openstack-community-more-user-friendly/

tl;dr;

Patches on review are a great source of information about what is missing in a
project. Removing them from the queue means losing this essential information.
The result of such actions is that the project doesn't face users'
requirements, which is quite bad...

What if the project team continued work on all abandoned patches that are
covering valid use cases and finished them?

Best regards,
Boris Pavlovic



On Fri, Feb 13, 2015 at 3:52 PM, Flavio Percoco fla...@redhat.com wrote:
On 13/02/15 11:06 +, Kuvaja, Erno wrote:
Hi all,

We have reviews almost a year old (from last update) still in the queue for
glance. The discussion was initiated in yesterday’s meeting about adopting
an abandon policy for stale changes.

The documentation can be found at
https://etherpad.openstack.org/p/glance-cleanout-of-inactive-PS
and any input would be appreciated. For your convenience, the current state is below:

Thanks for putting this together. I missed the meeting yday and this
is important.
Glance - Cleanout of inactive change proposals from review


We should start cleaning out our review list to keep the focus on changes that
have momentum. Nova is currently abandoning change proposals that have been
inactive for 4 weeks.



Proposed action (if all of the following is True, abandon the PS):

1. The PS has -1/-2 (including Jenkins)

I assume you're talking about voting -1/-2 and not Workflow, right?
(you said jenkins after all, but just for the sake of clarity).
2. The change is proposed to glance, glance_store or python-glanceclient;
   specs should not be abandoned as their workflow is much slower

3. No activity for 28 days from Author/Owner after the -1/-2

I'd reword this to No activity. This includes comments, feedback,
discussions and/or other committers taking over a patch.
4. There has been a query made to the owner to update the patch between 5 and
   10 days before abandoning (comment on PS/Bug or something similar)

 ● Let's be smart on this. Flexibility is good on holiday seasons, during
   feature freeze, etc.

+2 to the above, I like it.

Thanks again,
Flavio

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-13 Thread Thierry Carrez
Stefano Maffulli wrote:
 And so far, no real indication of why IRC is worse than a private
 phone call or a water-cooler conversation on a regular basis. 
 
 Multiple people have explained why already and you're choosing to ignore
 their words: permanent private IRC channels are a bad habit that
 reinforces a bad, anti-social behavior. When people develop the habit of
 hanging out separately from the rest, aristocracies start to emerge.
 That's bad for an open and democratic meritocracy like OpenStack.

Right. The danger of a permanent private channel is that, when one is
readily available, participants will end up having most of their
discussions there. And when they do, it fragments your community between
those with access and those without. We don't have elite committers in
OpenStack, everyone produces code and everyone reviews code. That's a
critical part of how we do development.

The pain of setting up a private channel when necessary to solve
exceptional issues ensures that it stays exceptional. The fact that it's
not permanent makes sure you don't fall into the trap of discussing
something there that should just be discussed on a public channel instead.

Because as I said elsewhere in this thread, it's only human nature, when
you have the choice between a channel where only your friends are, and a
channel where anyone could listen, you'll naturally prefer starting
discussions in the restricted channel. It takes a significant amount of
effort from all participants to use this convenient and permanent
channel only for the exceptional topics that may benefit from extra privacy.
And that effort is getting bigger as long as the channel survives.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-13 Thread Jordan Pittier
Hi list,

I would like to add the 'add' capability to the HTTP glance store.

Let's say I (as an operator or cloud admin) provide an HTTP server where
(authenticated/trusted) users/clients can make the following HTTP request :

POST http://server1/myLinuxImage HTTP/1.1
Host: server1
Content-Length: 25600
Content-Type: application/octet-stream

mybinarydata[..]

Then the HTTP server will store the binary data somewhere (for instance
locally), somehow (for instance in a plain file), so that the data is
later on accessible by a simple GET http://server1/myLinuxImage

In that case, this HTTP server could easily be a full-fledged Glance store.

Questions:
1) Has this been already discussed/proposed? If so, could someone give me
a pointer to this work?
2) Can I start working on this? (the 2 main work items are: 'add an add
method to glance_store._drivers.http.Store' and 'add a delete method to
glance_store._drivers.http.Store (HTTP DELETE method)')
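
A very rough sketch of what the add() could look like, assuming the
(image_id, image data, image size) style of interface the other
glance_store drivers use - the server URL and helper names here are made
up:

import hashlib

import requests

def add(base_url, image_id, image_file, image_size):
    # Stream the image to the HTTP server, computing the checksum on
    # the way, and hand back the pieces a store driver is expected to
    # return: location, bytes written and checksum.
    url = "%s/%s" % (base_url.rstrip("/"), image_id)
    checksum = hashlib.md5()

    def chunks():
        while True:
            chunk = image_file.read(64 * 1024)
            if not chunk:
                break
            checksum.update(chunk)
            yield chunk

    resp = requests.post(url, data=chunks(),
                         headers={"Content-Type":
                                  "application/octet-stream"})
    resp.raise_for_status()
    return url, image_size, checksum.hexdigest()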

What do you think ?
Thanks,
Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-02-13 Thread Vladimir Kuklin
+1 to Andrew

This is actually what we want to do with SSL keys.

On Wed, Feb 11, 2015 at 3:26 AM, Andrew Woodward xar...@gmail.com wrote:

 We need to be highly security conscious here; doing this in an insecure
 manner is a HUGE risk. So rsync over ssh (or scp) from the master node is
 usually OK, but the plain rsync protocol from a node in the cluster will not
 be - BAD (it leaves the certs exposed on a weak service).

 I could see this being implemented as some additional task type that can
 be run on the fuel master node instead of a target node. This
 could also be useful for plugin writers that may need to access some
 external API as part of their task graph. We'd need some way to make the
 generate task run once for the env, vs the push certs task which runs for
 each role that has a cert requirement.

 we'd end up with something like
 generate_certs:
   runs_from: master_once
   provider: whatever
 push_certs:
   runs_from: master
   provider: bash
   role: [*]

 On Thu, Jan 29, 2015 at 2:07 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Evgeniy,

 I am not suggesting to go to the Nailgun DB directly. There obviously should
 be some layer between a serializer and the DB.

 On Thu, Jan 29, 2015 at 9:07 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

  1) Nailgun DB

  Just a small note: we should not provide access to the database, as this
  approach has serious issues. What we can do is provide this information,
  for example, via a REST API.

  What you are saying is already implemented in many deployment tools; for
  example, let's take a look at Ansible [1].

  What you can do there is to create a task which stores the result of an
  executed shell command in some variable, and you can reuse it in any other
  task. I think we should use this approach.

 [1]
 http://docs.ansible.com/playbooks_variables.html#registered-variables

 On Thu, Jan 29, 2015 at 2:47 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Evgeniy

 This is not about layers - it is about how we get data. And we need to
 separate data sources from the way we manipulate it. Thus, sources may be:
  1) Nailgun DB, 2) the user's inventory system, 3) open data, like a list of
  Google DNS servers. Then all this data is aggregated and transformed somehow.
 After that it is shipped to the deployment layer. That's how I see it.

 On Thu, Jan 29, 2015 at 2:18 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

  It's not clear how it's going to help. You can generate keys with one
  task and then upload them with another task; why do we need
  another layer/entity here?

 Thanks,

 On Thu, Jan 29, 2015 at 11:54 AM, Vladimir Kuklin 
 vkuk...@mirantis.com wrote:

 Dmitry, Evgeniy

 This is exactly what I was talking about when I mentioned serializers
  for tasks - taking data from 3rd party sources if the user wants. In this
  case the user will be able to generate some data somewhere and fetch it
  using this
 code that we import.

 On Thu, Jan 29, 2015 at 12:08 AM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

  Thank you guys for the quick response.
  Then, if there is no better option we will follow the second
  approach.

 On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Dmitry,

  I'm not sure if we should use the approach where the task executor reads
  some data from the file system; ideally Nailgun should push
  all of the required data to Astute.
  But it can be tricky to implement, so I vote for the 2nd approach.

 Thanks,

 On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

  The 3rd option is about using the rsyncd that we run under xinetd on the
  primary controller. And yes, the main concern here is security.

 On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi.
  I vote for the second option, because if we want to implement
  some unified hierarchy (like Fuel as a CA for keys on controllers for
  different envs) then it will fit better than the other options. If we
  implement the 3rd option then we will reinvent the wheel with SSL in the
  future.
  Bare rsync as storage for private keys sounds pretty uncomfortable
  to me.

 On Wed, Jan 28, 2015 at 6:44 PM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Hi folks,

  I want to discuss the way we are working with generated keys for
  nova/ceph/mongo and the like.

  Right now we are generating keys on the master itself, and then
  distributing them by mcollective transport to all nodes. As you may know,
  we are in the process of getting this process described as tasks.

  There are a couple of options:
  1. Expose keys via an rsync server on the master, in the folder
  /etc/fuel/keys, and then copy them with an rsync task (but it feels
  not very secure)
  2. Copy keys from /etc/fuel/keys on the master to /var/lib/astute
  on target nodes. It will require an additional hook in astute, smth
  like copy_file, which will copy data from a file on the master and
  put it on the node.

  Also there is a 3rd option: to generate keys right on the
  primary controller and then distribute them to all other nodes, and
  I guess
  it 

Re: [openstack-dev] [neutron] moving openvswitch ports between namespaces considered harmful

2015-02-13 Thread Miguel Ángel Ajo
Sorry, I forgot about   

5)  If we put all our OVS/OF bridge logic in just one bridge (instead of N:
br-tun, br-int, br-ex, br-xxx),
 the performance should be even higher, since, as far as I understood, flow
rule lookup could be better
 optimized in the kernel megaflows without forwarding and re-starting
evaluation due to patch ports.
 (Please correct me here where I’m wrong, I just have a very high level view
of this).

Best,
Miguel Ángel Ajo


On Friday, 13 February 2015 at 13:42, Miguel Ángel Ajo wrote:

 Hi, Ihar & Jiri, thank you for pointing this out.
  
 I’m working on the following items:
  
 1) Doing Openflow traffic filtering (stateful firewall) based on OVS+CT[1] 
 patch, which may
 eventually merge. Here I want to build a good amount of benchmarks to be 
 able to compare
 the current network iptables+LB solution to just openflow.
  
  Openflow filtering should be fast, as it’s quite smart at using hashes 
 to match OF rules
  in the kernel megaflows (thanks Jiri & T.Graf for explaining this to me)
 
  The only bad part is that we would have to dynamically change more rules 
 based on security
 group changes (now we do that via ip sets without reloading all the rules).
  
   To do this properly, we may want to make the OVS plugin a real OF 
 controller to be able to
 push OF rules to the bridge without the need of calling ovs-ofctl on the 
 command line all the time.
  
 2) Using OVS+OF to do QoS
  
 other interesting stuff to look at:
  
 3) Doing routing in OF too, thanks to the NAT capabilities of having OVS+CT  
  
 4) The namespace problem, what kinds of statistics get broken by moving ports 
  into namespaces now?
  The short-term fix could be using veths, but “namespaceable” OVS ports
 would be perfect, yet I understand
 the change is a big feature.
  
  If we had 1 & 3, maybe 4 wouldn’t be a problem anymore.
  
 [1] https://github.com/justinpettit/ovs/tree/conntrack  
  
 Miguel Ángel Ajo
  
  
  On Friday, 13 February 2015 at 13:14, Ihar Hrachyshka wrote:
  
   
  Hi neutroners,
   
  we** had several conversations recently with our Red Hat fellows who
  work on openvswitch (Jiri Benc and Jiri Pirko) regarding the way
  neutron utilizes their software. Those were beneficial to both sides
  to understand what we do right and wrong. I was asked to share some of
  the points from those discussions with broader community.
   
  ===
   
  One of the issues that came up during discussions is the way neutron
  connects ovs ports to namespaces. The short story is that openvswitch
  is not designed with namespaces in mind, and the fact that moving its
  ports into a different namespace works for neutron is mere
  coincidence, and is actually considered as a bug by openvswitch guys.
   
  It's not just broken in theory from software design standpoint, but
  also in practice. Specifically,
   
  1. ovsdb dump does not work properly for namespaces:
  - - https://bugzilla.redhat.com/show_bug.cgi?id=1160340
   
  This results in wrong statistics and other data collected for these ports;
   
  2. We suspect that the following kernel crash is triggered because of
  our usage of the feature that is actually a bug:
  - - https://bugs.launchpad.net/neutron/+bug/1418097
   
  Quoting Jiri Benc,
   
  The problem is openvswitch does not support its ports to be moved to
  a different name space. The fact that it's possible to do that is a
  bug - such operation should have been denied. Unfortunately, due to a
  missing check, it's not been denied. Such setup does not work
  reliably, though, and leads to various issues from incorrect resource
  accounting to kernel crashes.
   
  We're aware of the bug but we don't have any easy way to fix it. The
  most obvious option, disabling moving of ovs ports to different name
  spaces, would be easy to do but it would break Neutron. The other
  option, making ovs to work across net name spaces, is hard and will
  require addition of different kernel APIs and large changes in ovs
  user space daemons. This constitutes tremendous amount of work.
   
  The tracker bug on openvswitch side is:
  - - https://bugzilla.redhat.com/show_bug.cgi?id=1160340
   
  So in the best case, we may expect openvswitch to properly support the
  feature in long term, but short term it won't work, especially while
  neutron expects other features implemented in openvswitch for it (like
  NAT, or connection tracking, or ipv6 tunnel endpoints, to name a few).
   
  We could try to handle the issue neutron side. We can fix it by
  utilizing veth pairs to get into namespaces, but it may mean worse
  performance, and will definitely require proper benchmarking to see
  whether we can live with performance drop.
   
  ===
   
  There were other suggestions on how we can enhance our way of usage of
  openvswitch. Among those, getting rid of linux bridge used for
  security groups, with 

Re: [openstack-dev] What should openstack-specs review approval rules be ?

2015-02-13 Thread Thierry Carrez
James E. Blair wrote:
 [...] 
 I think in general though, it boils down to the fact that we need to
 answer these questions for each of the repos:
 
 A) Should the broader community register ±1 or simply comments? (Now
that we may distinguish them from TC member votes.)
 B) Should individual TC members get a veto?
 
 I personally think the answer to A is votes and B is no in both
 cases.  I'm also okay with sticking with comments for the governance
  repo.  I feel pretty strongly about not having a veto.

I'd answer the same. Votes and no veto.

 [...]
 ==
 
 Since upgrading to Gerrit 2.8, we have some additional tools at our
 disposal for configuring voting categories.  For some unique
 repositories such as governance and cross-project specs, we may want
 to reconfigure voting there.
 
 Governance Repo Requirements
 
 
 I believe that the following are requirements for the Governance
 repository:
 
 * TC members can express approval or disapproval in a way that
   identifies their vote as a vote of a member of the TC.
 * TC members may not veto.
 * Anyone should be able to express their opinion.
 * Only the TC chair may approve the change.  This is so that the chair
   is responsible for the procedural aspects of the vote (ie, when it
   is finalized).
 
 Current Governance Repo Rules
 -
 
 These are currently satisfied by the following rules in Gerrit:
 
 * Anyone may comment on a change without leaving a vote.
 * Only TC members may vote +1/-1 in Code-Review.
 * Only the TC chair may vote Code-Review +2 and Workflow +1.
 
 Unsatisfied Governance Repo Requirements
 
 
 This does not satisfy the following potential requirements:
 
 * The TC chair may vote -1 and still approve a disputed change with 7
   yes votes (the chair currently would need to leave a comment
   indicating the actual vote tally).
 * Non-TC members may register their approval or disapproval with a
   vote (they currently may only leave comments to that effect).

Agreed.

 Cross-Project Repo Requirements
 ---
 
 * TC members can express approval or disapproval in a way that
   identifies their vote as a vote of a member of the TC.
 * TC members may not veto.  (This requirement has not achieved
   consensus.)
 * Non-TC members may register their approval or disapproval with a
   vote (we must be able to easily see that PTLs of affected projects
   have weighed in).
 * Only the TC chair may approve the change.  This is so that the chair
   is responsible for the procedural aspects of the vote (ie, when it
   is finalized).
 
 Current Cross-Project Repo Rules
 
 
 These are currently satisfied by the following rules in Gerrit:
 
 * Anyone may comment on a change and leave a vote.
 * Only TC members may vote +2 in Code-Review.
 * Only the TC chair may vote Workflow +1.

My understanding is that currently, any TC member can Workflow+1 (which
led to the accidental approval of the previous spec).

 Unsatisfied Cross-Project Repo Requirements
 
 The following potential requirements are not satisfied:
 
 * TC members may veto with a -2 Code-Review vote.  (This requirement
   has not achieved consensus.)
 
 Potential Changes
 =
 
 To address the unsatisfied requirements, we could make the following
 changes, which would only apply to the repos in question:
 
 To address this requirement:
 * The TC chair may vote -1 and still approve a disputed change with 7
   yes votes (the chair currently would need to leave a comment
   indicating the actual vote tally).
 
 We could change the Code-Review label function from MaxWithBlock to
 NoBlock, so that the votes in Code-Review are ignored by Gerrit, and
 only enforced by the chair.
 
 Additionally, we could write a custom submit rule that requires at
 least 7 +1 votes in order for the change to be submittable.

Our voting rules are slightly more complex than that, as you can see here:

http://git.openstack.org/cgit/openstack/governance/tree/reference/charter.rst#n77

The rule actually is more positive votes than negative votes, and a
minimum of 5 positive votes. The 7 YES rule is just a shortcut: once
we reach it, we can safely approve (otherwise we basically have to wait
for all votes to be cast, which, with asynchronous voting and the
difficulty of distinguishing +0 from not voted yet, is not so easy to
achieve).

So unless you can encode that in a rule, I'd keep it under the
responsibility of the chair to properly (and publicly) tally the votes
(which I do in the +2 final comment) according to our charter rules.
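To make the arithmetic concrete, here is a minimal sketch of that tally rule
(assuming the 13-member TC; the helper names are illustrative, not part of any
OpenStack tooling):

    TC_SIZE = 13

    def motion_passes(yes, no):
        # charter rule: more positive than negative votes, minimum 5 positive
        return yes > no and yes >= 5

    def can_approve_early(yes, no):
        # the "7 YES" shortcut: even if every member yet to vote votes no,
        # the motion still passes, so the chair need not wait
        remaining = TC_SIZE - yes - no
        return motion_passes(yes, no + remaining)

    print(motion_passes(5, 4))       # True: 5 > 4 and 5 >= 5
    print(can_approve_early(7, 0))   # True: worst case ends 7-6
    print(can_approve_early(6, 0))   # False: voting could still end 6-7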

-- 
Thierry Carrez (ttx)

Re: [openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread Flavio Percoco

On 13/02/15 14:17 +, Kuvaja, Erno wrote:

Hi Boris,



Thanks for your input. I do like the idea of picking up the changes that have
not been active. Do you have resources in mind to dedicate for this?



My personal take is that if some piece of work has not been touched for a
month, it's probably not that important after all, and the community should use
its resources to do work that has actual momentum. The changes themselves will
not disappear; the owner is still able to revive one if the time feels right to
continue it. The cleanup will just make it easier for people to focus
on things that are actually moving. It will also make bug tracking a bit
easier: one will see on the bug report that the patch was abandoned due to
inactivity, which indicates that the owner of that bug might not be working on
it after all.


I agree the above holds most of the time. However, I think we should
add one more step to the bullets you mentioned in your previous email.
That is, taking a good look at the review and understanding whether it'd be
worth taking it over.

Some reviews are stalled on minor fixes/rebases. It'd be a shame to
abandon a patch that would be a good fix for a bug just because of a missing
rebase.

Flavio





-  Erno



From: Boris Pavlovic [mailto:bpavlo...@mirantis.com]
Sent: Friday, February 13, 2015 1:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Cleanout of inactive change proposals
from review



Hi,



I believe that keeping the review queue clean is a great idea.

But I am not sure this set of rules is enough to justify abandoning patches.



Recently I wrote a blog post about making the OpenStack community more user
friendly:

http://boris-42.me/thoughts-on-making-openstack-community-more-user-friendly/



tl;dr;



Patches in review are a great source of information about what is missing in a
project. Removing them from the queue means losing this essential information.
The result of such actions is that the project doesn't address user
requirements, which is quite bad...



What if the project team instead continued work on all abandoned patches that
cover valid use cases, and finished them?

Best regards,

Boris Pavlovic 








On Fri, Feb 13, 2015 at 3:52 PM, Flavio Percoco fla...@redhat.com wrote:

   On 13/02/15 11:06 +, Kuvaja, Erno wrote:

   Hi all,

    We have reviews almost a year old (since their last update) still in the
    queue for glance. The discussion was initiated in yesterday's meeting
    about adopting an abandon policy for stale changes.

    The documentation can be found at
    https://etherpad.openstack.org/p/glance-cleanout-of-inactive-PS
    and any input would be appreciated. For your convenience, the current
    state is below:


   Thanks for putting this together. I missed the meeting yday and this
   is important.

   Glance - Cleanout of inactive change proposals from review


    We should start cleaning out our review list to keep the focus on
    changes that have momentum. Nova is currently abandoning change proposals
    that have been inactive for 4 weeks.



   Proposed action (if all of the following is True, abandon the PS):

   1. The PS has -1/-2 (including Jenkins)


    I assume you're talking about voting -1/-2 and not Workflow, right?
    (You said Jenkins after all, but just for the sake of clarity.)

   2. The change is proposed to glance, glance_store or
   python-glanceclient;
  specs should not be abandoned as their workflow is much slower

   3. No activity for 28 days from Author/Owner after the -1/-2


    I'd reword this as No activity. This includes comments, feedback,
    discussions and/or other committers taking over a patch.

    4. A query has been made to the owner to update the patch between 5 and
    10 days before abandoning (comment on PS/bug or something similar)

● Let's be smart on this. Flexibility is good during holiday seasons,
   feature freeze, etc.


   +2 to the above, I like it.

   Thanks again,
   Flavio

   --
   @flaper87
   Flavio Percoco
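
As a rough illustration of the four criteria above, a minimal sketch in Python
(the Change fields and helper are hypothetical, not a real Gerrit client):

    from collections import namedtuple
    from datetime import datetime, timedelta

    Change = namedtuple(
        "Change", "repo worst_vote last_owner_activity warned_days_ago")

    TARGET_REPOS = {"glance", "glance_store", "python-glanceclient"}

    def should_abandon(change, now):
        # all four proposed criteria must hold
        return (
            change.worst_vote <= -1                                     # 1.
            and change.repo in TARGET_REPOS                             # 2.
            and now - change.last_owner_activity >= timedelta(days=28)  # 3.
            and change.warned_days_ago is not None
            and 5 <= change.warned_days_ago <= 10                       # 4.
        )

    stale = Change("glance", -1, datetime(2015, 1, 10), 7)
    print(should_abandon(stale, now=datetime(2015, 2, 13)))  # True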
  









--
@flaper87
Flavio Percoco


[openstack-dev] [kolla] question about the mount namespace

2015-02-13 Thread Steven Dake (stdake)
Dan,

One of the technical guys here at Cisco asked me a really good technical
question about libvirt upgrades in containers which I was unable to answer.  My
suspicion is the Linux VM system just sorts it out, but I wanted to get your
input.

Assume libvirt version 1 is running in a container.  We kill the container;
the qemu processes go to the main pid space, as detailed in my blog [1], using
the --pid=host feature you developed.  Now version 2 of qemu is started during
an atomic upgrade (pull, stop, start of the container) to libvirt version 2.
How does the operating system know to keep a copy of version 1 around until
the system reboots?

Thanks
-steve

[1] 
http://sdake.io/2015/01/28/an-atomic-upgrade-process-for-openstack-compute-nodes/


Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-13 Thread Jay Pipes

On 02/13/2015 10:01 AM, Jordan Pittier wrote:

 What is the difference between just calling the Glance API to upload
an image, versus adding add() functionality to the HTTP image store?
You mean using glance image-create --location
http://server1/myLinuxImage [..] ? If so, I guess adding the add()
functionality will save the user from having to find the right POST
curl/wget command to properly upload his image.


How so?

If the user is already using Glance, they can use either the Glance REST 
API or the glanceclient tools.


-jay


On Fri, Feb 13, 2015 at 3:55 PM, Jay Pipes jaypi...@gmail.com wrote:

On 02/13/2015 09:47 AM, Jordan Pittier wrote:

Hi list,

I would like to add the 'add' capability to the HTTP glance store.

Let's say I (as an operator or cloud admin) provide an HTTP
server where
(authenticated/trusted) users/clients can make the following
HTTP request :

POST http://server1/myLinuxImage HTTP/1.1
Host: server1
Content-Length: 25600
Content-Type: application/octet-stream

mybinarydata[..]

Then the HTTP server will store the binary data, somewhere (for
instance
locally), some how (for instance in a plain file), so that the
data is
later on accessible by a simple GET http://server1/myLinuxImage

In that case, this HTTP server could easily be a full fleshed
Glance store.

Questions :
1) Has this been already discussed/proposed ? If so, could
someone give
me a pointer to this work ?
2) Can I start working on this ? (the 2 main work items are :
'add an
add method to glance_store._drivers.http.Store' and 'add a
delete
method to glance_store._drivers.http.Store (HTTP DELETE method)'


What is the difference between just calling the Glance API to upload
an image, versus adding add() functionality to the HTTP image store?

Best,
-jay











Re: [openstack-dev] [Neutron] Update on DB IPAM driver

2015-02-13 Thread John Belamaric


Put this way, it also makes sense. But I think I need to see it
translated into code to figure it out properly. Anyway, this is something
which pertains to the base classes rather than the reference driver.
I think from the perspective of the reference driver we should just raise if
an AnyAddressRequest is sent for a subnet where addresses are supposed to be
auto-generated, because the IPAM driver won't generate the address.


Makes sense.
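
A minimal sketch of that raising behaviour, with hypothetical class names (not
the actual driver code):

    class AnyAddressRequest(object):
        # marker request: "give me any free address"
        pass

    class AddressCalculationFailure(Exception):
        pass

    class DbIpamSubnetSketch(object):
        def __init__(self, auto_addresses=False):
            # True for subnets whose addresses are auto-generated (e.g. SLAAC)
            self.auto_addresses = auto_addresses

        def allocate(self, request):
            if self.auto_addresses and isinstance(request, AnyAddressRequest):
                # the IPAM driver cannot generate an address here, so raise
                raise AddressCalculationFailure(
                    "driver cannot generate addresses for this subnet")
            return "2001:db8::1"  # placeholder allocation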


Hmm. How dynamic is Python? I know in Ruby I could do something like this at 
class load time:

config.use_ipam ? DbBasePluginV2 = IpamDbBasePluginV2 : DbBasePluginV2 = 
LegacyDbBasePluginV2

and all the subclasses would work fine as before...

Technically yes.
From a practical perspective, however, if the subclass assumes that
create_port works the old way when it is instead working the IPAM
way, it might be mayhem!


Yes, certainly. But it provides a transition period for plugins to migrate to 
support the new IPAM model.
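
For reference, the equivalent class-aliasing trick works in Python too; a toy
sketch with illustrative names (not real Neutron classes):

    USE_IPAM = True  # stand-in for a config flag

    class LegacyDbBasePluginV2(object):
        def create_port(self, context, port):
            return "port created the old way"

    class IpamDbBasePluginV2(object):
        def create_port(self, context, port):
            return "port created via pluggable IPAM"

    # rebinding the name at import time, like the Ruby ternary quoted above;
    # any subclass defined after this point inherits the chosen behaviour
    DbBasePluginV2 = IpamDbBasePluginV2 if USE_IPAM else LegacyDbBasePluginV2

    class MyVendorPlugin(DbBasePluginV2):
        pass

    print(MyVendorPlugin().create_port(None, None))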


Re: [openstack-dev] [neutron] monkey patching strategy

2015-02-13 Thread Ihar Hrachyshka

On 02/13/2015 02:33 AM, Kevin Benton wrote:
 Why did the services fail with the stdlib patched? Are they
 incompatible with eventlet?

It's not that *service entry points* are not ready for neutron.* to be
monkey patched; it's the tools around it (flake8, which imports
neutron.hacking.checks; setuptools, which imports hooks from
neutron.hooks; etc.). It's also my belief that a base library should not
be monkey patched, so as not to put additional assumptions onto consumers.

(Though I believe that all the code in the tree should be monkey
patched, including those agents that currently run without the library
patched - for consistency, and to reflect the same test environment as the
unit tests, which will be patched from neutron/tests/__init__.py.)

/Ihar
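
For context, a minimal sketch of the convention under discussion: patch in the
service entry point, never at library import time (the module placement is
illustrative):

    # in a service entry point, NOT in library code:
    import eventlet

    eventlet.monkey_patch()  # greens socket, thread, time, ... in place

    # modules imported (or already imported) after this point see the
    # patched stdlib, matching the environment the unit tests will have
    import time
    print(time.sleep)  # eventlet's green sleep, not the blocking one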



Re: [openstack-dev] [Congress][Delegation] Google doc for working notes

2015-02-13 Thread Debojyoti Dutta
Hi Ruby

Good point. If you assume congress to be present, then scheduling and most
actions are a result of a policy decision and might not impact the
scheduler. The results of the LP/CVP would allow you to spawn resources at
end points.

However if you assume that there is congress + other entities, then it
might be better/cleaner to use a separate solver scheduling layer to decide
if the policies and constraints from congress are not in conflict with
other entities that might be asking for resources.

I guess it's about how the community wants to layer the components. We
wanted to first build the simple constraint solver layer, in the hope that
policy layers would talk to the advanced scheduler and this scheduler
driver would fit into the Nova scheduler or Gantt.

debo

On Fri, Feb 13, 2015 at 7:51 AM, ruby.krishnasw...@orange.com wrote:

  Hello Debo/Tim



 My understanding is that with Congress, things like filters (e.g.
 anti-affinity or other aggregates) will be replaced by policies
 written in Datalog.

 Goals (a policy) and constraints (policies in Congress) will also get
 translated into (for example) linear programs in some modelling language
 (e.g. PuLP).



 So this is likely to be a major change to the scheduler?



 Ruby



 *De :* Debojyoti Dutta [mailto:ddu...@gmail.com]
 *Envoyé :* vendredi 13 février 2015 14:06
 *À :* OpenStack Development Mailing List (not for usage questions)
 *Objet :* Re: [openstack-dev] [Congress][Delegation] Google doc for
 working notes



 Tim



 Wanted to clarify a bit. As I have mentioned before: Solver Scheduler is
 work done before this (Datalog-to-constraints) work, but we had kept it
 very generic so it could be integrated with something like Congress. In fact
 Ramki (who was one of the members of the original thread when you reached out
 to us) joined us to talk in Atlanta, where we described some of the same use
 cases using PuLP; Congress was still ramping up then. We were not aware
 of the Datalog-to-constraints work that you were doing, or we would
 have joined hands before.



 The question is this: going forward, how do we build this cool stuff together
 in the community? I am hoping the scheduler folks will be very excited too!



 debo



 On Thu, Feb 12, 2015 at 11:27 AM, Yathiraj Udupi (yudupi) 
 yud...@cisco.com wrote:

 Hi Tim,



 Thanks for your response.  Excited too to extend the collaboration and
 ensure there is no need to duplicate effort in the open source community.

  My responses inline.



  1)  Choice of LP solver.



 I see solver-scheduler uses Pulp, which was on the Congress short list as
 well.  So we’re highly aligned on the choice of underlying solver.



 YATHI - This makes me wonder why we can't easily adapt the
 solver-scheduler to your needs, rather than duplicating the effort!





 2) User control over VM-placement.





 To choose the criteria for VM-placement, the solver-scheduler user picks
 from a list of predefined options, e.g. ActiveHostConstraint,
 MaxRamAllocationPerHostConstraint.



 We’re investigating a slightly different approach, where the user defines
 the criteria for VM-placement by writing any policy they like in Datalog.
 Under the hood we then convert that Datalog to an LP problem.  From the
 developer’s perspective, with the Congress approach we don’t attempt to
 anticipate the different policies the user might want and write code for
 each policy; instead, we as developers write a translator from Datalog to
 LP.  From the user’s perspective, the difference is that if the option they
 want isn’t on the solver-scheduler's list, they’re out of luck or need to
 write the code themselves.  But with the Congress approach, they can write
 any VM-placement policy they like.



 What I’d like to see is the best of both worlds.  Users write Datalog
 policies describing whatever VM-placement policy they want.  If the policy
 they’ve written is on the solver-scheduler’s list of options, we use the
 hard-coded implementation, but if the policy isn’t on that list we
 translate directly to LP.  This approach gives us the ability to write
 custom code to handle common cases while at the same time letting users
 write whatever policy they like.





 YATHI - The idea of providing some default constraint classes in Solver
 Scheduler was to enable easy pluggability for various placement policy
 scenarios.  We can easily add a custom constraint class in the solver
 scheduler that enables adding additional constraints at runtime (a PuLP
 model or any other model we can use and support).  It will just take in
 any external policy (say, Datalog in the Congress example) and easily
 add the resulting set of translated constraints via this custom
 constraint builder class.  This is something we can definitely use to add
 value to the solver scheduler by implementing it here.





 3) API and architecture.



 Today the solver-scheduler's VM-placement policy is defined at config-time
 (i.e. not run-time).  Am I correct that 

Re: [openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread Kuvaja, Erno
Hi,

This is getting so mixed up that I'll jump to inline commenting as well.

From: Boris Pavlovic [mailto:bo...@pavlovic.me]
Sent: 13 February 2015 15:01
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Cleanout of inactive change proposals 
from review

Erno,


My personal take is that if some piece of work has not been touched for a
month, it's probably not that important after all, and the community should use
its resources to do work that has actual momentum.

Based on my experience, one of the most common situations in OpenStack is this:
1) Somebody makes a quick change (but with the right idea), because he (and
usually others) needs it
2) It doesn't pass the review process quickly
3) The author of the patch has a billion other tasks (not related to upstream)
and can't work on the change anymore
4) The patch gets abandoned and forgotten

I'm unfortunately starting to sound like a broken record, but again: if no-one
has touched the change (or taken it over) in 4 weeks, despite a
clear indication that the change will be cleaned from review if it does not
get traction, it's probably not worth keeping there any longer either.

The changes themselves will not disappear; the owner is still able to revive
one if the time feels right to continue it.

Nobody ever reviews abandoned changes.

Repeating the previous point: if your change gets abandoned because of
inactivity and you don't care about it, why should someone else who hasn't
cared so far?

 The cleanup will just make it easier for people to focus on things that are 
actually moving.

Making decisions based on activity around patches is not the best way to do
things.

So what would be a better way to do it? We currently have 4 pages of change
proposals in review that have been touched by someone in February. Honest
question: who scrolls further than that, or even down to that 4th page? From
page 6 onward there are changes that were last touched last year. And
this is purely from the "updated" column, so I did not look at when the
owner/author/committer last touched them.

If we take a look at the structure of OpenStack projects we will see the
following:

1) Things that are moving fast/well are usually related to things that the core
team (or active members) are working on.
This team is resolving a limited set of use cases (mostly because
not every member is running its own production cloud)

This is very true; let's drop "core team" and keep the "active
members" here. Because it's a community, it's extremely difficult to get people
working on something other than what they or their employers see as important.

2) Operators/admins/DevOps who are running their own clouds have a lot of
experience and know a lot of missing use cases and sources of issues. But
usually they are not involved in the community process, so they don't know the
whole road map of the project, are not able to fully align their patches with
the road map, or simply don't have enough time to work on features.

So abandoning patches from this second group just because of inactivity can do
big harm to the project.

I don't think pushing for activity is a bad thing, nor would it do big harm to
the project(s). These are matters of priority, and I do not see any benefit in
keeping changes in review that haven't been touched for months (the current
situation). If this group 2 is the fundamental reason our changes stall in
review, we need to fix that rather than let it clutter the queue. We are
talking about an open source project and community here. I find it extremely
hard to justify asking anyone in the community to take responsibility for
someone else's production cloud when they have no interest in resourcing the
above for the benefit of their own business.

 Do you have resources in mind to dedicate for this?

Sometimes I am doing it myself, sometimes newbies in the community do it
(wanting some work in order to get involved), and sometimes the core team works
on old patches.

We will not run out of bug-fixing work, and the commits against those bugs
will stay attached to the bugs even after they get abandoned.

Important changesets are supposed to have bugs (or blueprints) assigned
to them, so even if the changeset is abandoned, its description still
remains on Launchpad in one form or another, and we will not lose it
from the project's general backlog

This is not true in a lot of cases. =)
In many cases DevOps/operators don't know about, or don't want to spend time
on, Launchpad/specs and so on.

Then we need to educate and encourage them instead of supporting the
"throw it in and someone will maybe take care of it some day" attitude.


-  Erno

Best regards,
Boris Pavlovic


On Fri, Feb 13, 2015 at 5:17 PM, Kuvaja, Erno kuv...@hp.com wrote:
Hi Boris,

Thanks for your input. I do like the idea of picking up the changes that have 
not been active. Do you have resources in mind to dedicate for this?

My 

[openstack-dev] [neutron] Prefix delegation using dibbler client

2015-02-13 Thread Robert Li (baoli)
Hi,

while trying to integrate the dibbler client with neutron to support PD, we
encountered a few issues with the dibbler client (and server). With a neutron
router, we have the qg-xxx interface that is connected to the public network,
on which the delegating router runs a DHCPv6 server. For each subnet
with PD enabled, a router port will be created in the neutron router. As a
result, a new PD request will be sent that asks for a prefix from the
delegating router. Keep in mind that the subnet is added to the router
dynamically.

We thought about the following options:

  1.  use a single dibbler client to support the above requirement. This means
the client should be able to accept new requests on the fly, either through
configuration reload or other interfaces. Unfortunately, the dibbler client
doesn't support this.
  2.  start a dibbler client per subnet. All of the dibbler clients will be
using the same outgoing interface (the qg-xxx interface).
Unfortunately, the dibbler client uses /etc/dibbler and /var/lib/dibbler for
its state (in which it saves the duid file, pid file, and other internal
state). This means it can only support one client per network node.
  3.  run a single dibbler client that requests a shorter prefix (say a /56)
and splits it among the subnets with PD enabled (a neutron subnet requires a
/64). Depending on the neutron router setup, this may result in a significant
waste of prefixes (see the quick sketch below).
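
A quick illustration of that waste with the stdlib ipaddress module (the
example prefix is arbitrary):

    import ipaddress

    delegated = ipaddress.ip_network("2001:db8:0:ff00::/56")
    subnets = list(delegated.subnets(new_prefix=64))
    print(len(subnets))   # 256 /64 subnets per delegated /56
    print(subnets[0])     # 2001:db8:0:ff00::/64

A router with only a handful of PD subnets would leave most of those 256 /64s
unused.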

Given the significant drawback with 3, we are left with 1 and 2. After looking 
at the dibbler source code, we found that 2 is easier to achieve for now by 
making some small changes in the dibbler code. In the long run, we think option 
1 is better.

The changes we made to the linux dibbler client code, and the dibbler server 
code can be found in here: 
https://github.com/johndavidge/dibbler/tree/cloud-dibbler. Basically it does a 
few things:
  — create a unique working area per dibbler client
  — since all the clients use the same outgoing interface, we'd like each
dibbler client to use a unique LLA as its source address when sending messages.
This avoids clients receiving server messages that are not intended for
them.
  — we found that the dibbler server uses the transaction ID alone to identify
a match between a request and an answer. This would require that unique
transaction IDs be used among all the existing clients, yet we found that
clients could use the same transaction IDs in our environment. Therefore, a
small change is made in the server code so that it also takes the request
sender into consideration while looking up a match.


Option 1 requires a better understanding of the dibbler code, and we think it
may not be possible to make it happen in the Kilo timeframe. But we think it
has significant advantages over option 2. Regardless, the changes made for
option 2 are also needed, since we need to run one dibbler client per neutron
router.
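
As a rough sketch of option 2 (paths, config contents, and the '-w'
working-directory switch are assumptions reflecting the patched client
described above, not stock dibbler):

    import os
    import subprocess

    def start_pd_client(router_id, subnet_id, qg_interface):
        # unique per-client state area so duid/pid files don't collide
        workdir = "/var/lib/neutron/pd/%s/%s" % (router_id, subnet_id)
        os.makedirs(workdir)
        with open(os.path.join(workdir, "client.conf"), "w") as conf:
            conf.write('iface "%s" {\n    pd\n}\n' % qg_interface)
        return subprocess.Popen(["dibbler-client", "run", "-w", workdir])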

Now the issue is how to get those changes (possibly with further revision)
into an official dibbler release ASAP so that we can use them for the Kilo
release. John Davidge has contacted the dibbler authors, and hasn't received a
response so far.

Comments and thoughts are welcome.

Cheers,
—Robert
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-13 Thread Jordan Pittier
Hmm, this doesn't have to be complicated, for a start.

- Figuring out the HTTP method the server expects (POST/PUT)
Yeah, I agree. There's no definitive answer to this, but I think PUT makes
sense here. I googled 'post vs put' and found that idempotency, and the
question of who is in charge of choosing the actual resource location (the
client vs the server), favor PUT.

- Adding support for at least a few HTTP auth methods
Why should the write path be more secure/more flexible than the read
path? If I take a look at the current HTTP store, only basic auth is
supported (i.e. http://user:pass@server1/myLinuxImage). I suggest the write
path (i.e. the add() method) should support the same auth mechanism. The cloud
admin could also add some firewall rules to make sure the HTTP backend
server can only be accessed by the glance-api servers.

- Having a suffixed URL where we're sure glance will have proper
 permissions to upload data.
That's up to the cloud admin/operator to make work. The HTTP
glance_store could have 2 config flags:
a) http_server, a string with the scheme (http vs https) and the hostname
of the HTTP server, e.g. 'http://server1'
b) path_prefix, a string that will prefix the path part of the image
URL. This config flag could be left empty/is optional.

- Handling HTTP responses from the server
That's of course to be discussed. But, IMO, this should be as simple as: if
the response code is 200 or 202 then OK, else raise GlanceStoreException. I
am not sure any other glance store is more granular than this.

- How can we handle quota?
I am new to glance_store; is there a notion of quotas in glance stores? I
thought Glance (API) was handling this. What kind of quotas are we talking
about here?

Frankly, it shouldn't add that much code. I feel we can make it clean if we
leverage the relevant Python modules (httplib etc.).
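
To make that concrete, a rough sketch of what such an add() could look like
(the driver signature and return tuple are modeled on other glance_store
drivers and should be treated as assumptions):

    import hashlib

    import requests

    class HttpStoreSketch(object):
        def __init__(self, http_server, path_prefix=""):
            self.http_server = http_server  # e.g. "http://server1"
            self.path_prefix = path_prefix  # optional, may be empty

        def add(self, image_id, image_file, image_size, context=None):
            url = "%s%s/%s" % (self.http_server, self.path_prefix, image_id)
            data = image_file.read()
            resp = requests.put(
                url, data=data,
                headers={"Content-Type": "application/octet-stream"})
            if resp.status_code not in (200, 201, 202):
                raise IOError("upload failed: HTTP %d" % resp.status_code)
            checksum = hashlib.md5(data).hexdigest()
            # (location, size, checksum, metadata) -- the usual driver contract
            return url, len(data), checksum, {}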

Regards,
Jordan


On Fri, Feb 13, 2015 at 4:20 PM, Flavio Percoco fla...@redhat.com wrote:

 On 13/02/15 16:01 +0100, Jordan Pittier wrote:

 What is the difference between just calling the Glance API to upload an
 image,

 versus adding add() functionality to the HTTP image store?
 You mean using glance image-create --location http://server1/
 myLinuxImage [..]
  ? If so, I guess adding the add() functionality will save the user from
 having to find the right POST curl/wget command to properly upload his
 image.


 I believe it's more complex than this. Having an `add` method for the
 HTTP store implies:

 - Figuring out the http method the server expects (POST/PUT)
 - Adding support for at least few HTTP auth methods
 - Having a suffixed URL where we're sure glance will have proper
  permissions to upload data.
 - Handling HTTP responses from the server w.r.t the status of the data
  upload. For example: What happens if the remote http server runs out
  of space? What's the response status going to be like? How can we
  make glance agnostic to these discrepancies across HTTP servers so
  that it's consistent in its responses to glance users?
 - How can we handle quota?

 I'm not fully opposed, although it sounds like it's not worth it code-wise,
 maintenance-wise and performance-wise. The user will have to run just
 1 command, but at the cost of all of the above.

 Do the points listed above make sense to you?

 Cheers,
 Flavio



 On Fri, Feb 13, 2015 at 3:55 PM, Jay Pipes jaypi...@gmail.com wrote:

On 02/13/2015 09:47 AM, Jordan Pittier wrote:
  Hi list,

I would like to add the 'add' capability to the HTTP glance store.

Let's say I (as an operator or cloud admin) provide an HTTP server
where
(authenticated/trusted) users/clients can make the following HTTP
request :

POST http://server1/myLinuxImage HTTP/1.1
Host: server1
Content-Length: 25600
Content-Type: application/octet-stream

mybinarydata[..]

Then the HTTP server will store the binary data, somewhere (for
instance
locally), some how (for instance in a plain file), so that the
 data is
later on accessible by a simple GET http://server1/myLinuxImage

In that case, this HTTP server could easily be a full fleshed
 Glance
store.

Questions :
1) Has this been already discussed/proposed ? If so, could someone
 give
me a pointer to this work ?
    2) Can I start working on this? (the 2 main work items are: 'add an
    add method to glance_store._drivers.http.Store' and 'add a delete
    method to glance_store._drivers.http.Store (HTTP DELETE method)')


What is the difference between just calling the Glance API to upload an
image, versus adding add() functionality to the HTTP image store?

Best,
-jay



Re: [openstack-dev] [neutron] Prefix delegation using dibbler client

2015-02-13 Thread Ihar Hrachyshka

Thanks for the write-up! See inline.

On 02/13/2015 04:34 PM, Robert Li (baoli) wrote:
 Hi,
 
 while trying to integrate dibbler client with neutron to support
 PD, we countered a few issues with the dibbler client (and server).
 With a neutron router, we have the qg-xxx interface that is
 connected to the public network, on which a dhcp server is running
 on the delegating router. For each subnet with PD enabled, a router
 port will be created in the neutron router. As a result, a new PD
 request will be sent that asks for a prefix from the delegating
 router. Keep in mind that the subnet is added into the router
 dynamically.
 
 We thought about the following options:
 
 1. use a single dibbler client to support the above requirement.
 This means, the client should be able to accept new requests on the
 fly either through configuration reload or other interfaces. 
 Unfortunately, dibbler client doesn’t support it.

Sorry for my ignorance of the PD implementation (I will definitely look at
it next week), but what does this entry above mean? Do you want a
single dibbler instance running per router, serving all subnets plugged
into it? And you want to get configuration updates when a new subnet
is plugged into, or removed from, the router?

If that's the case, why not just restart the client?

 2. start a dibbler client per subnet. All of the dibbler clients
 will be using the same outgoing interface (which is the qg-xxx 
 interface). Unfortunately, dibbler client uses /etc/dibbler and 
 /var/lib/dibbler for its state (in which it saves duid file, pid 
 file, and other internal states). This means it can only support
 one client per network node. 3. run a single dibbler client that
 requests a smaller prefix (say /56) and splits it among the subnets
 with PD enabled (neutron subnet requires /64). Depending on the
 neutron router setup, this may result in significant waste of
 prefixes.

Just to understand all the options on the table: can we implement the above
option with stock dibbler?

 
 Given the significant drawback with 3, we are left with 1 and 2.
 After looking at the dibbler source code, we found that 2 is easier
 to achieve for now by making some small changes in the dibbler
 code. In the long run, we think option 1 is better.
 
 The changes we made to the linux dibbler client code, and the
 dibbler server code can be found in here: 
 https://github.com/johndavidge/dibbler/tree/cloud-dibbler.
 Basically it does a few things: — create a unique working area per
 dibbler client — since all the clients use the same outgoing
 interface, we’d like each dibbler client to use a unique LLA as its
 source address when sending messages. This would avoid clients to
 receive server messages that are not intended for them. — we found
 that dibbler server uses transaction ID alone to identify a match
 between a request and an answer. This would require that unique 
 transaction IDs be used among all the existing clients. We found
 that clients could use the same transaction IDs in our
 environment. Therefore, a little change is made in the server code
 so that it will take the request sender into consideration while
 looking up a match.
 
 
 Option 1 requires better understanding of the dibbler code, and we
 think that it may not be possible to make it happen in the kilo
 timeframe. But we think it has significant advantages over option
 2. Regardless, changes made for 2 is also needed since we need to
 run one dibbler client per neutron router.
 
 Now the issue is how to make those changes (possible with further 
 revision) into an official dibbler release ASAP so that we can use
 them for kilo release. John Davidge has contacted the dibbler
 authors, and hasn’t received response so far.

That's disturbing from a packager's point of view.

From the Red Hat side, we are missing dibbler packaged in Fedora, but that's a
minor issue; we can easily put in some effort and do it.

As for shipping side patches with it, that's a major problem, especially
considering the fact that those patches were not reviewed or accepted
by dibbler upstream.

I think it is critical to reach dibbler authors ASAP and see what they
have to say about these patches. Remember, we're in Kilo-3 mode already.

Reading through the spec [1], it seems to me that dibbler won't be
involved at all unless users explicitly omit prefix info when creating
subnets. Is that correct? If so, can we somehow give distributions and
deployers an easy way out of using a custom patched dibbler, by
disabling the feature completely if/when they consider shipping this
version of dibbler unacceptable? I think we could introduce a new
config option that would forbid prefix-delegated subnets (?).

Speaking of deployers, have you also considered providing a patch for
the sanity_check tool that would test whether the local dibbler
installation supports the needed features?

Also, on a relevant note, how do you test the feature in gate? Distro
packages probably 

Re: [openstack-dev] [Neutron] Update on DB IPAM driver

2015-02-13 Thread Salvatore Orlando
On 13 February 2015 at 16:22, John Belamaric jbelama...@infoblox.com
wrote:



   From: Salvatore Orlando sorla...@nicira.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Friday, February 13, 2015 at 8:26 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Update on DB IPAM driver

 ...


  I think the auto-generated case should be a base class as you
 described in [1], but each subclass would implement the specific
 auto-generation. See the discussion at line 468 in [2] and see what you
 think. Of course for addresses that come from RA there would be no IPAM.


  I think this makes sense.



  Thinking a little more on this, in the case of magic address prefixes,
 we probably should have the factory method generate the right request
 class. That way, the logic for those magic prefixes is all in one place.
 You could still specify the class in the request but the magic prefixes
 would take priority.


Put this way, it also makes sense. But I think I need to see it
translated into code to figure it out properly. Anyway, this is something
which pertains to the base classes rather than the reference driver.
I think from the perspective of the reference driver we should just raise
if an AnyAddressRequest is sent for a subnet where addresses are supposed
to be auto-generated, because the IPAM driver won't generate the address.





  [1] https://review.openstack.org/#/c/150485/
  [2]
 https://review.openstack.org/#/c/153236/2/neutron/db/db_base_plugin_v2.py,unified




  - The db base refactoring being performed by Pavel is under way [3]. It
 is worth noting that this is a non-negligible change to some of Neutron's
 basic and more critical workflows. We should expect pushback from the
 community regarding the introduction of this change in the 3rd milestone.
 At this stage I would suggest either:
 A) consider a strategy for running pluggable IPAM as optional
 B) consider delaying to Liberty.
 (and that's where I get virtually jeered and pelted with rotten tomatoes)


  I wish I had some old tomatoes! Seriously, I think A is a reasonable
 approach. To make this really explicit we may want to basically replace the
 DB plugin class with a shim that delegates to either the current
 implementation or the new implementation, depending on the flag.


  The fact that the current implementation is pretty much a bunch of
 private methods in the db base plugin class executed within a transaction
 for creating a port makes the situation a wee bit more complex. I'm not
 sure we can replace the db plugin class with a shim so easily, because we
 need to consider the impact on plugins which inherit from this base class.
 For instance some plugins override methods from the base class, and this
 would be a problem. For those plugins we must ensure old-style IPAM is
 performed. A transitory solution might be to have two versions of the
 relevant methods: one would be the current one, and the other would be the
 one leveraging pluggable IPAM. During plugin initialisation, the plugin
 itself will decide whether or not to use the latter methods. This might be
 tunable with a configuration parameter too. The downside of this approach
 is that it will not allow us to remove the old baked-in IPAM code, and will
 have an impact on code maintainability, which ultimately will result in
 accumulating even more technical debt. However, I might be missing some
 better alternative, so if you have any proposal just let me know.


  Hmm. How dynamic is Python? I know in Ruby I could do something like
 this at class load time:

  config.use_ipam ? DbBasePluginV2 = IpamDbBasePluginV2 : DbBasePluginV2 =
 LegacyDbBasePluginV2

  and all the subclasses would work fine as before...


Technically yes.
From a practical perspective, however, if the subclass assumes that
create_port works the old way when it is instead working the
IPAM way, it might be mayhem!









Re: [openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread James E. Blair
Kuvaja, Erno kuv...@hp.com writes:

 Hi all,

 We have almost year old (from last update) reviews still in the queue
 for glance. The discussion was initiated on yesterday's meeting for
 adopting abandon policy for stale changes.

Hi,

Abandoning changes submitted by other people is not a good experience
for people who are contributing to OpenStack, but fortunately, it is not
necessary.

Our current version of Gerrit supports a rich syntax for searching,
which you can use to create personal or project dashboards.  It is quite
easy to filter out changes that appear old or inactive, without the
negative experience of having them abandoned.

Many projects, including all of the infra projects (which see a
substantial number of changes) are able to function without
automatically abandoning changes.

If you could identify why you feel the need to abandon other people's
changes, I'm sure we can find a resolution.

-Jim



Re: [openstack-dev] What should openstack-specs review approval rules be ?

2015-02-13 Thread James E. Blair
Thierry Carrez thie...@openstack.org writes:

 Current Cross-Project Repo Rules
 
...
 * Only the TC chair may vote Workflow +1.

 My understanding is that currently, any TC member can Workflow+1 (which
 lead to the accidental approval of the previous spec).

I think that was instachanged by Doug after the TC meeting:

  https://review.openstack.org/#/c/150581/

So the immediate problem is abated, and we can deliberate about any
other changes.

 Additionally, we could write a custom submit rule that requires at
 least 7 +1 votes in order for the change to be submittable.

 Our voting rules are slightly more complex than that, as you can see here:

 http://git.openstack.org/cgit/openstack/governance/tree/reference/charter.rst#n77

 The rule actually is more positive votes than negative votes, and a
 minimum of 5 positive votes. The 7 YES rule is just a shortcut: once
 we reach it, we can safely approve (otherwise we basically have to wait
 for all votes to be cast, which, with asynchronous voting and the
 difficulty of distinguishing +0 from not voted yet, is not so easy to
 achieve).

 So unless you can encode that in a rule, I'd keep it under the
 responsibility of the chair to properly (and publicly) tally the votes
 (which I do in the +2 final comment) according to our charter rules.

The mechanism for such a change is Prolog.  I suspect that encoding that
rule is possible, though I am not familiar enough with Prolog to say for
sure.  The part of me that loves learning new programming languages
wants to find out.  But part of me agrees with you and thinks we should
just leave it to the Chair.

Actually, I think I may have missed a requirement that would preclude
the use of that rule.  We may consider Chair is able to approve trivial
administrative changes to the governance repo without a full vote as a
requirement, in which case we want the status quo.

-Jim



Re: [openstack-dev] [Neutron] Update on DB IPAM driver

2015-02-13 Thread Salvatore Orlando
On 13 February 2015 at 17:16, John Belamaric jbelama...@infoblox.com
wrote:



  Put it in this way, it also makes sense. But I think I need to see it
 translated in code to figure it out properly. Anyway, this is something
 which pertains the base classes rather than the reference driver.
 I think from the perspective of the reference driver we should just raise
 if a AnyAddressRequest is sent for a subnet where addresses are supposed
 to be autogenerated, because the ipam driver won't generate the address.



  Makes sense.


  Hmm. How dynamic is Python? I know in Ruby I could do something like
 this at class load time:

  config.use_ipam ? DbBasePluginV2 = IpamDbBasePluginV2 : DbBasePluginV2
 = LegacyDbBasePluginV2

  and all the subclasses would work fine as before...


  Technically yes.
 From a practical perspective however if the subclass is assuming that
 create_port works in the old way, and then instead is working in the
 ipam way, it might be mayhem!



  Yes, certainly. But it provides a transition period for plugins to
 migrate to support the new IPAM model.


I think we need to check Pavel's work on the plumbing for the IPAM
driver to assess whether it would be more convenient to have an IPAM base
class which extends and overrides DbBasePluginV2, or to have 'ipam' versions
of the methods in that class. In the latter case the approach wouldn't be too
different from the one you envisioned for classes, e.g.:

if CONF.enable_ipam_drivers:
    self.create_port = self.create_port_ipam

(otherwise self.create_port would be the same method that we have today)
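
A self-contained toy version of that per-method selection (the flag and method
names are hypothetical):

    ENABLE_IPAM_DRIVERS = True  # stand-in for CONF.enable_ipam_drivers

    class DbBasePluginV2Sketch(object):
        def __init__(self):
            if ENABLE_IPAM_DRIVERS:
                # instance attribute shadows the class-level method
                self.create_port = self._create_port_ipam

        def create_port(self, context, port):
            return "legacy, baked-in IPAM path"

        def _create_port_ipam(self, context, port):
            return "pluggable IPAM path"

    print(DbBasePluginV2Sketch().create_port(None, None))  # pluggable IPAM path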

Salvatore






Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-13 Thread Kyle Mestery
I was traveling for two days, and I missed a great thread like this. Go
figure! One comment in-line.

On Wed, Feb 11, 2015 at 3:55 AM, Flavio Percoco fla...@redhat.com wrote:

 Greetings all,

 During the last two cycles, I've had the feeling that some of the
 things I love the most about this community are degrading and moving
 to a state that I personally disagree with. With the hope of seeing
 these things improve, I'm taking the time today to share one of my
 concerns.

 Since I believe we all work in good faith, and we *all* should assume
 such when it comes to things happening in our community, I won't name
 names and I won't point fingers - yes, I don't have enough fingers to
 point based on the info I have. People who fall into the groups I'll
 mention below know that I'm talking to them.

 This email is dedicated to the openness of our community/project.

 ## Keep discussions open

 I don't believe there's anything wrong with kicking off some
 discussions about specs/bugs in private channels. I don't believe
 there's anything wrong with having calls to speed up some discussions.
 HOWEVER, I believe it's *completely* wrong to consider those private
 discussions sufficient. If you have had that kind of private
 discussion, if you've discussed a spec privately and right afterwards
 went upstream and said: This has been discussed in a call and it's
 good to go, I beg you to stop for 2 seconds and reconsider. I
 don't believe you were able to fit the whole community in that call or
 that you had enough consensus.

 Furthermore, you should consider that having private conversations
 doesn't, in the end, help with speeding up discussions. We have a
 community of people who *care* about the project they're working on.
 This means that whenever they see something that doesn't make much
 sense, they'll chime in and ask for clarification. If there was a
 private discussion on that topic, you'll have to provide the details
 of that discussion and bring that person up to date, which means the
 discussion will basically start again... from scratch.

 ## Mailing List vs IRC Channel

 I get it, our mailing list is freaking busy, keeping up with it is
 hard and time consuming and that leads to lots of IRC discussions. I
 don't think there's anything wrong with that but I believe it's wrong
 to expect *EVERYONE* to be in the IRC channel when those discussions
 happen.

 If you are discussing something on IRC that requires the attention of
 most of your project's community, I highly recommend you use the
 mailing list as opposed to pinging everyone independently and fighting
 with time zones. Using IRC bouncers as a replacement for something
 that should go to the mailing list is absurd. Please, use the mailing
 list and don't be afraid of having a bigger community chiming in on
 your discussion.  *THAT'S A GOOD THING*

 Changes, specs, APIs, etc. Everything is good for the mailing list.
 We've fought hard to make this community grow, why shouldn't we take
 advantage of it?

 ## Cores are *NOT* special

 At some point, for some reason that is unknown to me, this message
 changed and the feeling of cores being some kind of superheroes became
 a thing. It's gotten far enough that I've come to know
 that some projects even have private (flagged with +s), password
 protected, IRC channels for core reviewers.

 This is not right, and I don't believe core reviewers (note I did not
just say core, but core reviewer) are special in any way. In fact, they are
arguably less special because they have a huge responsibility: reviewing code
in a timely manner and merging changes to close bugs and land features! This
is nothing special, just much more additional work. I think more projects
need to do a better job of ensuring their core reviewers are actually
reviewing code, and it's a good idea to cycle core reviewers in and
out more frequently. Otherwise, a sense of entitlement can occur,
and this is where things go bad.


 This is the point where my good faith assumption skill falls short.
 Seriously, don't get me wrong but: WHAT IN THE ACTUAL F**K?

 THERE IS ABSOLUTELY NOTHING PRIVATE FOR CORE REVIEWERS* TO
 DISCUSS.

 If anything core reviewers should be the ones *FORCING* - it seems
 that *encouraging* doesn't have the same effect anymore - *OPENNESS* in
 order to include other non-core members in those discussions.

 Remember that the core flag is granted because of the reviews that
 person has provided and because that individual *WANTS* to be part of
 it. It's not a prize. In fact, I consider core reviewers to
 be volunteers, and their work is infinitely appreciated.

 This is a very good point and I agree with it.


 Since, All generalizations are false, including this one. - Mark
 Twain, I'm pretty sure there are folks that disagree with the above.
 If you do, I care about your thoughts. This is worth discussing and
 fighting for.

 All the above being said, I'd like to 

Re: [openstack-dev] [Congress][Delegation] Google doc for working notes

2015-02-13 Thread ruby.krishnaswamy
Hello Debo/Tim

My understanding is that with Congress, things like filters (e.g. anti-affinity
or other aggregates) will be replaced by policies written in Datalog.
Goals (a policy) and constraints (policies in Congress) will also get
translated into (for example) linear programs in some modelling language
(e.g. PuLP).

So this is likely to be a major change to the scheduler?

Ruby

De : Debojyoti Dutta [mailto:ddu...@gmail.com]
Envoyé : vendredi 13 février 2015 14:06
À : OpenStack Development Mailing List (not for usage questions)
Objet : Re: [openstack-dev] [Congress][Delegation] Google doc for working notes

Tim

Wanted to clarify a bit. As I have mentioned before: Solver Scheduler is work
done before this (Datalog-to-constraints) work, but we had kept it very
generic so it could be integrated with something like Congress. In fact Ramki
(who was one of the members of the original thread when you reached out to us)
joined us to talk in Atlanta, where we described some of the same use cases
using PuLP; Congress was still ramping up then. We were not aware of the
Datalog-to-constraints work that you were doing, or we would have joined
hands before.

The question is this: going forward, how do we build this cool stuff together in 
the community? I am hoping the scheduler folks will be very excited too!

debo

On Thu, Feb 12, 2015 at 11:27 AM, Yathiraj Udupi (yudupi) yud...@cisco.com wrote:
Hi Tim,

Thanks for your response.  Excited too to extend the collaboration and ensure 
there is no need to duplicate effort in the open source community.
 My responses inline.

1)  Choice of LP solver.

I see solver-scheduler uses Pulp, which was on the Congress short list as well. 
 So we’re highly aligned on the choice of underlying solver.

YATHI - This makes me wonder why we can't easily adapt the solver-scheduler to 
your needs, rather than duplicating the effort!


2) User control over VM-placement.


To choose the criteria for VM-placement, the solver-scheduler user picks from a 
list of predefined options, e.g. ActiveHostConstraint, 
MaxRamAllocationPerHostConstraint.

We’re investigating a slightly different approach, where the user defines the 
criteria for VM-placement by writing any policy they like in Datalog.  Under 
the hood we then convert that Datalog to an LP problem.  From the developer’s 
perspective, with the Congress approach we don’t attempt to anticipate the 
different policies the user might want and write code for each policy; instead, 
we as developers write a translator from Datalog to LP.  From the user’s 
perspective, the difference is that if the option they want isn’t on the 
solver-scheduler's list, they’re out of luck or need to write the code 
themselves.  But with the Congress approach, they can write any VM-placement 
policy they like.

What I’d like to see is the best of both worlds.  Users write Datalog policies 
describing whatever VM-placement policy they want.  If the policy they’ve 
written is on the solver-scheduler’s list of options, we use the hard-coded 
implementation, but if the policy isn’t on that list we translate directly to 
LP.  This approach gives us the ability to write custom code to handle common 
cases while at the same time letting users write whatever policy they like.


YATHI - The idea of providing some default constraint classes in Solver
Scheduler was to enable easy pluggability for various placement policy
scenarios. We can easily add a custom constraint class in the solver scheduler
that enables adding additional constraints at runtime (a PuLP model or any
other model we can use and support). It will just take in any external policy
(say, Datalog in the Congress example) and easily add the resulting set of
translated constraints via this custom constraint builder class. This is
something we can definitely use to add value to the solver scheduler by
implementing it here.
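
For readers unfamiliar with PuLP, a toy model of the kind of constraint such a
class could contribute (names and numbers are made up): place three VMs on two
hosts without exceeding per-host RAM.

    import pulp

    vms = {"vm1": 2048, "vm2": 1024, "vm3": 4096}   # RAM demand, MB
    hosts = {"host1": 6144, "host2": 4096}          # free RAM, MB

    prob = pulp.LpProblem("vm_placement", pulp.LpMinimize)
    # x[vm, host] == 1 if the VM is placed on that host
    x = pulp.LpVariable.dicts(
        "x", [(v, h) for v in vms for h in hosts], cat="Binary")

    # trivial objective; a real cost class would weigh hosts differently
    prob += pulp.lpSum(x[v, h] for v in vms for h in hosts)

    # each VM is placed exactly once
    for v in vms:
        prob += pulp.lpSum(x[v, h] for h in hosts) == 1

    # a MaxRamAllocationPerHostConstraint-style rule, expressed as LP
    for h, capacity in hosts.items():
        prob += pulp.lpSum(vms[v] * x[v, h] for v in vms) <= capacity

    prob.solve()
    for (v, h), var in x.items():
        if var.value() == 1:
            print(v, "->", h)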


3) API and architecture.

Today the solver-scheduler's VM-placement policy is defined at config-time 
(i.e. not run-time).  Am I correct that this limitation is only because there’s 
no API call to set the solver-scheduler’s policy?  Or is there some other 
reason the policy is set at config-time?

Congress policies change at runtime, so we’ll definitely need a VM-placement 
engine whose policy can be changed at run-time as well.

YATHI - We have working code to set VM placement policies at run-time,
dynamically selecting the constraint or cost classes to use. It has yet to be
upstreamed to the solver scheduler stackforge repo, but will be soon. But yeah,
I agree with you: this is definitely needed for any policy-driven VM placement
engine, as the policies are dynamic. Short answer: yes, the solver scheduler
can support this.


If we focus on just migration (and not provisioning), we can build a 
VM-placement engine that sits outside of Nova that has an API call that allows 
us to set policy at runtime.  We can also set up that engine 

Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-13 Thread Jordan Pittier
Jay, I am afraid I didn't understand your point.

Could you rephrase/elaborate on "What is the difference between just
calling the Glance API to upload an image, versus adding add()", please?
Currently, you can't call the Glance API to upload an image if the
default_store is the HTTP store.

On Fri, Feb 13, 2015 at 5:17 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/13/2015 10:01 AM, Jordan Pittier wrote:

  What is the difference between just calling the Glance API to upload
 an image, versus adding add() functionality to the HTTP image store?
 You mean using glance image-create --location
 http://server1/myLinuxImage [..] ? If so, I guess adding the add()
 functionality will save the user from having to find the right POST
 curl/wget command to properly upload his image.


 How so?

 If the user is already using Glance, they can use either the Glance REST
 API or the glanceclient tools.

 -jay

 On Fri, Feb 13, 2015 at 3:55 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/13/2015 09:47 AM, Jordan Pittier wrote:

 Hi list,

 I would like to add the 'add' capability to the HTTP glance store.

 Let's say I (as an operator or cloud admin) provide an HTTP
 server where
 (authenticated/trusted) users/clients can make the following
 HTTP request :

 POST http://server1/myLinuxImage HTTP/1.1
 Host: server1
 Content-Length: 25600
 Content-Type: application/octet-stream

 mybinarydata[..]

 Then the HTTP server will store the binary data, somewhere (for
 instance
 locally), some how (for instance in a plain file), so that the
 data is
 later on accessible by a simple GET http://server1/myLinuxImage

 In that case, this HTTP server could easily be a full fleshed
 Glance store.

 Questions :
 1) Has this been already discussed/proposed ? If so, could
 someone give
 me a pointer to this work ?
 2) Can I start working on this? (The 2 main work items are:
 'add an add method to glance_store._drivers.http.Store' and
 'add a delete method to glance_store._drivers.http.Store
 (HTTP DELETE method)'.)


 What is the difference between just calling the Glance API to upload
 an image, versus adding add() functionality to the HTTP image store?

 Best,
 -jay

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Canceling next week's Neutron meeting

2015-02-13 Thread Kyle Mestery
Folks, next Monday is Presidents Day [1] here in the US, so given we'll
likely have a very low turnout at the meeting, I'm going to cancel the
weekly Neutron meeting [2].

However, I encourage people to continue reviewing specs for Kilo-3 [3]. We
have a lot of patches out for review, so the more we can merge in the
coming weeks, the easier it will be for us as we get closer to the Kilo-3
deadline.

Thanks!
Kyle

[1] http://en.wikipedia.org/wiki/Washington%27s_Birthday
[2] https://wiki.openstack.org/wiki/Network/Meetings
[3] https://launchpad.net/neutron/+milestone/kilo-3
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Update on DB IPAM driver

2015-02-13 Thread John Belamaric


From: Salvatore Orlando sorla...@nicira.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, February 13, 2015 at 8:26 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Update on DB IPAM driver
...

I think the auto-generated case should be a base class as you described in [1], 
but each subclass would implement the specific auto-generation. See the 
discussion at line 468 in [2] and see what you think. Of course for addresses 
that come from RA there would be no IPAM.

I think this makes sense.


Thinking a little more on this, in the case of magic address prefixes, we 
probably should have the factory method generate the right request class. That 
way, the logic for those magic prefixes is all in one place. You could still 
specify the class in the request but the magic prefixes would take priority.
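
A minimal sketch of that factory, with placeholder names for the request classes and the magic prefix list (none of these names are settled in the spec):

    class AnyAddressRequest(object):
        # Placeholder request classes; the real names are still up for review.
        def __init__(self, subnet_cidr):
            self.subnet_cidr = subnet_cidr

    class AutomaticAddressRequest(AnyAddressRequest):
        pass

    class SpecificAddressRequest(object):
        def __init__(self, address):
            self.address = address

    MAGIC_PREFIXES = ("fe80::",)  # stand-in for the real magic prefix list

    def address_request_factory(subnet_cidr, requested_ip=None):
        # All the magic-prefix logic lives in one place; a magic prefix
        # takes priority over an explicitly requested class.
        if subnet_cidr.startswith(MAGIC_PREFIXES):
            return AutomaticAddressRequest(subnet_cidr)
        if requested_ip:
            return SpecificAddressRequest(requested_ip)
        return AnyAddressRequest(subnet_cidr)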



[1] https://review.openstack.org/#/c/150485/
[2] 
https://review.openstack.org/#/c/153236/2/neutron/db/db_base_plugin_v2.py,unified




- The db base refactoring being performed by Pavel is under way [3]. It is 
worth noting that this is a non-negligible change to some of Neutron's basic 
and more critical workflows. We should expect pushback from the community 
regarding the introduction of this change in the 3rd milestone. At this stage I 
would suggest either:
A) consider a strategy for running pluggable IPAM as optional
B) consider delaying to Liberty.
(and that's where I get virtually jeered and pelted with rotten tomatoes)

I wish I had some old tomatoes! Seriously, I think A is a reasonable 
approach. To make this really explicit we may want to basically replace the DB 
plugin class with a shim that delegates to either the current implementation or 
the new implementation, depending on the flag.

The fact that the current implementation is pretty much a bunch of private 
methods in the db base plugin class, executed within a transaction when creating 
a port, makes the situation a wee bit more complex. I'm not sure we can replace 
the db plugin class with a shim so easily, because we need to consider the 
impact on plugins which inherit from this base class. For instance, some plugins 
override methods from the base class, and this would be a problem. For those 
plugins we must ensure old-style IPAM is performed. A transitory solution might 
be to have two versions of the relevant methods: the current one, and one 
leveraging pluggable IPAM. During plugin initialisation, the plugin itself would 
decide whether or not to use the latter. This might be tuneable with a 
configuration parameter too. The downside of this approach is that it will not 
allow us to remove the old baked-in IPAM code, and it will have an impact on 
code maintainability, which ultimately will result in accumulating even more 
technical debt. However, I might be missing some better alternative, so if you 
have any proposal just let me know.

Hmm. How dynamic is Python? I know in Ruby I could do something like this at 
class load time:

config.use_ipam ? DbBasePluginV2 = IpamDbBasePluginV2 : DbBasePluginV2 = 
LegacyDbBasePluginV2

and all the subclasses would work fine as before...
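
Python is dynamic enough for that: classes are ordinary objects bound to names at import time, so a module can pick the binding based on configuration. A minimal sketch, with all names hypothetical and stub classes standing in for the real plugins:

    from oslo.config import cfg  # oslo.config, as packaged in Kilo

    CONF = cfg.CONF
    CONF.register_opts([cfg.BoolOpt("use_pluggable_ipam", default=False)])

    class LegacyDbBasePluginV2(object):
        """Stand-in for the current baked-in IPAM implementation."""

    class IpamDbBasePluginV2(object):
        """Stand-in for the implementation delegating to pluggable IPAM."""

    # Module-level aliasing: subclasses defined after this point inherit
    # from whichever implementation the flag selects.
    if CONF.use_pluggable_ipam:
        NeutronDbPluginV2 = IpamDbBasePluginV2
    else:
        NeutronDbPluginV2 = LegacyDbBasePluginV2

The caveat above still applies, though: plugins that override the relevant base-class methods would bypass the shim no matter which class the name points to.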


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-02-13 Thread Evgeniy L
Andrew,

It looks like what you've described is already done for ssh keys [1].

[1] https://review.openstack.org/#/c/149543/

On Fri, Feb 13, 2015 at 6:12 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 +1 to Andrew

 This is actually what we want to do with SSL keys.

 On Wed, Feb 11, 2015 at 3:26 AM, Andrew Woodward xar...@gmail.com wrote:

 We need to be highly security conscious here: doing this in an insecure
 manner is a HUGE risk. Rsync over ssh (or scp) from the master node is
 usually OK, but the plain rsync protocol from a node in the cluster would
 be BAD (it leaves the certs exposed on a weak service).

 I could see this being implemented as some additional task type that can
 instead be run on the fuel master nodes instead of a target node. This
 could also be useful for plugin writers that may need to access some
 external API as part of their task graph. We'd need some way to make the
 generate task run once for the env, vs the push certs which runs for each
 role that has a cert requirement.

 we'd end up with some like
 generate_certs:
   runs_from: master_once
   provider: whatever
 push_certs:
   runs_from: master
   provider: bash
   role: [*]

 On Thu, Jan 29, 2015 at 2:07 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Evgeniy,

 I am not suggesting to go to Nailgun DB directly. There obviously should
 be some layer between a serializier and DB.

 On Thu, Jan 29, 2015 at 9:07 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

  1) Nailgun DB

 Just a small note: we should not provide access to the database; this
 approach has serious issues. What we can do is provide this information,
 for example, via a REST API.

 What you are saying is already implemented in many deployment tools; for
 example, let's take a look at Ansible [1].

 What you can do there is to create a task which stores the result of
 executed
 shell command in some variable.
 And you can reuse it in any other task. I think we should use this
 approach.

 [1]
 http://docs.ansible.com/playbooks_variables.html#registered-variables

 On Thu, Jan 29, 2015 at 2:47 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Evgeniy

 This is not about layers - it is about how we get data. We need to
 separate data sources from the way we manipulate them. Thus, sources may be:
 1) the Nailgun DB, 2) the user's inventory system, 3) open data, like a list
 of Google DNS servers. Then all this data is aggregated and transformed
 somehow. After that it is shipped to the deployment layer. That's how I see it.

 On Thu, Jan 29, 2015 at 2:18 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

 It's not clear how it's going to help. You can generate keys with one
 task and then upload them with another task; why do we need
 another layer/entity here?

 Thanks,

 On Thu, Jan 29, 2015 at 11:54 AM, Vladimir Kuklin 
 vkuk...@mirantis.com wrote:

 Dmitry, Evgeniy

 This is exactly what I was talking about when I mentioned
 serializers for tasks - taking data from 3rd party sources if user 
 wants.
 In this case user will be able to generate some data somewhere and 
 fetch it
 using this code that we import.

 On Thu, Jan 29, 2015 at 12:08 AM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Thank you guys for quick response.
 Than, if there is no better option we will follow with second
 approach.

 On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com
 wrote:

 Hi Dmitry,

 I'm not sure if we should use the approach where the task executor
 reads some data from the file system; ideally Nailgun should push
 all of the required data to Astute.
 But that can be tricky to implement, so I vote for the 2nd approach.

 Thanks,

 On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

 3rd option is about using rsyncd that we run under xinetd on
 primary controller. And yes, the main concern here is security.

 On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi.
 I vote for the second option, because if we want to implement
 some unified hierarchy (like Fuel as a CA for keys on controllers for
 different envs) then it will fit better than the other options. If we
 implement the 3rd option then we will reinvent the wheel with SSL in
 the future.
 Bare rsync as storage for private keys sounds pretty uncomfortable
 to me.

 On Wed, Jan 28, 2015 at 6:44 PM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Hi folks,

 I want to discuss the way we are working with generated keys
 for nova/ceph/mongo and something else.

 Right now we are generating keys on the master itself, and then
 distributing them by mcollective
 transport to all nodes. As you may know, we are in the process
 of describing this process as a
 task.

 There is a couple of options:
 1. Expose keys in rsync server on master, in folder
 /etc/fuel/keys, and then copy them with rsync task (but it feels 
 not very
 secure)
 2. Copy keys from /etc/fuel/keys on master, to /var/lib/astute
 on target nodes. It will require additional
 hook in astute, smth like 

Re: [openstack-dev] [nova] FFE Request: Proxy neutron configuration to guest instance

2015-02-13 Thread Jay Pipes

I'm happy to sponsor this.

On 02/12/2015 01:32 PM, Jay Faulkner wrote:

Hi Nova cores,

We’d like to request an FFE for this added nova feature. It gives a real
interface - a JSON file - to network data inside the instance. This is a
patch Rackspace carries downstream, and we’ve had lots of interested
users, including the OpenStack Infra team and upstream cloud-init. We’d
love to get this in for Kilo so all can benefit from the better interface.

There are a few small patches remaining to implement this functionality:
https://review.openstack.org/#/c/155116/ Updates the testing portion of
the spec to reflect we can’t tempest test this, and will instead add
functional tests to Nova for it.

*Core Functionality*
https://review.openstack.org/#/c/143755/ - Adds IPv6 support to Nova’s
network unit tests so we can test the functionality in IPv6.
https://review.openstack.org/#/c/102649/ - Builds and prepares the
neutron network data to expose
https://review.openstack.org/#/c/153097/ - Exposes the Neutron network
data built in the last patch to Configdrive/Metadata service

*VLAN Support*
As a note; while we’d like all these patches to be merged, it’s clear
the VLAN support is a bit more complex than the other patches, and we’d
be OK with the other patches receiving an FFE without this one (although
obviously we’d prefer get everything in K).

https://review.openstack.org/#/c/152703/ - Adds VLAN support for Neutron
network data generation.

Please let me or Josh know if you have any questions.

Thanks,
Jay Faulkner (JayF)  Josh Gachnang (JoshNang)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Prefix delegation using dibbler client

2015-02-13 Thread John Davidge (jodavidg)
Hi Ihar,

To answer your questions in order:

1. Yes, you are understanding the intention correctly. Dibbler doesn't
currently support client restart, as doing so causes all existing
delegated prefixes to be released back to the PD server. All subnets
belonging to the router would potentially receive a new cidr every time a
subnet is added/removed.

2. Option 2 cannot be implemented using the current version of dibbler,
but it can be done using the version we have modified. Option 3 could
possibly be done with the current version of dibbler, but with some major
limitations - only one single router namespace would be supported.

Once the dibbler changes linked below are reviewed and finalised we will
only need to merge a single patch into the upstream dibbler repo. No
further patches are anticipated.

Yes, you are correct that dibbler is not needed unless prefix delegation
is enabled by the deployer. It is intended as an optional feature that can
be easily disabled (and probably will be by default). A test to check for
the correct dibbler version would certainly be necessary.

Testing in the gate will be an issue until the new version of dibbler is
merged and packaged in the various distros. I'm not sure if there is a way
to avoid this problem, unless we have devstack install from our updated
repo while we wait.

John Davidge
OpenStack@Cisco




On 13/02/2015 16:01, Ihar Hrachyshka ihrac...@redhat.com wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Thanks for the write-up! See inline.

On 02/13/2015 04:34 PM, Robert Li (baoli) wrote:
 Hi,
 
 while trying to integrate dibbler client with neutron to support
 PD, we countered a few issues with the dibbler client (and server).
 With a neutron router, we have the qg-xxx interface that is
 connected to the public network, on which a dhcp server is running
 on the delegating router. For each subnet with PD enabled, a router
 port will be created in the neutron router. As a result, a new PD
 request will be sent that asks for a prefix from the delegating
 router. Keep in mind that the subnet is added into the router
 dynamically.
 
 We thought about the following options:
 
 1. use a single dibbler client to support the above requirement.
 This means, the client should be able to accept new requests on the
 fly either through configuration reload or other interfaces.
 Unfortunately, dibbler client doesn't support it.

Sorry for my ignorance on PD implementation (I will definitely look at
it the next week), but what does this entry above mean? Do you want a
single dibbler instance running per router serving all subnets plugged
into it? And you want to get configuration updates when a new subnet
is plugged in, or removed from the router?

If that's the case, why not just restart the client?

 2. start a dibbler client per subnet. All of the dibbler clients
 will be using the same outgoing interface (which is the qg-xxx
 interface). Unfortunately, dibbler client uses /etc/dibbler and
 /var/lib/dibbler for its state (in which it saves duid file, pid
 file, and other internal states). This means it can only support
 one client per network node. 3. run a single dibbler client that
 requests a smaller prefix (say /56) and splits it among the subnets
 with PD enabled (neutron subnet requires /64). Depending on the
 neutron router setup, this may result in significant waste of
 prefixes.

Just to understand all options at the table: can we implement ^^
option with stock dibbler?

 
 Given the significant drawback with 3, we are left with 1 and 2.
 After looking at the dibbler source code, we found that 2 is easier
 to achieve for now by making some small changes in the dibbler
 code. In the long run, we think option 1 is better.
 
 The changes we made to the linux dibbler client code, and the
 dibbler server code can be found in here:
 https://github.com/johndavidge/dibbler/tree/cloud-dibbler.
 Basically it does a few things:
 - create a unique working area per dibbler client;
 - since all the clients use the same outgoing interface, we'd like
 each dibbler client to use a unique LLA as its source address when
 sending messages. This avoids clients receiving server messages
 that are not intended for them;
 - we found that dibbler server uses the transaction ID alone to
 identify a match between a request and an answer. This would require
 that unique transaction IDs be used among all the existing clients,
 and we found that clients could use the same transaction IDs in our
 environment. Therefore, a little change was made in the server code
 so that it takes the request sender into consideration while
 looking up a match.
 
 
 Option 1 requires better understanding of the dibbler code, and we
 think that it may not be possible to make it happen in the kilo
 timeframe. But we think it has significant advantages over option
 2. Regardless, changes made for 2 is also needed since we need to
 run one dibbler client per neutron router.
 
 Now the issue is how to make 

Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-13 Thread Morgan Fainberg
Based upon the feedback from this thread, I want to welcome Marek as the newest 
member of keystone core.

Cheers,
Morgan
-- 
Morgan Fainberg

On February 10, 2015 at 9:51:16 AM, Morgan Fainberg (morgan.fainb...@gmail.com) 
wrote:

Hi everyone!

I wanted to propose Marek Denis (marekd on IRC) as a new member of the Keystone 
Core team. Marek has been instrumental in the implementation of Federated 
Identity. His work on Keystone and first hand knowledge of the issues with 
extremely large OpenStack deployments has been a significant asset to the 
development team. Not only is Marek a strong developer working on key features 
being introduced to Keystone, but he has also continued to set a high bar for any code 
being introduced / proposed against Keystone. I know that the entire team 
really values Marek’s opinion on what is going in to Keystone.

Please respond with a +1 or -1 for adding Marek to the Keystone core team. This 
poll will remain open until Feb 13.

-- 
Morgan Fainberg
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Rebase button enabled for all Gerrit users

2015-02-13 Thread Ben Nemec
On 02/13/2015 11:42 AM, Jeremy Stanley wrote:
 For a few months, some project core teams (including Nova's) have
 been running with an ACL granting access to the rebase button in
 Gerrit for all the projects they manage, a permission usually only
 exposed to the owner of an individual change. This has been
 generally useful for them, especially when someone updates a commit
 message via the Gerrit WebUI and there are changes depending on that
 one which then show as outdated. So far they've seen no real
 drawbacks, and since it's already possible for any Gerrit user to
 locally rebase and push that to a change as an updated patchset
 anyway we've deemed it generally safe to go ahead and expose that
 button to everyone.
 
 If you encounter any unexpected issues you think might be related to
 this behavior change, please let me or someone else in the Infra
 team know about it. Thanks!
 

Nice, thanks.  I had actually stopped updating commit messages inline on
patch series because of the rebase issue.  Sounds like this solves that
problem.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova]

2015-02-13 Thread Alexander Makarov
Adam, Nova client does it for some reason during a call to
nova.servers.list()


On Thu, Feb 12, 2015 at 10:03 PM, Adam Young ayo...@redhat.com wrote:

  On 02/12/2015 10:40 AM, Alexander Makarov wrote:

 A trust token cannot be used to get another token:

 https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L154-L156
 You have to make your Nova client use the very same trust scoped token
 obtained from authentication using trust without trying to authenticate
 with it one more time.



 Actually, there have been some recent changes to allow re-delegation of
 Trusts, but for older deployments, you are correct.  I hadn't seen anywhere
 here that he was trying to use a trust token to get another token, though.



 On Wed, Feb 11, 2015 at 9:10 PM, Adam Young ayo...@redhat.com wrote:

  On 02/11/2015 12:16 PM, Nikolay Makhotkin wrote:

No, I just checked it. Nova receives the trust token and raises this error.

  In my script, I see:

  http://paste.openstack.org/show/171452/

  And as you can see, token from trust differs from direct user's token.


  The original user needs to have the appropriate role to perform the
 operation on the specified project.  I see the admin role is created on the
 trust. If the trustor did not have that role, the trustee would not be able
 to execute the trust and get a token.  It looks like you were able to
 execute the trust and get a token,  but I would like you to confirm that,
 and not just trust the keystone client:  either put debug statements in
 Keystone or call the POST to tokens from curl with the appropriate options
 to get a trust token.  In short, make sure you have not fooled yourself.
 You can also look in the token table inside Keystone to see the data for
 the trust token, or validate the token  via curl to see the data in it.  In
 all cases, there should be an OS-TRUST stanza in the token data.


 If it is still failing, there might be some issue on the Policy side.  I
 have been assuming that you are running with the default policy for Nova.

 http://git.openstack.org/cgit/openstack/nova/tree/etc/nova/policy.json

 I'm not sure which rule matches for list servers (Nova developer input
 would be appreciated)  but I'm guessing it is executing the rule

 admin_or_owner: is_admin:True or project_id:%(project_id)s,

 Since that is the default. I am guessing that the project_id in question
 comes from the token here, as that seems to be common, but if not, it might
 be that the two values are mismatched. Perhaps there Proejct ID value from
 the client env var is sent, and matches what the trustor normally works as,
 not the project in question.  If these two values don't match, then, yes,
 the rule would fail.




 On Wed, Feb 11, 2015 at 7:55 PM, Adam Young ayo...@redhat.com wrote:

   On 02/11/2015 10:52 AM, Nikolay Makhotkin wrote:

 Hi !

  I investigated trust's use cases and encountered the problem: When I
 use auth_token obtained from keystoneclient using trust, I get *403*
 Forbidden error:  *You are not authorized to perform the requested
 action.*

  Steps to reproduce:

  - Import v3 keystoneclient (used keystone and keystoneclient from
 master, tried also to use stable/icehouse)
 - Import v3 novaclient
 - initialize the keystoneclient:
   keystone = keystoneclient.Client(username=username,
 password=password, tenant_name=tenant_name, auth_url=auth_url)

  - create a trust:
   trust = keystone.trusts.create(
 keystone.user_id,
 keystone.user_id,
 impersonation=True,
 role_names=['admin'],
 project=keystone.project_id
   )

  - initialize new keystoneclient:
client_from_trust = keystoneclient.Client(
 username=username, password=password,
 trust_id=trust.id, auth_url=auth_url,
   )

  - create nova client using new token from new client:
nova = novaclient.Client(
 auth_token=client_from_trust.auth_token,
 auth_url=auth_url_v2,
 project_id=client_from_trust.project_id,
 service_type='compute',
 username=None,
 api_key=None
   )

  - do simple request to nova:
   nova.servers.list()

  - get the error described above.


 Maybe I misunderstood something, but what is wrong? I supposed I could just
 work with nova as if it was initialized using a direct token.


  From what you wrote here it should work, but since Heat has been doing
 stuff like this for a while, I'm pretty sure it is your setup and not a
 fundamental problem.

 I'd take a look at what is going back and forth on the wire and make
 sure the right token is being sent to Nova.  If it is the original users
 token and not the trust token, then you would see that error.


  --
  Best Regards,
 Nikolay


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 

Re: [openstack-dev] [neutron] Prefix delegation using dibbler client

2015-02-13 Thread Kyle Mestery
On Fri, Feb 13, 2015 at 10:57 AM, John Davidge (jodavidg) 
jodav...@cisco.com wrote:

 Hi Ihar,

 To answer your questions in order:

 1. Yes, you are understanding the intention correctly. Dibbler doesn't
 currently support client restart, as doing so causes all existing
 delegated prefixes to be released back to the PD server. All subnets
 belonging to the router would potentially receive a new cidr every time a
 subnet is added/removed.

 2. Option 2 cannot be implemented using the current version of dibbler,
 but it can be done using the version we have modified. Option 3 could
 possibly be done with the current version of dibbler, but with some major
 limitations - only one single router namespace would be supported.

 Once the dibbler changes linked below are reviewed and finalised we will
 only need to merge a single patch into the upstream dibbler repo. No
 further patches are anticipated.

 Yes, you are correct that dibbler is not needed unless prefix delegation
 is enabled by the deployer. It is intended as an optional feature that can
 be easily disabled (and probably will be by default). A test to check for
 the correct dibbler version would certainly be necessary.

 Testing in the gate will be an issue until the new version of dibbler is
 merged and packaged in the various distros. I'm not sure if there is a way
 to avoid this problem, unless we have devstack install from our updated
 repo while we wait.

To me, this seems like a pretty huge problem. We can't expect
distributions to package side-changes to upstream projects. The correct way
to solve this problem is to work to get the changes required in the
dependent packages upstream into those projects first (dibbler, in this
case), and then propose the changes into Neutron to make use of those
changes. I don't see how we can proceed with this work until the issues
around dibbler has been resolved.


 John Davidge
 OpenStack@Cisco




 On 13/02/2015 16:01, Ihar Hrachyshka ihrac...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Thanks for the write-up! See inline.
 
 On 02/13/2015 04:34 PM, Robert Li (baoli) wrote:
  Hi,
 
  while trying to integrate dibbler client with neutron to support
  PD, we countered a few issues with the dibbler client (and server).
  With a neutron router, we have the qg-xxx interface that is
  connected to the public network, on which a dhcp server is running
  on the delegating router. For each subnet with PD enabled, a router
  port will be created in the neutron router. As a result, a new PD
  request will be sent that asks for a prefix from the delegating
  router. Keep in mind that the subnet is added into the router
  dynamically.
 
  We thought about the following options:
 
  1. use a single dibbler client to support the above requirement.
  This means, the client should be able to accept new requests on the
  fly either through configuration reload or other interfaces.
  Unfortunately, dibbler client doesn't support it.
 
 Sorry for my ignorance on PD implementation (I will definitely look at
 it the next week), but what does this entry above mean? Do you want a
 single dibbler instance running per router serving all subnets plugged
 into it? And you want to get configuration updates when a new subnet
 is plugged in, or removed from the router?
 
 If that's the case, why not just restart the client?
 
  2. start a dibbler client per subnet. All of the dibbler clients
  will be using the same outgoing interface (which is the qg-xxx
  interface). Unfortunately, dibbler client uses /etc/dibbler and
  /var/lib/dibbler for its state (in which it saves duid file, pid
  file, and other internal states). This means it can only support
  one client per network node. 3. run a single dibbler client that
  requests a smaller prefix (say /56) and splits it among the subnets
  with PD enabled (neutron subnet requires /64). Depending on the
  neutron router setup, this may result in significant waste of
  prefixes.
 
 Just to understand all options at the table: can we implement ^^
 option with stock dibbler?
 
 
  Given the significant drawback with 3, we are left with 1 and 2.
  After looking at the dibbler source code, we found that 2 is easier
  to achieve for now by making some small changes in the dibbler
  code. In the long run, we think option 1 is better.
 
  The changes we made to the linux dibbler client code, and the
  dibbler server code can be found in here:
  https://github.com/johndavidge/dibbler/tree/cloud-dibbler.
  Basically it does a few things:
  - create a unique working area per dibbler client;
  - since all the clients use the same outgoing interface, we'd like
  each dibbler client to use a unique LLA as its source address when
  sending messages. This avoids clients receiving server messages
  that are not intended for them;
  - we found that dibbler server uses the transaction ID alone to
  identify a match between a request and an answer. This would require that 

Re: [openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread Kuvaja, Erno
 -Original Message-
 From: James E. Blair [mailto:cor...@inaugust.com]
 Sent: 13 February 2015 16:44
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [glance] Cleanout of inactive change proposals
 from review
 
 Kuvaja, Erno kuv...@hp.com writes:
 
  Hi all,
 
  We have reviews almost a year old (since last update) still in the queue
  for glance. The discussion was initiated at yesterday's meeting about
  adopting an abandon policy for stale changes.
 
 Hi,
 
 Abandoning changes submitted by other people is not a good experience for
 people who are contributing to OpenStack, but fortunately, it is not
 necessary.
 
 Our current version of Gerrit supports a rich syntax for searching, which you
 can use to create personal or project dashboards.  It is quite easy to filter 
 out
 changes that appear old or inactive, without the negative experience of
 having them abandoned.
 
 Many projects, including all of the infra projects (which see a substantial
 number of changes) are able to function without automatically abandoning
 changes.
 
 If you could identify why you feel the need to abandon other peoples
 changes, I'm sure we can find a resolution.
 
 -Jim

Hi Jim,

I think you hit it spot on here. It's extremely difficult to automate anything 
like this in a way that is smart and flexible. ;)

- Erno
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress][Delegation] Google doc for working notes

2015-02-13 Thread Tim Hinrichs
Hi Debo and Yathi,

We’re completely on the same page here.  We’ve known about the solver-scheduler 
for a while now (I even attended your Atlanta talk), and I always expected 
Congress would integrate with it.  As you say, now it’s a matter of getting 
down to the details.

A bit on the context…  The current problem we’re working on in Congress is how 
we might delegate responsibility for policy enforcement to domain-specific 
policy engines, and a number of people were interested in integrating with a 
VM-placement engine.  We started looking at the solver-scheduler (the obvious 
first choice), hence this dialog.  The notes in the google doc are just me 
trying to understand the problem of delegation to a VM-placement engine by 
working through the problem end-to-end.  (I’ve not worked with LP or 
VM-placement much, so my notes are there to help me grapple a bit with the 
domain for the first time.)  How we build a PoC is something we haven’t started 
to discuss.  So you’re joining the discussion at the right time.  The more of 
that PoC we can build by leveraging solver-scheduler, the better.

More detailed comments inline.


On Feb 13, 2015, at 5:05 AM, Debojyoti Dutta 
ddu...@gmail.com wrote:

Tim

Wanted to clarify a bit. As I have mentioned before: Solver Scheduler is work 
done before this work (Datalog-constraints), but we had kept it very generic so 
it could be integrated with something like Congress. In fact Ramki (who was one 
of the members of the original thread when you reached out to us) joined us to 
talk in Atlanta, where we described some of the same use cases using PuLP; 
Congress was still ramping up then. We were not aware of the Datalog-constraints 
work that you guys were doing, else we would have joined hands before.

The question is this: going forward, how do we build this cool stuff together in 
the community? I am hoping the scheduler folks will be very excited too!

debo

On Thu, Feb 12, 2015 at 11:27 AM, Yathiraj Udupi (yudupi) 
yud...@cisco.com wrote:
Hi Tim,

Thanks for your response.  Excited too to extend the collaboration and ensure 
there is no need to duplicate effort in the open source community.
 My responses inline.

1)  Choice of LP solver.

I see solver-scheduler uses Pulp, which was on the Congress short list as well. 
 So we’re highly aligned on the choice of underlying solver.

YATHI - This makes me wonder why can’t we easily adapt the solver-scheduler to 
your needs, rather than duplicating the effort!


My primary goal is to build an architecture that makes it easy to integrate 
with domain-specific policy engines (like compute or networking).

What I’m also hearing is that people are interested in building *new* 
domain-specific policy engines within the Congress framework and/or expanding 
the functionality of the Congress policy engine itself to include optimization 
technology.  In both cases, we would need a library for solving optimization 
problems.  Oliver (CC’ed) has proposed adding such a library to Congress.  
Solver-scheduler already has such a library, so it would be great if we could 
all brainstorm about how to make optimization technology easy to use for people 
writing domain-specific policy engines, without reinventing the wheel.

https://blueprints.launchpad.net/congress/+spec/rule-x


2) User control over VM-placement.


To choose the criteria for VM-placement, the solver-scheduler user picks from a 
list of predefined options, e.g. ActiveHostConstraint, 
MaxRamAllocationPerHostConstraint.

We’re investigating a slightly different approach, where the user defines the 
criteria for VM-placement by writing any policy they like in Datalog.  Under 
the hood we then convert that Datalog to an LP problem.  From the developer’s 
perspective, with the Congress approach we don’t attempt to anticipate the 
different policies the user might want and write code for each policy; instead, 
we as developers write a translator from Datalog to LP.  From the user’s 
perspective, the difference is that if the option they want isn’t on the 
solver-scheduler's list, they’re out of luck or need to write the code 
themselves.  But with the Congress approach, they can write any VM-placement 
policy they like.

What I’d like to see is the best of both worlds.  Users write Datalog policies 
describing whatever VM-placement policy they want.  If the policy they’ve 
written is on the solver-scheduler’s list of options, we use the hard-coded 
implementation, but if the policy isn’t on that list we translate directly to 
LP.  This approach gives us the ability to write custom code to handle common 
cases while at the same time letting users write whatever policy they like.


YATHI -  The idea of providing some default constraint classes in Solver 
Scheduler was to enable easy pluggability for various placement policy 
scenarios.  We can easily add a custom constraint class in solver scheduler, 
that enables adding 

[openstack-dev] [Infra] Rebase button enabled for all Gerrit users

2015-02-13 Thread Jeremy Stanley
For a few months, some project core teams (including Nova's) have
been running with an ACL granting access to the rebase button in
Gerrit for all the projects they manage, a permission usually only
exposed to the owner of an individual change. This has been
generally useful for them, especially when someone updates a commit
message via the Gerrit WebUI and there are changes depending on that
one which then show as outdated. So far they've seen no real
drawbacks, and since it's already possible for any Gerrit user to
locally rebase and push that to a change as an updated patchset
anyway we've deemed it generally safe to go ahead and expose that
button to everyone.

If you encounter any unexpected issues you think might be related to
this behavior change, please let me or someone else in the Infra
team know about it. Thanks!
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-13 Thread Marek Denis
Thank you, everyone! :)

On 13 February 2015 18:35:09 CET, Morgan Fainberg morgan.fainb...@gmail.com 
wrote:
Based upon the feedback from this thread, I want to welcome Marek as
the newest member of keystone core.

Cheers,
Morgan
-- 
Morgan Fainberg

On February 10, 2015 at 9:51:16 AM, Morgan Fainberg
(morgan.fainb...@gmail.com) wrote:

Hi everyone!

I wanted to propose Marek Denis (marekd on IRC) as a new member of the
Keystone Core team. Marek has been instrumental in the implementation
of Federated Identity. His work on Keystone and first hand knowledge of
the issues with extremely large OpenStack deployments has been a
significant asset to the development team. Not only is Marek a strong
developer working on key features being introduced to Keystone but has
continued to set a high bar for any code being introduced / proposed
against Keystone. I know that the entire team really values Marek’s
opinion on what is going in to Keystone.

Please respond with a +1 or -1 for adding Marek to the Keystone core
team. This poll will remain open until Feb 13.

-- 
Morgan Fainberg




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Marek Denis
[marek.de...@cern.ch]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Feature Freeze Exception Request - bp/linux-systemz

2015-02-13 Thread Mike Perez
On 23:30 Mon 09 Feb, Jay S. Bryant wrote:
 Mike,
 
 A FFE for this has been submitted to Nova and is being sponsored by
 Matt Riedemann:  [1]
 
 Assuming that goes through soon, can we please re-address?
 
 Thanks!
 Jay
 
 [1] 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg45430.html

Yes.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-13 Thread Nikola Đipanov
On 02/12/2015 04:10 PM, Chris Friesen wrote:
 On 02/12/2015 03:44 AM, Sylvain Bauza wrote:
 
 Any action done by the operator is always more important than what the
 Scheduler could decide. So, if in an emergency situation the operator wants
 to force a migration to a host, we need to accept it and do it, even if it
 doesn't match what the Scheduler could decide (and could violate any policy).

 That's a *force* action, so please let the operator decide.
 
 Are we suggesting that the operator would/should only ever specify a
 specific host if the situation is an emergency?
 
 If not, then perhaps it would make sense to have it go through the
 scheduler filters even if a host is specified.  We could then have a
 --force flag that would proceed anyways even if the filters don't match.
 
 There are some cases (provider networks or PCI passthrough for example)
 where it really makes no sense to try and run an instance on a compute
 node that wouldn't pass the scheduler filters.  Maybe it would make the
 most sense to specify a list of which filters to override while still
 using the others.
 

Actually, this kind of already happens on the compute node when doing
claims. Even if we do force the host, the claim will fail on the compute
node and we will end up with consistent scheduling.

This sadly breaks down for stuff that needs to use limits, as limits
won't be set by the filters.

Jay had a BP before to move limits onto compute nodes, which would solve
this issue, as you would not need to run the filters at all - all the
stuff would be known to the compute host that could then easily say
nice of you to want this here, but it ain't happening.

It will also likely need a check in the retry logic to make sure we
don't hit the host 'retry' number of times.

N.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-13 Thread Rodrigo Duarte
Congrats Marek, well deserved!

On Fri, Feb 13, 2015 at 2:35 PM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 Based upon the feedback from this thread, I want to welcome Marek as the
 newest member of keystone core.

 Cheers,
 Morgan
 --
 Morgan Fainberg

 On February 10, 2015 at 9:51:16 AM, Morgan Fainberg (
 morgan.fainb...@gmail.com) wrote:

  Hi everyone!

  I wanted to propose Marek Denis (marekd on IRC) as a new member of the
 Keystone Core team. Marek has been instrumental in the implementation of
 Federated Identity. His work on Keystone and first hand knowledge of the
 issues with extremely large OpenStack deployments has been a significant
 asset to the development team. Not only is Marek a strong developer working
 on key features being introduced to Keystone but has continued to set a
 high bar for any code being introduced / proposed against Keystone. I know
 that the entire team really values Marek’s opinion on what is going in to
 Keystone.

  Please respond with a +1 or -1 for adding Marek to the Keystone core
 team. This poll will remain open until Feb 13.

  --
 Morgan Fainberg


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rodrigo Duarte Sousa
Senior Software Engineer at Advanced OpenStack Brazil
Distributed Systems Laboratory
MSc in Computer Science
Federal University of Campina Grande
Campina Grande, PB - Brazil
http://rodrigods.com http://lsd.ufcg.edu.br/~rodrigods
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Rebase button enabled for all Gerrit users

2015-02-13 Thread Morgan Fainberg


 On Feb 13, 2015, at 09:42, Jeremy Stanley fu...@yuggoth.org wrote:
 
 For a few months, some project core teams (including Nova's) have
 been running with an ACL granting access to the rebase button in
 Gerrit for all the projects they manage, a permission usually only
 exposed to the owner of an individual change. This has been
 generally useful for them, especially when someone updates a commit
 message via the Gerrit WebUI and there are changes depending on that
 one which then show as outdated. So far they've seen no real
 drawbacks, and since it's already possible for any Gerrit user to
 locally rebase and push that to a change as an updated patchset
 anyway we've deemed it generally safe to go ahead and expose that
 button to everyone.
 
 If you encounter any unexpected issues you think might be related to
 this behavior change, please let me or someone else in the Infra
 team know about it. Thanks!
 -- 
 Jeremy Stanley
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Great change! Thanks! This will make managing changesets better overall. 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Testing NUMA, CPU pinning and large pages

2015-02-13 Thread Hoban, Adrian
 -Original Message-
 From: Steve Gordon [mailto:sgor...@redhat.com]
 Sent: Wednesday, February 11, 2015 8:49 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Znoinski, Waldemar
 Subject: Re: [openstack-dev] Testing NUMA, CPU pinning and large pages
 
 - Original Message -
  From: Adrian Hoban adrian.ho...@intel.com
 
  Hi Folks,
 
  I just wanted to share some details on the Intel CI testing strategy for 
  NFV.
 
  You will see two Intel CIs commenting:
  #1: Intel-PCI-CI
  - Yongli He and Shane Wang are leading this effort for us.
  - The focus in this environment is on PCIe and SR-IOV specific testing.
  - Commenting back to review.openstack.org has started.
 
 With regards to SR-IOV / PCI specifically it seemed based on
 https://review.openstack.org/#/c/139000/ and
 https://review.openstack.org/#/c/141270/ that there was still some
 confusion as to where the tests should actually live (and I expect the same is
 true for the NUMA, Large Pages, etc. tests). Is this resolved or are there 
 still
 open questions?
 
 Thanks,
 
 Steve
 

Hi Steve,

The PCIe test code is being put on github at: 
https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases
 
We would readily welcome feedback if folks think it should go elsewhere, 
but for now the tests are publicly available and we can continue to make 
progress.

Regards,
Adrian

 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] BPs targeted to Kilo-3 without any code submitted

2015-02-13 Thread Kyle Mestery
We have approximately 10 BPs for Kilo-3 [1] which do not have any code
proposed for review yet. If you're assigned to a BP in this category, I
encourage you to work to submit your code in the coming week. Waiting until
the Feature Proposal Freeze (FPF) on March 5 [2] to propose your code will
put extra stress not only on reviewers but also on you as the submitter.
Getting your code pushed out earlier, even if it's WIP, will allow people
to have a look at it sooner and help in the iterative review process.

Thanks!
Kyle

[1] https://launchpad.net/neutron/+milestone/kilo-3
[2] https://wiki.openstack.org/wiki/Kilo_Release_Schedule
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread Louis Taylor
Erno Kuvaja wrote:
 We have reviews almost a year old (since last update) still in the queue
 for glance. The discussion was initiated at yesterday's meeting about
 adopting an abandon policy for stale changes.

I'm okay with abandoning some old reviews which are obviously going
nowhere, such as ones superseded by other fixes or not deemed necessary by
anyone (and which should probably have been abandoned by the author). I'm not
convinced this should be a commonplace action for reviews which are currently
inactive.

James E. Blair wrote:
 Abandoning changes submitted by other people is not a good experience
 for people who are contributing to OpenStack, but fortunately, it is not
 necessary.
 
 Our current version of Gerrit supports a rich syntax for searching,
 which you can use to create personal or project dashboards.  It is quite
 easy to filter out changes that appear old or inactive, without the
 negative experience of having them abandoned.
 
 Many projects, including all of the infra projects (which see a
 substantial number of changes) are able to function without
 automatically abandoning changes.
 
 If you could identify why you feel the need to abandon other peoples
 changes, I'm sure we can find a resolution.

I agree with this. I made (actually hacked up Ironic's. Thanks Ironic!) a
dashboard for glance [1], which is what I use during reviewing. This hides a
lot of the reviews which look stale, and is fairly good at getting an overview
of the reviews which require attention.

Using or editing a dashboard for your own daily use removes the need to abandon
the majority of the changes this proposal suggests.

Louis

[1] http://goo.gl/eS05pD


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-13 Thread Jay Pipes

On 02/13/2015 11:55 AM, Jordan Pittier wrote:

Jay, I am afraid I didn't understand your point.

Could you rephrase/elaborate on "What is the difference between just
calling the Glance API to upload an image, versus adding add()", please?
Currently, you can't call the Glance API to upload an image if the
default_store is the HTTP store.


No, you upload the image to a Glance server that has a backing data 
store like filesystem or swift. But the process of doing that (i.e. 
calling `glance image upload`) is the same as what you are describing -- 
it's all just POST'ing some data via HTTP through the Glance API endpoint.


So, I don't understand what allowing the HTTP backend to support add() 
gives the user of Glance.
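
For comparison, uploading through the API with a writable backing store is 
already just the following (a rough sketch with python-glanceclient v2; the 
endpoint, token and file name are placeholders):

    from glanceclient import Client

    glance = Client('2', endpoint='http://glance.example.com:9292',
                    token='<keystone-token>')

    # Register the image record, then push the bits through the API.
    image = glance.images.create(name='myLinuxImage',
                                 disk_format='qcow2',
                                 container_format='bare')
    with open('myLinuxImage.qcow2', 'rb') as data:
        glance.images.upload(image.id, data)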


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] tagging guideline up for review

2015-02-13 Thread Miguel Grinberg
Hi all,

I would like to invite you to review my proposal on tagging guidelines for
the API-WG. The proposal is heavily based on the recent nova tagging spec,
but I decided to deviate from it in a couple of places (I noted in the
document my reasons).

Feedback welcome.

Thanks,

Miguel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] tagging guideline up for review

2015-02-13 Thread Miguel Grinberg
I'm sure it would be helpful if I gave you the link to the document :)

https://review.openstack.org/#/c/155620/

On Fri, Feb 13, 2015 at 11:01 AM, Miguel Grinberg 
miguel.s.grinb...@gmail.com wrote:

 Hi all,

 I would like to invite you to review my proposal on tagging guidelines for
 the API-WG. The proposal is heavily based on the recent nova tagging spec,
 but I decided to deviate from it in a couple of places (I noted in the
 document my reasons).

 Feedback welcome.

 Thanks,

 Miguel


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-02-13 Thread Andrew Woodward
Cool, You guys read my mind o.O

RE: the review. We need to avoid copying the secrets to nodes that don't
require them. I think it might be too soon to be able to build granular
tasks for this, but we need to move that way.

Also, how are the astute tasks read into the environment? Same as with the
others?

 fuel rel --sync-deployment-tasks


On Fri, Feb 13, 2015 at 7:32 AM, Evgeniy L e...@mirantis.com wrote:

 Andrew,

 It looks like what you've described is already done for ssh keys [1].

 [1] https://review.openstack.org/#/c/149543/

 On Fri, Feb 13, 2015 at 6:12 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 +1 to Andrew

 This is actually what we want to do with SSL keys.

 On Wed, Feb 11, 2015 at 3:26 AM, Andrew Woodward xar...@gmail.com
 wrote:

 We need to be highly security conscious here: doing this in an insecure
 manner is a HUGE risk. Rsync over ssh (or scp) from the master node is
 usually OK, but the plain rsync protocol from a node in the cluster would
 be BAD (it leaves the certs exposed on a weak service).

 I could see this being implemented as some additional task type that can
 instead be run on the fuel master nodes instead of a target node. This
 could also be useful for plugin writers that may need to access some
 external API as part of their task graph. We'd need some way to make the
 generate task run once for the env, vs the push certs which runs for each
 role that has a cert requirement.

 we'd end up with some like
 generate_certs:
   runs_from: master_once
   provider: whatever
 push_certs:
   runs_from: master
   provider: bash
   role: [*]

 On Thu, Jan 29, 2015 at 2:07 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Evgeniy,

 I am not suggesting to go to Nailgun DB directly. There obviously
 should be some layer between a serializier and DB.

 On Thu, Jan 29, 2015 at 9:07 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

  1) Nailgun DB

 Just a small note: we should not provide access to the database, since
 this approach has serious issues. What we can do is provide this
 information, for example, via a REST API.

 What you are saying is already implemented in any deployment tool; for
 example, let's take a look at Ansible [1].

 What you can do there is create a task which stores the result of an
 executed shell command in some variable, and then reuse it in any other
 task. I think we should use this approach.

 [1]
 http://docs.ansible.com/playbooks_variables.html#registered-variables
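
 A minimal sketch of that pattern (the task commands and variable names
 below are invented for illustration, not taken from any real playbook):

   - shell: /usr/bin/generate-keys.sh
     register: generated_keys

   - shell: /usr/bin/distribute-keys.sh "{{ generated_keys.stdout }}"
     when: generated_keys.rc == 0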

 On Thu, Jan 29, 2015 at 2:47 PM, Vladimir Kuklin vkuk...@mirantis.com
  wrote:

 Evgeniy

 This is not about layers - it is about how we get data, and we need
 to separate data sources from the way we manipulate them. Thus, sources
 may be: 1) the Nailgun DB, 2) the user's inventory system, 3) open data,
 like a list of Google DNS servers. Then all this data is aggregated and
 transformed somehow. After that it is shipped to the deployment layer.
 That's how I see it.

 On Thu, Jan 29, 2015 at 2:18 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

 It's not clear how it's going to help. You can generate keys with one
 task and then upload them with another task, so why do we need
 another layer/entity here?

 Thanks,

 On Thu, Jan 29, 2015 at 11:54 AM, Vladimir Kuklin 
 vkuk...@mirantis.com wrote:

 Dmitry, Evgeniy

 This is exactly what I was talking about when I mentioned
 serializers for tasks - taking data from 3rd-party sources if the user
 wants. In this case the user will be able to generate some data
 somewhere and fetch it using this code that we import.

 On Thu, Jan 29, 2015 at 12:08 AM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Thank you guys for the quick response.
 Then, if there is no better option, we will go with the second
 approach.

 On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com
 wrote:

 Hi Dmitry,

 I'm not sure if we should use the approach where the task executor
 reads some data from the file system; ideally Nailgun should push
 all of the required data to Astute.
 But that can be tricky to implement, so I vote for the 2nd approach.

 Thanks,

 On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

 The 3rd option is about using the rsyncd that we run under xinetd on
 the primary controller. And yes, the main concern here is security.

 On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi.
 I vote for the second option, because if we want to implement
 some unified hierarchy (like Fuel as a CA for keys on controllers for
 different envs) then it will fit better than the other options. If we
 implement the 3rd option then we will reinvent the wheel with SSL in
 the future. Bare rsync as storage for private keys sounds pretty
 uncomfortable to me.

 On Wed, Jan 28, 2015 at 6:44 PM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Hi folks,

 I want to discuss the way we are working with generated keys
 for nova/ceph/mongo and so on.

 Right now we are generating keys on the master itself, and then
 distributing them by 

Re: [openstack-dev] Testing NUMA, CPU pinning and large pages

2015-02-13 Thread Russell Bryant
On 02/13/2015 01:02 PM, Hoban, Adrian wrote:
 -Original Message-
 From: Steve Gordon [mailto:sgor...@redhat.com]
 Sent: Wednesday, February 11, 2015 8:49 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Znoinski, Waldemar
 Subject: Re: [openstack-dev] Testing NUMA, CPU pinning and large pages

 - Original Message -
 From: Adrian Hoban adrian.ho...@intel.com

 Hi Folks,

 I just wanted to share some details on the Intel CI testing strategy for 
 NFV.

 You will see two Intel CIs commenting:
 #1: Intel-PCI-CI
 - Yongli He and Shane Wang are leading this effort for us.
 - The focus in this environment is on PCIe and SR-IOV specific testing.
 - Commenting back to review.openstack.org has started.

 With regards to SR-IOV / PCI specifically it seemed based on
 https://review.openstack.org/#/c/139000/ and
 https://review.openstack.org/#/c/141270/ that there was still some
 confusion as to where the tests should actually live (and I expect the same 
 is
 true for the NUMA, Large Pages, etc. tests). Is this resolved or are there 
 still
 open questions?

 Thanks,

 Steve

 
 Hi Steve,
 
 The PCIe test code is being put on github at: 
 https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases
  
 We would readily welcome some feedback if folks think it should go elsewhere, 
 but for now the tests are publicly available and we can continue to make 
 progress.

github seems like a fine choice.  The other option would be to create a
stackforge repository, which would let people contribute to it through
gerrit just like any other openstack or stackforge repo.

http://ci.openstack.org/stackforge.html

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress][Delegation] Google doc for working notes

2015-02-13 Thread Yathiraj Udupi (yudupi)
Hi Tim,

Glad to collaborate and work towards nailing down the details.  Yeah in terms 
of policy enforcement from Congress, it makes sense to delegate to  
domain-specific policy engines.  It will be good to go through this PoC and to 
start thinking about the integration points of Congress with Solver scheduler, 
with a good set of APIs supported from both sides.

Some more comments inline to your questions:

On 2/13/15, 9:35 AM, Tim Hinrichs thinri...@vmware.com wrote:

Hi Debo and Yathi,

We’re completely on the same page here.  We’ve known about the solver-scheduler 
for a while now (I even attended your Atlanta talk), and I always expected 
Congress would integrate with it.  As you say, now it’s a matter of getting 
down to the details.

A bit on the context…  The current problem we’re working on in Congress is how 
we might delegate responsibility for policy enforcement to domain-specific 
policy engines, and a number of people were interested in integrating with a 
VM-placement engine.  We started looking at the solver-scheduler (the obvious 
first choice), hence this dialog.  The notes in the google doc are just me 
trying to understand the problem of delegation to a VM-placement engine by 
working through the problem end-to-end.  (I’ve not worked with LP or 
VM-placement much, so my notes are there to help me grapple a bit with the 
domain for the first time.)  How we build a PoC is something we haven’t started 
to discuss.  So you’re joining the discussion at the right time.  The more of 
that PoC we can build by leveraging solver-scheduler, the better.

More detailed comments inline.


On Feb 13, 2015, at 5:05 AM, Debojyoti Dutta ddu...@gmail.com wrote:

Tim

Wanted to clarify a bit. As I have mentioned before: Solver scheduler is work
done before this work (Datalog-constraints), but we had kept it very
generic so it could be integrated with something like Congress. In fact Ramki
(who was one of the members of the original thread when you reached out to us)
joined us to talk in Atlanta, where we described some of the same use cases
using PuLP; Congress was still ramping up then. We were not aware of the
Datalog-constraints work that you guys were doing, else we would have joined
hands before.

The question is this: going forward, how do we build this cool stuff together
in the community? I am hoping the scheduler folks will be very excited too!

debo

On Thu, Feb 12, 2015 at 11:27 AM, Yathiraj Udupi (yudupi) yud...@cisco.com wrote:
Hi Tim,

Thanks for your response.  Excited too to extend the collaboration and ensure 
there is no need to duplicate effort in the open source community.
 My responses inline.

1)  Choice of LP solver.

I see solver-scheduler uses Pulp, which was on the Congress short list as well. 
 So we’re highly aligned on the choice of underlying solver.

YATHI - This makes me wonder why can’t we easily adapt the solver-scheduler to 
your needs, rather than duplicating the effort!
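
For readers who haven't used PuLP, here is a minimal, purely illustrative
sketch of the kind of placement LP both projects describe. The hosts, VMs,
and capacities are invented; this is not code from either project:

  import pulp

  vms = {'vm1': 2, 'vm2': 4, 'vm3': 1}    # RAM demand (GB), invented data
  hosts = {'host1': 8, 'host2': 4}        # RAM capacity (GB), invented data

  prob = pulp.LpProblem('vm_placement', pulp.LpMinimize)
  place = pulp.LpVariable.dicts('place', (vms, hosts), cat='Binary')
  used = pulp.LpVariable.dicts('used', hosts, cat='Binary')

  # Objective: use as few hosts as possible.
  prob += pulp.lpSum(used[h] for h in hosts)
  # Each VM lands on exactly one host.
  for v in vms:
      prob += pulp.lpSum(place[v][h] for h in hosts) == 1
  # Respect RAM capacity; only hosts marked 'used' may hold VMs.
  for h in hosts:
      prob += pulp.lpSum(vms[v] * place[v][h] for v in vms) <= hosts[h] * used[h]

  prob.solve()
  placement = [(v, h) for v in vms for h in hosts
               if pulp.value(place[v][h]) == 1]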


My primary goal is to build an architecture that makes it easy to integrate 
with domain-specific policy engines (like compute or networking).

What I’m also hearing is that people are interested in building *new* 
domain-specific policy engines within the Congress framework and/or expanding 
the functionality of the Congress policy engine itself to include optimization 
technology.  In both cases, we would need a library for solving optimization 
problems.  Oliver (CC’ed) has proposed adding such a library to Congress.  
Solver-scheduler already has such a library, so it would be great if we could 
all brainstorm about how to make optimization technology easy to use for people 
writing domain-specific policy engines, without reinventing the wheel.

https://blueprints.launchpad.net/congress/+spec/rule-x

YATHI: It is an interesting thought in this direction. However, it is good to
build an architecture that allows easy integration of Congress with separate
domain-specific engines in compute, networking, and storage. I feel some of
these policy validation/enforcement workflows are sometimes best handled
within the domain-specific engines, like VM placement/scheduling for example.
But optimization technology is definitely a good choice for this kind of
problem.



2) User control over VM-placement.


To choose the criteria for VM-placement, the solver-scheduler user picks from a 
list of predefined options, e.g. ActiveHostConstraint, 
MaxRamAllocationPerHostConstraint.

We’re investigating a slightly different approach, where the user defines the 
criteria for VM-placement by writing any policy they like in Datalog.  Under 
the hood we then convert that Datalog to an LP problem.  From the developer’s 
perspective, with the Congress approach we don’t attempt to anticipate the 
different policies the user might want and write code for each policy; instead, 
we as developers write a translator from Datalog to LP.  From the user’s 

[openstack-dev] How to turn tempest CLI tests into python-*client in-tree functional tests

2015-02-13 Thread Joe Gordon
A few months back we started the process to remove the tempest CLI
tests from tempest [0]. Now that we have successfully pulled novaclient CLI
tests out of tempest, we have the process sorted out. We now have a process
that should be easy to follow for each project; in fact keystoneclient has
already begun as well [1]. As stated in [0], the goal is to completely
remove CLI tests from tempest by the end of the cycle.


[0] http://lists.openstack.org/pipermail/openstack-dev/2014-October/048089.html
[1] https://review.openstack.org/#/c/155543/


*Steps*

- Move unit tests from */tests/ to */tests/unit
  http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=3561772f8b0cfee746af53fa228375b2ec7dfd9d
- Add OS_TEST_PATH to testr.conf
  http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=f197c64e05596fc59c8318813d4f69a88ac832fc
- Copy over the initial set of CLI tests from tempest/cli/ and add a
  functional test tox endpoint. Use standard OpenStack environment
  variables to get keystone auth, so the tests can be run via
  'source openrc && tox -efunctional' (see the config sketch after this
  list).
  http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=b89da9be28172319a16bece42f068e2d7f359c67
  At this point you should be able to run the tests against a cloud.
- Add the *client-dsvm-functional job definition using a post_test_hook.
  http://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=c4093cd6d328a87ea9a2335ac2dd4d09a598bc8e
- Add the post_test_hook for functional tests in the client repo.
  http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=d11f960c58c523da7154b3311d6b37ec715392af
  This patch can be tested out using the non-voting experimental job;
  just leave the comment 'check experimental'.
- Make the *client-dsvm-functional job gating for the client.
  http://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=147f20f5003cfa4f15a372f7d16493c3bb40775b
  At this point you should have a working gating functional test with
  a few tests.
- Copy in the rest of the tempest CLI tests.
  http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=27cd393028a103d8d52cf25f035e3a2985572ccb
  Unlike the first set of tests that were copied, this is self gating.
- Remove the tempest CLI tests for your client.
  http://git.openstack.org/cgit/openstack/tempest/commit/?id=0bd0adecd13e1285d0e938065280816395dbb415
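
To make the testr.conf and tox steps concrete, here is a rough sketch of
the shape of those changes. The paths and defaults below are assumptions
for illustration; see the linked novaclient commits for the real versions.

  # .testr.conf -- OS_TEST_PATH picks which tree test discovery runs in,
  # defaulting to the unit tests (paths assumed)
  [DEFAULT]
  test_command=${PYTHON:-python} -m subunit.run discover -t ./ \
      ${OS_TEST_PATH:-./novaclient/tests/unit} $LISTOPT $IDOPTION
  test_id_option=--load-list $IDFILE
  test_list_option=--list

  # tox.ini -- the functional env just repoints OS_TEST_PATH (sketch)
  [testenv:functional]
  setenv = OS_TEST_PATH=./novaclient/tests/functional
  commands = python setup.py testr --testr-args='{posargs}'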


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] using one Manila service for two clouds

2015-02-13 Thread Jake Kugel
Hi,

This might be a dumb question: is it possible to have a stand-alone Manila 
service that could be used by clients outside of a specific OpenStack 
cloud? For example, a shared Manila service that VMs in two clouds could 
both use?

I am guessing that there would be two drawbacks to this scenario -- (1) 
users would need two keystone credentials - a keystone credential in the 
cloud hosting their VM, and then a keystone credential that is used with 
the stand-alone Manila service to create a share.  And (2), the shared 
Manila service wouldn't be able to isolate network traffic for a 
particular tenant - all users of the service would share the same network. 
 Do these capture the problems with it?

Thanks,
Jake


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] SPFE: Authenticated Encryption (AE) Tokens

2015-02-13 Thread Lance Bragstad
Hello all,


I'm proposing the Authenticated Encryption (AE) Token specification [1] as
an SPFE. AE tokens increase the scalability of Keystone by removing token
persistence. This provider has been discussed prior to, and at, the Paris
summit [2]. There is an implementation that is currently up for review [3],
that was built off a POC. Based on the POC, there has been some performance
analysis done with respect to the token formats available in Keystone
(UUID, PKI, PKIZ, AE) [4].

The Keystone team spent some time discussing limitations of the current POC
implementation at the mid-cycle. One case that still needs to be addressed
(and is currently being worked), is federated tokens. When requesting
unscoped federated tokens, the token contains unbound groups which would
need to be carried in the token. This case can be handled by AE tokens but
it would be possible for an unscoped federated AE token to exceed an
acceptable AE token length (i.e. > 255 characters). Long story short, a
federation migration could be used to ensure federated AE tokens never
exceed a certain length.

Feel free to leave your comments on the AE Token spec.

Thanks!

Lance

[1] https://review.openstack.org/#/c/130050/
[2] https://etherpad.openstack.org/p/kilo-keystone-authorization
[3] https://review.openstack.org/#/c/145317/
[4] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? Feb 13 2015

2015-02-13 Thread Anne Gentle
 No, really. What's up? :) I'm off Monday so starting a 3-day weekend, but
will see you all next week.

__In review and merged this past week__

I'm super pleased with the way that people are responding to our
suggestions and conventions. We are averaging over 60 reviews a day; keep
up the good work. With 58 reviewers (and only 14 core) I think we're all
working really hard and I appreciate all the reviewers. Thank you for
reviews and attention to detail. I hope we can keep adding to core, perhaps
in specialty areas.

__High priority doc work__

I'd really like to see renewed effort around doc bugs. The
openstack-manuals project has way too many non-triaged bugs, likely from
DocImpact coming in at the last milestone release. I think in February we
need another bug triage day. I'll send a post with potential dates next
week, please join in as you can, especially if you had a DocImpact patch
that landed.

__Ongoing doc work__

The specialty teams are doing great at meeting regularly and reporting back
on the -docs mailing list.

We held the docs team meeting this week. Here are the minutes and logs.
http://eavesdrop.openstack.org/meetings/docteam/2015/docteam.2015-02-11-14.00.html

http://eavesdrop.openstack.org/meetings/docteam/2015/docteam.2015-02-11-14.00.log.html


__New incoming doc requests__

Last week I requested a plan for the Debian install guide to clean up the
issues that prevent a successful install. Still need to hear from someone
who wants to take that on so that the guide can continue to be published.

I'd like us to consider not publishing the Install Guides to /trunk/ from
now on. That decreases any need for troubleshooting when someone tries to
install kilo-2 (which can't be done from our install guide) and will
prevent any confusion while we work on incoming changes. I'll get a patch
out for review next week and see if that's a good direction for those
guides to give some breathing room for edits. Since you can always use
docs-drafts for ongoing edits, it seems like a good idea.

__Doc tools updates__

I just logged a bunch of wishlist bugs for the Sphinx template, tagged
'openstackdocstheme' -- feel free to pick up a few if you're hankering for
some feature additions to our up-and-coming content page refresh. See the
list here:
https://bugs.launchpad.net/openstack-manuals/+bugs?field.tag=openstackdocstheme

Be sure to spread the word about how people can get involved with the
refresh by sharing this article on Super User:
http://superuser.openstack.org/articles/how-you-can-help-refresh-openstack-s-documentation-site

Once this patch merges (https://review.openstack.org/#/c/152690/) we will
be able to test the new theme with the migrated user guide pages more
easily.

__Other doc news__

I sent the following to Steve Gordon to represent as a roadmap for docs for
the Product Working Group: https://etherpad.openstack.org/p/klmdocplans
basically outlining what I'd like to see in Kilo, Liberty, and beyond. Feel
free to respond here or follow up if you have any questions.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] SPFE: Authenticated Encryption (AE) Tokens

2015-02-13 Thread Dolph Mathews
Big +1 from me if we can land something solid.

On Fri, Feb 13, 2015 at 3:12 PM, Yee, Guang guang@hp.com wrote:

  ++



 As for the unbound groups concern, our initial internal Federation POCs
 worked well with a single group so far. The proposed hierarchical role
 groups, or perhaps even supporting nested user groups down the road should
 offer us more flexibility in terms user and permission management. For
 example, having a single aggregated group to map to for the federated users.



 Personally, I think the max 255 characters constraint is somewhat
 artificial, unless I am missing something here.


It's a limitation made for social reasons: that's sort of the tipping point
where tokens become unwieldy and require special treatment.






 Guang





 *From:* Brant Knudson [mailto:b...@acm.org]
 *Sent:* Friday, February 13, 2015 12:59 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [keystone] SPFE: Authenticated Encryption
 (AE) Tokens





 We get a lot of complaints about problems caused by persistent tokens, so
 this would be great to see in K. Given the amount of work required to get
 it done, which includes taking care of some other issues, like getting
 revocation events working and refactoring the token code (things which
 could have been progressing all along...), and considering how long it
 takes to get changes merged, it seems unlikely that this will make it, but
 I've been planning all along to prioritize these reviews if that helps.

 - Brant



 On Fri, Feb 13, 2015 at 1:47 PM, Lance Bragstad lbrags...@gmail.com
 wrote:

  Hello all,





 I'm proposing the Authenticated Encryption (AE) Token specification [1] as
 an SPFE. AE tokens increase the scalability of Keystone by removing token
 persistence. This provider has been discussed prior to, and at the Paris
 summit [2]. There is an implementation that is currently up for review [3],
 that was built off a POC. Based on the POC, there has been some performance
 analysis done with respect to the token formats available in Keystone
 (UUID, PKI, PKIZ, AE) [4].



 The Keystone team spent some time discussing limitations of the current
 POC implementation at the mid-cycle. One case that still needs to be
 addressed (and is currently being worked), is federated tokens. When
 requesting unscoped federated tokens, the token contains unbound groups
 which would need to be carried in the token. This case can be handled by AE
 tokens but it would be possible for an unscoped federated AE token to
 exceed an acceptable AE token length (i.e. > 255 characters). Long story
 short, a federation migration could be used to ensure federated AE tokens
 never exceed a certain length.



 Feel free to leave your comments on the AE Token spec.



 Thanks!



 Lance



 [1] https://review.openstack.org/#/c/130050/

 [2] https://etherpad.openstack.org/p/kilo-keystone-authorization

 [3] https://review.openstack.org/#/c/145317/

 [4] http://dolphm.com/benchmarking-openstack-keystone-token-formats/


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What should openstack-specs review approval rules be ?

2015-02-13 Thread Doug Hellmann


On Fri, Feb 13, 2015, at 11:33 AM, James E. Blair wrote:
 Thierry Carrez thie...@openstack.org writes:
 
  Current Cross-Project Repo Rules
  
 ...
  * Only the TC chair may vote Workflow +1.
 
  My understanding is that currently, any TC member can Workflow+1 (which
  led to the accidental approval of the previous spec).
 
 I think that was instachanged by Doug after the TC meeting:
 
   https://review.openstack.org/#/c/150581/
 
 So the immediate problem is abated, and we can deliberate about any
 other changes.
 
  Additionally, we could write a custom submit rule that requires at
  least 7 +1 votes in order for the change to be submittable.
 
  Our voting rules are slightly more complex than that, as you can see here:
 
  http://git.openstack.org/cgit/openstack/governance/tree/reference/charter.rst#n77
 
  The rule actually is "more positive votes than negative votes, and a
  minimum of 5 positive votes". The "7 YES" rule is just a shortcut: once
  we reach it, we can safely approve (otherwise we basically have to wait
  for all votes to be cast, which with asynchronous voting, and the
  difficulty to distinguish +0 from not voted yet, is not so easy to
  achieve).
 
  So unless you can encode that in a rule, I'd keep it under the
  responsibility of the chair to properly (and publicly) tally the votes
  (which I do in the +2 final comment) according to our charter rules.
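
  As a plain-language sketch, that tally amounts to something like the
  following (the member count of 13 is an assumption about the current TC
  size, not quoted from the charter):

    def tally_passes(pos, neg):
        # minimum of 5 positives, and more +1s than -1s
        return pos >= 5 and pos > neg

    def safe_to_approve_early(pos, members=13):
        # 7 of 13 guarantees pos > neg however the remaining votes land
        return pos > members - pos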
 
 The mechanism for such a change is Prolog.  I suspect that encoding that
 rule is possible, though I am not familiar enough with Prolog to say for
 sure.  The part of me that loves learning new programming languages
 wants to find out.  But part of me agrees with you and thinks we should
 just leave it to the Chair.
 
 Actually, I think I may have missed a requirement that would preclude
  the use of that rule.  We may consider "Chair is able to approve trivial
  administrative changes to the governance repo without a full vote" as a
 requirement, in which case we want the status quo.

Yes, indeed. We also recently agreed to let the Chair rebase changes
that were approved but can't be merged to avoid needing another vote
just because git is confused.

Doug

 
 -Jim
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-02-13 Thread Everett Toews
On Feb 12, 2015, at 9:29 AM, Ryan Brown rybr...@redhat.com wrote:

On 02/10/2015 08:01 AM, Everett Toews wrote:
On Feb 9, 2015, at 9:28 PM, Jay Pipes 
jaypi...@gmail.commailto:jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:

On 02/02/2015 02:51 PM, Stefano Maffulli wrote:
On Fri, 2015-01-30 at 23:05 +0000, Everett Toews wrote:
To converge the OpenStack APIs to a consistent and pragmatic RESTful
design by creating guidelines that the projects should follow. The
intent is not to create backwards incompatible changes in existing
APIs, but to have new APIs and future versions of existing APIs
converge.

It's looking good already. I think it would be good also to mention the
end-recipients of the consistent and pragmatic RESTful design so that
whoever reads the mission is reminded why that's important. Something
like:

   To improve developer experience converging the OpenStack API to
   a consistent and pragmatic RESTful design. The working group
   creates guidelines that all OpenStack projects should follow,
   avoids introducing backwards incompatible changes in existing
   APIs and promotes convergence of new APIs and future versions of
   existing APIs.

After reading all the mails in this thread, I've decided that Stef's
suggested mission statement above is the one I think best represents
what we're trying to do.

That said, I think it should begin To improve developer experience
*by* converging ... :)

+1

I think we could be even more explicit about the audience.

To improve developer experience *of API consumers by* converging the
OpenStack API to a consistent and pragmatic RESTful design. The working
group creates guidelines that all OpenStack projects should
follow, avoids introducing backwards incompatible changes in
existing APIs, and promotes convergence of new APIs and future versions
of existing APIs.

I’m not crazy about the term “API consumer” and could bike shed a bit on
it. The problem being that alternative terms for “API consumer” have
been taken in OpenStack land. “developer” is used for contributor
developers building OpenStack itself, “user” is used for operators
deploying OpenStack, and “end user” has too many meanings. “API
consumer” makes it clear what side of the API the working group audience
falls on.

I wouldn't mind “API user”, I think it conveys intent but doesn't sound
as stilted as “API consumer”.

I read through the “#topic mission statement” [1] of the last API WG meeting. 
There is a lot of support for Stefano’s take on the mission statement. As such 
I’ve proposed the following patch to the api-wg repo with the tweak from “API 
consumer” to “API user”.

https://review.openstack.org/#/c/155911/

We’ve had a lot of discussion on it already so I think it’s time for people to 
have their final say. Let us know what you think!

Thanks,
Everett

[1] 
http://eavesdrop.openstack.org/meetings/api_wg/2015/api_wg.2015-02-12-16.00.log.html#l-17

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A question about strange behavior of oslo.config in eclipse

2015-02-13 Thread Doug Hellmann


On Thu, Feb 12, 2015, at 07:19 AM, Joshua Zhang wrote:
 Hi Doug,
 
 Thank you very much for your reply. I don't have any code of my own, so
 no special code either.
 The only thing I did is:
 1. use devstack to install a fresh openstack env; all is ok.
 2. import the neutron-vpnaas directory (with none of my own code) into
 eclipse as a pydev project and, for example, run the unit test
 (neutron_vpnaas.tests.unit.services.vpn.test_vpn_service) in eclipse; it
 throws the following exception.
 3. but this unit test runs fine in bash, see
 http://paste.openstack.org/show/172016/
 4. this unit test also runs fine in eclipse as long as I edit the
 neutron/openstack/common/policy.py file to change oslo.config into
 oslo_config.
 
 
 ==
 ERROR: test_add_nat_rule
 (neutron_vpnaas.tests.unit.services.vpn.test_vpn_service.TestVPNDeviceDriverCallsToService)
 neutron_vpnaas.tests.unit.services.vpn.test_vpn_service.TestVPNDeviceDriverCallsToService.test_add_nat_rule
 --
 _StringException: Traceback (most recent call last):
   File
 /bak/openstack/neutron-vpnaas/neutron_vpnaas/tests/unit/services/vpn/test_vpn_service.py,
 line 98, in setUp
 super(TestVPNDeviceDriverCallsToService, self).setUp()
   File
 /bak/openstack/neutron-vpnaas/neutron_vpnaas/tests/unit/services/vpn/test_vpn_service.py,
 line 53, in setUp
 super(VPNBaseTestCase, self).setUp()
   File /bak/openstack/neutron-vpnaas/neutron_vpnaas/tests/base.py, line
 36, in setUp
 override_nvalues()
   File /bak/openstack/neutron-vpnaas/neutron_vpnaas/tests/base.py, line
 30, in override_nvalues
 cfg.CONF.set_override('policy_file', neutron_policy)

Yes, this line is trying to override the value of a policy module
configuration option, but the test file does not previously import the
module where that option is defined.

For now, you should investigate the fixture class in oslo.config [1],
and update the test class to use ConfigOpts.import_opt() to ensure the
policy option is defined.
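
As a rough sketch (the test-case name and the policy file path here are
assumptions based on the traceback, not a tested patch):

  import testtools
  from oslo_config import cfg
  from oslo_config import fixture as config_fixture

  CONF = cfg.CONF

  class PolicyOptTestCase(testtools.TestCase):
      def setUp(self):
          super(PolicyOptTestCase, self).setUp()
          # Register the option explicitly instead of relying on import
          # order having pulled in the policy module already.
          CONF.import_opt('policy_file',
                          'neutron.openstack.common.policy')
          fix = self.useFixture(config_fixture.Config(CONF))
          fix.config(policy_file='/etc/neutron/policy.json')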

After the oslo.policy library is released (and neutron is updated to use
it instead of the incubated version of that code), the tests will need
to be changed again to use an API to update the setting because
configuration options are not part of the public API for Oslo libraries.
I filed a bug against oslo.policy to track the need for that change [2].

Doug

[1] http://docs.openstack.org/developer/oslo.config/fixture.html
[2] https://bugs.launchpad.net/oslo.policy/+bug/1421869

   File /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py, line
 1679, in __inner
 result = f(self, *args, **kwargs)
   File /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py, line
 1949, in set_override
 opt_info = self._get_opt_info(name, group)
   File /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py, line
 2262, in _get_opt_info
 raise NoSuchOptError(opt_name, group)
 NoSuchOptError: no such option: policy_file
 
 On Tue, Feb 10, 2015 at 10:38 PM, Doug Hellmann d...@doughellmann.com
 wrote:
 
 
 
  On Tue, Feb 10, 2015, at 04:29 AM, Joshua Zhang wrote:
   Hi Stacker,
   A question about oslo.config - maybe a very silly question, but please
   tell me if you know; thanks in advance.
  
   I know oslo has removed the 'oslo' namespace; oslo.config has been changed
   to oslo_config, and it also retains backwards compat.
  
   I found I can run openstack successfully, but whenever I run something
   in eclipse/pydev it always fails with 'NoSuchOptError: no such option:
   policy_file'. I can change 'oslo.config' to 'oslo_config' in
   neutron/openstack/common/policy.py temporarily to bypass this problem
   when I want to debug something in eclipse. But I want to know why - who
   can help me explain? Thanks.
 
  It sounds like you have code in one module using an option defined
  somewhere else and relying on import ordering to cause that option to be
  defined. The import_opt() method of the ConfigOpts class is meant to
  help make these cross-module option dependencies explicit [1]. If you
  provide a more detailed traceback I may be able to give more specific
  advice about where changes are needed.
 
  Doug
 
  [1]
 
  http://docs.openstack.org/developer/oslo.config/configopts.html?highlight=import_opt#oslo_config.cfg.ConfigOpts.import_opt
 
  
  
   --
   Best Regards
   Zhang Hua(张华)
   Software Engineer | Canonical
   IRC:  zhhuabj
  
  __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: 

Re: [openstack-dev] [neutron][neutron-*aas] Is lockutils-wrapper needed for tox.ini commands?

2015-02-13 Thread Ben Nemec
All it does is create a temporary lock directory and then set an env var
to that path so external locks work properly in tests.  If you don't
have any external locks or you use
https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/fixture/lockutils.py#L55
for any tests that do then you don't need the wrapper.
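
For context, an "external" lock is the cross-process kind, backed by a lock
file under the configured lock path; a purely hypothetical example:

  from oslo_concurrency import lockutils

  # Serializes across processes via a file lock, so it only works when a
  # lock_path is configured (which is what lockutils-wrapper arranges).
  @lockutils.synchronized('my-resource', external=True)
  def update_shared_state():
      pass  # cross-process critical section

Purely in-process locks (external=False, the default) don't need any of
this.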

The easiest thing to do is probably remove it and see if anything fails.
If everything still works then it wasn't needed.

-Ben

On 02/13/2015 04:13 PM, Paul Michali wrote:
 I see that in tox.ini, several commands have lockutils-wrapper prefix on
 them in the neutron-vpnaas repo. Seems like this was added as part of
 commit 88e2d801 for "Migration to oslo.concurrency".
 
 Is this needed on the functional, cover, and dsvm-functional targets? I
 don't see it in the neutron tox.ini, so just wondering if I should remove
 it (I had already on the cover target).
 
 I ask, because I'm adding a coverage target for dsvm-functional and would
 like to know if I should remove it everywhere, or add it in. I'll do the
 same in FW and LB repos too.
 
 If someone could elaborate on what this wrapper script does (if it is still
 needed) that would be great, so I'm not blindly applying things.
 
 
 PCM (Paul Michali)
 
 IRC pc_m (irc.freenode.com)
 Twitter... @pmichali
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][swift] Signature of return values in tempest swift client

2015-02-13 Thread Clay Gerrard
On Fri, Feb 13, 2015 at 2:15 PM, David Kranz dkr...@redhat.com wrote:

 Swift is different in that most interesting data is in the headers except
 for GET methods, and applying the same methodology as the others does not
 make sense to me. There are various ways the swift client could be changed
 to return one value, or it could be left as is.


Can you point to the relevant implementation?

FWIW, we've found in swiftclient that it's been extremely restrictive to
return tuples, and in retrospect would have preferred either a (status,
headers, body) signature (which unfortunately leaves a lot of interesting
parsing up to the client) or something more like a dictionary or a
SwiftResponse that, as described in the spec, has properties for getting at
interesting values - and most importantly allows for future additive
changes.

It sounds like you're on the right track trying to make clients return a
single value (or a dict or something) - I'm tertiarily curious to see what
you come up with.
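
A hypothetical sketch of that kind of response object (not from the spec
or from swiftclient):

  class SwiftResponse(object):
      """One return value; new properties can be added later without
      breaking callers that would otherwise unpack a fixed-size tuple."""

      def __init__(self, status, headers, body=None):
          self.status = status
          self.headers = headers
          self.body = body

      @property
      def etag(self):
          return self.headers.get('etag')

      @property
      def content_length(self):
          value = self.headers.get('content-length')
          return int(value) if value is not None else None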

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] SPFE: Authenticated Encryption (AE) Tokens

2015-02-13 Thread Brant Knudson
We get a lot of complaints about problems caused by persistent tokens, so
this would be great to see in K. Given the amount of work required to get
it done, which includes taking care of some other issues, like getting
revocation events working and refactoring the token code (things which
could have been progressing all along...), and considering how long it
takes to get changes merged, it seems unlikely that this will make it, but
I've been planning all along to prioritize these reviews if that helps.

- Brant


On Fri, Feb 13, 2015 at 1:47 PM, Lance Bragstad lbrags...@gmail.com wrote:

 Hello all,


 I'm proposing the Authenticated Encryption (AE) Token specification [1] as
 an SPFE. AE tokens increase the scalability of Keystone by removing token
 persistence. This provider has been discussed prior to, and at the Paris
 summit [2]. There is an implementation that is currently up for review [3],
 that was built off a POC. Based on the POC, there has been some performance
 analysis done with respect to the token formats available in Keystone
 (UUID, PKI, PKIZ, AE) [4].

 The Keystone team spent some time discussing limitations of the current
 POC implementation at the mid-cycle. One case that still needs to be
 addressed (and is currently being worked), is federated tokens. When
 requesting unscoped federated tokens, the token contains unbound groups
 which would need to be carried in the token. This case can be handled by AE
 tokens but it would be possible for an unscoped federated AE token to
 exceed an acceptable AE token length (i.e. > 255 characters). Long story
 short, a federation migration could be used to ensure federated AE tokens
 never exceed a certain length.

 Feel free to leave your comments on the AE Token spec.

 Thanks!

 Lance

 [1] https://review.openstack.org/#/c/130050/
 [2] https://etherpad.openstack.org/p/kilo-keystone-authorization
 [3] https://review.openstack.org/#/c/145317/
 [4] http://dolphm.com/benchmarking-openstack-keystone-token-formats/

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][neutron-*aas] Is lockutils-wrapper needed for tox.ini commands?

2015-02-13 Thread Paul Michali
I see that in tox.ini, several commands have lockutils-wrapper prefix on
them in the neutron-vpnaas repo. Seems like this was added as part of
commit 88e2d801 for "Migration to oslo.concurrency".

Is this needed on the functional, cover, and dsvm-functional targets? I
don't see it in the neutron tox.ini, so just wondering if I should remove
it (I had already on the cover target).

I ask, because I'm adding a coverage target for dsvm-functional and would
like to know if I should remove it everywhere, or add it in. I'll do the
same in FW and LB repos too.

If someone could elaborate on what this wrapper script does (if it is still
needed) that would be great, so I'm not blindly applying things.


PCM (Paul Michali)

IRC pc_m (irc.freenode.com)
Twitter... @pmichali
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to turn tempest CLI tests into python-*client in-tree functional tests

2015-02-13 Thread Robert Collins
What's the test path thing for? Testr should be able to filter out unit
tests or vice versa without altering discovery.
On 14 Feb 2015 08:57, Joe Gordon joe.gord...@gmail.com wrote:


   A few months back we started the process to remove the tempest CLI
   tests from tempest [0]. Now that we have successfully pulled novaclient
   CLI tests out of tempest, we have the process sorted out. We now have a
   process that should be easy to follow for each project; in fact
   keystoneclient has already begun as well [1]. As stated in [0], the goal
   is to completely remove CLI tests from tempest by the end of the cycle.


   [0]
   
 http://lists.openstack.org/pipermail/openstack-dev/2014-October/048089.html
[1] https://review.openstack.org/#/c/155543/


   *Steps*


- Move unit tests from */tests/ to */tests/unit
   -
   
 http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=3561772f8b0cfee746af53fa228375b2ec7dfd9d
- Add OS_TEST_PATH to testr.conf
   -
   
 http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=f197c64e05596fc59c8318813d4f69a88ac832fc
- Copy over initial set of CLI tests from tempest/cli/ and add
functional test tox endpoint. Use standard OpenStack environment variables
 to get keystone auth, so the tests can be run via 'source openrc && tox
-efunctional'
   -
   
 http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=b89da9be28172319a16bece42f068e2d7f359c67
   - At this point you should be able to run the tests against a cloud
- Add client-dsvm-functional job definition using a post_test_hook
   -
   
 http://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=c4093cd6d328a87ea9a2335ac2dd4d09a598bc8e
- Add post_test_hook for functional tests in the client repo.
   -
   
 http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=d11f960c58c523da7154b3311d6b37ec715392af
   - This patch can be tested out using the non-voting experimental
   job, just leave the comment 'check experimental'
- Make *client-dsvm-functional job gating for client
   -
   
 http://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=147f20f5003cfa4f15a372f7d16493c3bb40775b
   - At this point you should have a working gating functional test
   with a few tests.
- Copy in the rest of the tempest CLI tests
   -
   
 http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=27cd393028a103d8d52cf25f035e3a2985572ccb
   - Unlike the first set of tests that were copied this is self
   gating.
- Remove tempest CLI tests for your client
   -
   
 http://git.openstack.org/cgit/openstack/tempest/commit/?id=0bd0adecd13e1285d0e938065280816395dbb415




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >