Re: [openstack-dev] Python overhead for rootwrap

2013-08-02 Thread Thierry Carrez
Joe Gordon wrote:
 Having rootwrap on by default makes nova-network scale very poorly by
 default.  Which doesn't sound like a good default, but not sure if no
 rootwrap is a better default.

If it boils down to that choice, by default I would pick security over
performance.

 It will require passwordless blanket sudo access for the nova user.
 
 Can't we go back to having a sudoers file whitelisting which binaries
 it can call, like before?

It was a bit of a maintenance nightmare (the file was maintained in
every distribution rather than centrally in openstack). Another issue
was that we shipped the same sudoers for every combination of nodes,
allowing for example nova-api to run stuff as root it should never be
allowed to run. See [1] for the limitations of using sudo which
triggered another solution in the first place.

[1]
https://fnords.wordpress.com/2011/11/23/improving-nova-privilege-escalation-model-part-1/

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python overhead for rootwrap

2013-08-02 Thread Robert Collins
On 2 August 2013 20:05, Thierry Carrez thie...@openstack.org wrote:

 It was a bit of a maintenance nightmare (the file was maintained in
 every distribution rather than centrally in openstack). Another issue
 was that we shipped the same sudoers for every combination of nodes,
 allowing for example nova-api to run stuff as root it should never be
 allowed to run. See [1] for the limitations of using sudo which
 triggered another solution in the first place.

There's still nothing other than handwaving suggesting that a
domain-specific solution is needed. setuid binaries *should* be rare, and
sudo's goal, policy-driven access control, is totally compatible with
all our needs.

So I propose we do the following:
 - switch back to sudo, except for commands where we are not willing to
accept the security implications, on a case-by-case basis.
 - discuss with sudo upstream how to encode the business rules we need
in sudo [if a sudo gate is capable of doing them - not everything will
be like that]

I appreciate that 'a better solution is needed', but the one we came
up with has nothing fundamentally better than sudo, other than
'written in Python' and 'accepts custom plugins but we aren't using
that yet': I claim YAGNI.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Cloud Services

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Requirements

2013-08-02 Thread Yijing Zhang
Hello,

I'd like you to do an OpenStack Requirements code review. The reason I am
sending this request is that another patch of mine depends on whether this
one gets approved.

Please visit https://review.openstack.org/#/c/38429/


Regards and best wishes,

Yijing Zhang
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python overhead for rootwrap

2013-08-02 Thread Thierry Carrez
Robert Collins wrote:
 On 2 August 2013 20:05, Thierry Carrez thie...@openstack.org wrote:
 
 It was a bit of a maintenance nightmare (the file was maintained in
 every distribution rather than centrally in openstack). Another issue
 was that we shipped the same sudoers for every combination of nodes,
 allowing for example nova-api to run stuff as root it should never be
 allowed to run. See [1] for the limitations of using sudo which
 triggered another solution in the first place.
 
 There's still nothing other than handwaving suggesting that a domain
 specific solution is needed. setuid binaries *should* be rare, and
 sudo's goal : policy driven sudo access - is totally compatible with
 all our needs.

Are you familiar with rootwrap? It is not setuid itself; it's built
*on top of sudo* and acts as an additional filter. It's not really a
domain-specific solution: it just implements the functionality missing
from sudo in a way that lets us avoid distribution and testing nightmares.

One key benefit of rootwrap is that it lets us work with a defined and
simple sudoers line, and then ship the rules within the Python code. Any
solution where you need to modify sudoers every time the code changes is
painful, because there is only one sudo configuration on a machine and
it's owned by root. We used to use pure sudo, you know. The end result
was that the sudoers files were not maintained and everyone ran and
tested with a convenient blanket-permission sudoers file. Missing rules
were discovered at packaging crunch time, or after release. Those were
the Cactus days, and this is the nightmare rootwrap was designed to
solve. It does more than just slow down command execution.
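
For illustration, the single sudoers entry that rootwrap relies on looks
roughly like this (binary and config paths vary by project and distro):

  nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf *

All command-level filtering then happens in the rootwrap filter definitions
shipped with the code, so the sudoers file never needs to change when a new
command is added.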

Additional filtering was a request from the Linux distros, since they
didn't like the lack of precise filtering in our pure-sudo original design.

 So I propose we do the following:
  - switch back to sudo except for commands where we are not willing to
 accept the security implications - case by case basis.

Could you expand on how you would 'switch back to sudo'? The devil is
unfortunately in the details. Would you ship a sudoers file within the
code and get it somehow picked up by devstack and distros? What would
that sudoers file contain exactly? Rules for all nodes? Just nova-network?

  - discuss with sudo upstream how to encode the business rules we need
 in sudo [if a sudo gate is capable of doing them - not everything will
 be like that]
 
 I appreciate that 'a better solution is needed', but the one we came
 up with has nothing fundamentally better than sudo, other than
 'written in python' and 'accepts custom plugins but we aren't using
 that yet' : I claim YAGNI.

The custom plugins are already in use (see KillFilter, ReadFileFilter,
EnvFilter...). And again, rootwrap is not reinventing sudo. It just
applies additional filtering in a convenient and distro-friendly way.
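
As a concrete sketch, the filter definitions rootwrap reads are plain INI
files shipped with the code, along these lines (the entries here are
illustrative; the exact set varies by project and release):

  [Filters]
  # run one specific binary as root, with any arguments
  kpartx: CommandFilter, kpartx, root
  # only allow sending specific signals to one specific program
  kill_dnsmasq: KillFilter, root, /usr/sbin/dnsmasq, -9, -HUP
  # only allow reading one specific file
  read_initiator: ReadFileFilter, /etc/iscsi/initiatorname.iscsi

Packagers just ship these files alongside the code; sudo itself stays
configured with the single rootwrap line.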

Yes, it was clearly not designed for performance, and so it fails in
corner cases (the soon-to-be-deprecated nova-network). I still think we
can address those corner cases by grouping calls, rather than throwing
the baby out with the bath water.
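
To make the per-call cost concrete, here is a minimal measurement sketch of my
own (not from the thread): every rootwrap invocation pays a full Python
interpreter startup, which is exactly what grouping calls would amortize.

  import subprocess
  import time

  N = 50
  start = time.time()
  for _ in range(N):
      # each iteration pays a complete interpreter startup, just like
      # every individual rootwrap call does
      subprocess.call(["python", "-c", "pass"])
  elapsed = time.time() - start
  print("%.1f ms per call; a grouped call pays this once" % (1000.0 * elapsed / N))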

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Requirements

2013-08-02 Thread Noorul Islam K M
Yijing Zhang traceyzh...@siu.edu writes:

 Hello,

 I'd like you to do an OpenStack Requirements code review. The reason I am
 sending this request is that another patch of mine depends on whether this
 one gets approved.

 Please visit https://review.openstack.org/#/c/38429/



I am new here. Is it required to inform the mailing list for reviews? I
thought Gerrit took care of that.

Regards,
Noorul
 Regards and best wishes,

 Yijing Zhang

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python overhead for rootwrap

2013-08-02 Thread Mark McLoughlin
On Thu, 2013-07-25 at 14:40 -0600, Mike Wilson wrote:
 In my opinion:
 
 1. Stop using rootwrap completely and get strong argument checking support
 into sudo (regex).
 2. Some sort of long lived rootwrap process, either forked by the service
 that wants to shell out or a general purpose rootwrapd type thing.
 
 I prefer #1 because it's surprising that sudo doesn't do this type of thing
 already. It _must_ be something that everyone wants. But #2 may be quicker
 and easier to implement, my $.02.

IMHO, #1 set the discussion off in a poor direction.

Who exactly is stepping up to do this work in sudo? Unless there's
someone with even a prototype patch in hand, any insistence that we base
our solution on this hypothetical feature is an unhelpful diversion.

And even if this work was done, it will be a long time before it's in
all the distros we support, so improving rootwrap or finding an
alternate solution will still be an important discussion.

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Requirements

2013-08-02 Thread Sean Dague

On 08/02/2013 05:18 AM, Noorul Islam K M wrote:

Yijing Zhang traceyzh...@siu.edu writes:


Hello,

I'd like you to do an OpenStack Requirements code review. The reason I am
sending this request is that another patch of mine depends on whether this
one gets approved.

Please visit https://review.openstack.org/#/c/38429/




I am new here. Is it required to inform the mailing list for reviews? I
thought Gerrit took care of that.


Gerrit takes care of it; there typically isn't a need to bring reviews here.

-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Alarming should be outside of Ceilometer as a separate package.

2013-08-02 Thread Doug Hellmann
On Thu, Aug 1, 2013 at 8:52 PM, Sandy Walsh sandy.wa...@rackspace.comwrote:



 On 08/01/2013 07:22 PM, Doug Hellmann wrote:
 
 
 
  On Thu, Aug 1, 2013 at 10:31 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 
  Hey y'all,
 
  I've had a little thorn in my claw on this topic for a while and
 thought
  I'd ask the larger group.
 
  I applaud the efforts of the people working on the alarming
 additions to
  Ceilometer, but I've got concerns that we're packaging things the
  wrong way.
 
  I fear we're making another Zenoss/Nagios with Ceilometer. It's
 trying
  to do too much.
 
  The current trend in the monitoring work (#monitoringlove) is to
 build
  your own stack from a series of components. These components take in
  inputs, process them and spit out outputs.
  Collectors/Transformers/Publishers. This is the model CM is built on.
 
  Making an all-singing-all-dancing monolithic monitoring package is
 the
  old way of building these tools. People want to use best-of-breed
  components for their monitoring stack. I'd like to be able to use
   reimann.io for my stream manager, diamond for my
  collector, logstash for
  my parser, etc. Alarming should just be another consumer.
 
  CM should do one thing well. Collect data from openstack, store and
  process them, and make them available to other systems via the API or
  publishers. That's all. It should understand these events better than
  any other product out there. It should be able to produce meaningful
  metrics/meters from these events.
 
  Alarming should be yet another consumer of the data CM produces. Done
  right, the If-This-Then-That nature of the alarming tool could be
  re-used by the orchestration team or perhaps even scheduling.
  Intertwining it with CM is making the whole thing too complex and
 rigid
  (imho).
 
  CM should be focused on extending our publishers and input plug-ins.
 
  I'd like to propose that alarming becomes its own project outside of
  Ceilometer. Or, at the very least, its own package, external of the
  Ceilometer code base. Perhaps it still lives under the CM moniker,
 but
  packaging-wise, I think it should live as a separate code base.
 
 
  It is currently implemented as a pair of daemons (one to monitor the
  alarm state, another to send the notifications). Both daemons use a
  ceilometer client to talk to the REST API to consume the sample data or
  get the alarm details, as required. It looks like alarms are triggered
  by sending RPC cast message, and that those in turn trigger the webhook
  invocation. That seems pretty loosely coupled, as far as the runtime
  goes. Granted, it's still in the main ceilometer code tree, but that
  doesn't say anything about how the distros will package it.
 
  I'll admit I haven't been closely involved in the development of this
  feature, so maybe my quick review of the code missed something that is
  bringing on this sentiment?

 No, you hit the nail on the head. It's nothing to do with the implementation;
 it's purely about the packaging and having it co-exist within ceilometer.
 Since it has its own services, uses Oslo and the CM client, and operates via
 the public API, it should be able to live outside the main CM codebase.
 My concern is that it has a different mandate than CM (or the CM mandate
 is too broad).

 What really brought it on for me was doing code reviews for CM and
 hitting all this alarm stuff and thinking this is a mental context switch
 from what CM does, it really doesn't belong here. (though I'm happy to
 help out with the reviews)


OK. I think we went back and forth for a while on where to put it, but at
the time I think we really only discussed Heat and Ceilometer as options.
(Eoghan or Angus, please correct me if I'm remembering those discussions
incorrectly.) Now that we have programs instead of projects maybe it's
more clear that a separate repository would be possible, although from a
pragmatic standpoint leaving it where it is for the rest of this dev cycle
seems reasonable.

Perhaps this is something to discuss at the summit?

Doug



 -S


 
  Doug
 
 
 
  Please, change my view. :)
 
  -S
 
  Side note: I might even argue that the statistical features of CM are
  going a little too far. My only counter-argument is that statistics
 are
  the only way to prevent us from sending large amounts of data over
 the
  wire for post-processing. But that's a separate discussion.
 
 

Re: [openstack-dev] [Ceilometer] Ceilometer and nova compute cells

2013-08-02 Thread Doug Hellmann
On Thu, Aug 1, 2013 at 7:36 PM, Sam Morrison sorri...@gmail.com wrote:


 On 31/07/2013, at 6:45 PM, Julien Danjou jul...@danjou.info wrote:

  On Wed, Jul 31 2013, Sam Morrison wrote:
 
  Hi Sam,
 
  Does everything that gets stored in the datastore go through the
  ceilometer.collector.metering queue?
 
  If you only use the RPC publisher (which is the default), yes.
 
  If so maybe the collector could instead of storing the message forward
 these
  messages to another rabbit server where another collector is running
 who in
  turn either forwards it up or stores it in the datastore (or both maybe)
  I think the confusing thing to me is how notification messages get into
  ceilometer. The collector does 2 things I think? And it seems to leave
 the
  messages on the notification.info queue to build up?
 
  Yes, the collector has 2 purposes right now:
  1. It listens for notifications on the RPC bus and converts them into
Samples;
  2. It publishes these Samples according to your pipeline.yaml file to
different conduits, the default being an RPC call to a collector for
storing these samples.

 Thanks for the info. Looking at the code, I think the way to do this would
 be to create a new dispatcher that, instead of recording the metering data,
 just forwards it on to another RPC server for another collector to consume.
 Do you think that's the way to go?


I'm not certain any new code needs to be written. Couldn't we configure the
pipeline in the cell to send the data directly upstream to the central
collector, instead of having it pass through a collector in the cell?
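
As a rough illustration of that idea, the cell's pipeline.yaml could point its
publisher at the upstream collector instead of the local RPC bus. This assumes
the Havana-era pipeline format and the UDP publisher; the keys, publisher
syntax, hostname and port below are illustrative and may differ by release:

  -
      name: cell_to_global
      interval: 600
      counters:
          - "*"
      transformers:
      publishers:
          - udp://global-collector.example.com:4952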

Doug



 Cheers,
 Sam



 
  --
  Julien Danjou
  ;; Free Software hacker ; freelance consultant
  ;; http://julien.danjou.info


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Ceilometer and nova compute cells

2013-08-02 Thread Julien Danjou
On Fri, Aug 02 2013, Doug Hellmann wrote:

 I'm not certain any new code needs to be written. Couldn't we configure the
 pipeline in the cell to send the data directly upstream to the central
 collector, instead of having it pass through a collector in the cell?

That would need the RPC layer to connect to a different rabbitmq server.
Not sure that's supported yet.

-- 
Julien Danjou
# Free Software hacker # freelance consultant
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] OVS Agent and OF bridges

2013-08-02 Thread Addepalli Srini-B22160
Hi,

As I understand it, the current OVS Quantum agent assumes that there are two
OpenFlow bridges (br-int and br-tun).

br-tun, I think, was introduced to take care of overlay tunnels.

With flow-based tunnel selection and tunnel parameter definition, I think
br-tun is no longer required.

Removing br-tun would improve performance, as there would be one less OF
bridge in the packet path.

Is there any interest in the community in removing br-tun? If so, we can work
on it and provide a blueprint and the code.

Thanks
Srini




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Ceilometer and nova compute cells

2013-08-02 Thread Doug Hellmann
On Fri, Aug 2, 2013 at 7:47 AM, Julien Danjou jul...@danjou.info wrote:

 On Fri, Aug 02 2013, Doug Hellmann wrote:

  I'm not certain any new code needs to be written. Couldn't we configure
 the
  pipeline in the cell to send the data directly upstream to the central
  collector, instead of having it pass through a collector in the cell?

 That would need the RPC layer to connect to a different rabbitmq server.
 Not sure that's supported yet.


We'll have that problem in the cell's collector, then, too, right?



 --
 Julien Danjou
 # Free Software hacker # freelance consultant
 # http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] How to apply submit Nova v3 API tempest tests

2013-08-02 Thread David Kranz

On 08/02/2013 01:23 AM, Christopher Yeoh wrote:

Hi,

Matthew Treinish brought up an issue on one of the proposed Nova V3 API 
tempest tests:


 So I get why you do things this way. But, unlike nova, we aren't going to be
 able to do part1 being a straight copy and paste. Doing so will double the
 number of times we run these tests in the gate once this gets merged. I think
 it would be best to go about this as smaller patches in a longer series, just
 adding the v3 tests, instead of copying and pasting. But that has its own
 tradeoffs too. I think you should probably bring this up on the mailing list
 to get other opinions on how to go about adding the v3 tests. I feel that
 this needs a wider discussion.


I think that the part 1/part 2 approach is the best way to go for the 
tempest tests. For the Nova changes we had a policy of reviewing, but 
not approving the part 1 patch until the part 2 patch had been 
approved. So there is no extra gate load until it's ready for v3.
Chris, I think there is some background missing on exactly what this 
part 1/part 2 approach is. I can imagine some things but it would be 
better if you just gave a brief description of what you did in the nova 
code.


I think it's much easier to review when you only need to look at the v2 
vs v3 changes rather than see it as brand new code. Looking at it as 
brand new code does have an advantage in that sometimes bugs are 
picked up in the old code, but it's a *lot* more effort for the 
reviewers even if the patch is split up into many parts.
Yes it is, especially when many of the reviewers will never have used 
the v3 api or know a lot about it. And I think the current v2 nova 
tempest tests are about 7500 lines of code right now.


As for the extra gate load, the extra v3 tests are an issue longer 
term but I think we're hoping the parallelisation work is going to 
save us :-)
Well it will have to. As long as trunk is supporting v2 and v3 we have 
to run both I think.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS Agent and OF bridges

2013-08-02 Thread Kyle Mestery (kmestery)
On Aug 2, 2013, at 7:02 AM, Addepalli Srini-B22160 b22...@freescale.com wrote:
 
 Hi,
  
 As I understand,  current OVS Quantum agent is assuming that there are two 
 Openflow bridges (br-int and br-tun).
  
 “br_tun”, I think, is introduced to take care of overlay tunnels.
  
 With flow based tunnel selection and tunnel parameters definition,  I think 
 br-tun is no longer required.
  
 Removal of br-tun increases the performance as one less OF bridge in the 
 packet path.
  
 Is there any interest in community to remove br-tun?  If so, we can work on 
 it and provide a blue print and the code.
  
 Thanks
 Srini
  
Hi Srini:

I am very interested in this, and I think this would be a nice performance 
improvement. We'd have to maintain backwards compatibility with the existing 
code, since only versions of OVS >= 1.10 have the flow-based tunneling code. If you 
want, I'd be happy to review your BP and code for this, or even work with you 
on this.
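
For reference, the flow-based tunneling mentioned above is the OVS feature
where a single tunnel port leaves the destination and key to the flow table,
created along these lines (illustrative; requires OVS 1.10 or newer):

  ovs-vsctl add-port br-int gre-flow -- set interface gre-flow \
      type=gre options:remote_ip=flow options:key=flow

The agent's flows would then set the tunnel destination and key per packet,
which is what would let br-tun's job be folded into br-int.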

Thanks,
Kyle

  
  
  



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python overhead for rootwrap

2013-08-02 Thread Chris Jones
Hi

On 2 August 2013 13:14, Daniel P. Berrange berra...@redhat.com wrote:

 for managing VMs. Nova isn't using as much as it could do though. Nova
 isn't using any of libvirt's storage or network related APIs currently,
 which could obsolete some of its uses of rootwrap.


That certainly sounds like a useful thing that could happen regardless of
what happens with sudo/rootwrap.


   * DBus isn't super pleasing to work with as a developer or a sysadmin
 No worse than OpenStack's own RPC system


Replacing a thing we don't really like, with another thing that isn't super
awesome, may not be a good move :)


 As a host-local service only, I'm not sure high availability is really
 a relevant issue.


So, I mentioned that because exec()ing out to a shell script has way fewer
ways it can go wrong. exec() doesn't go away and forget the thing you just
asked for because something is being upgraded, or restarted, or has crashed. I'm a
little rusty with DBus, but I don't think those sorts of things are well
catered for. Perhaps we don't care about that, but the change would be big
enough to at least figure out whether we care.


 but I still think it is better to have your root/non-root barrier defined
 in terms of APIs. It is much simpler to validate well defined parameters
 to API calls, than to parse  validate shell command lines. Shell commands
 have a nasty habit of changing over time, or being inconsistent across
 distros, or have ill defined error handling / reporting behaviour.


I do agree that privileged operations should be well defined and separated,
but I think in almost all cases you're going to find that the shell
commands are just moving to a different bit of code and will still be
fragile, just fragile inside a privileged daemon instead of fragile inside
a shell script. To pick a random example, nova's calls to iptables.
Something, somewhere is still going to be composing an iptables command out
of fragments and executing /sbin/iptables and hoping for a detailed answer.

Regardless of the mechanism used to implement this, I think that from the
perspective of someone hacking on the code that needs to make a privileged
call, and the code that implements that privileged call, the mechanism for
the call should be utterly transparent, as in, you are just calling a
method with some arguments in one place, and implementing that method and
returning something, in another place. That could be implemented on top of
sudo, root wrap, dbus, AMQP, SOAP, etc, etc.
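
As a minimal sketch of that kind of transparency (my own illustration, not an
existing OpenStack API; the class and method names are made up), the caller
only ever sees a plain method call and the transport behind it is swappable:

  import subprocess


  class SudoBackend(object):
      """One possible transport: build an argv list and run it under sudo."""

      def call(self, name, **kwargs):
          if name == 'add_iptables_rule':
              cmd = (['sudo', 'iptables', '-A', kwargs['chain']] +
                     kwargs['rule'].split())
              return subprocess.call(cmd)
          raise ValueError('unknown privileged call: %s' % name)


  class PrivilegedCalls(object):
      """What unprivileged code uses; the backend could wrap sudo,
      rootwrap, a DBus daemon, AMQP, etc."""

      def __init__(self, backend):
          self._backend = backend

      def add_iptables_rule(self, chain, rule):
          # from the caller's point of view this is just a method call
          return self._backend.call('add_iptables_rule', chain=chain, rule=rule)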

-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Alarming should be outside of Ceilometer as a separate package.

2013-08-02 Thread Sandy Walsh


On 08/02/2013 08:38 AM, Doug Hellmann wrote:
 
 
 
 OK. I think we went back and forth for a while on where to put it, but
 at the time I think we really only discussed Heat and Ceilometer as
 options. (Eoghan or Angus, please correct me if I'm remembering those
 discussions incorrectly.) Now that we have programs instead of
 projects maybe it's more clear that a separate repository would be
 possible, although from a pragmatic standpoint leaving it where it is
 for the rest of this dev cycle seems reasonable.
 
 Perhaps this is something to discuss at the summit?

Absolutely, no urgency here. I just wanted to get it in the ML to open
the conversation to a wider audience and have something to refer back to.


Re: [openstack-dev] [Glance] images tasks API -- final call for comments

2013-08-02 Thread Brian Rosmaita
Hi Paul,

There wasn't a follow up on the mailing list (actually, I guess this is it!).  
Basically, we discussed Jay's points in the glance meetings and on irc, and 
decided to stick with this approach.  I think the final exchange in that thread 
sums it up: he understands why we're proposing to do it this way, but he 
disagrees.  We certainly respect his opinion, and appreciate the extra scrutiny 
on the proposal, but when it comes down to it, I think there are more 
advantages than disadvantages to introducing a task resource.

Briefly,

1. It's better to introduce a new resource that's specifically designed to 
track task states and deliver clear and specific error messages than to mess 
with the image resource to get these things onto it.

2. The export task in particular would require a new resource anyway.  I 
guess you could just enhance the image response and use that, but then you'd 
have two sets of states on it, the state of the image itself and the state of 
the export task, and that just seems confusing.  So if we're going to introduce 
something new for the export task, it's worth considering whether the new 
resource would be useful for other similar operations.

3. Having a task resource can allow cancelling the task by deleting it.  
Otherwise, you're in the position of deleting an image that doesn't really 
exist yet (in the case of import and clone).  Again, you could do this without 
a new resource, but it just feels weird to me.

4. Jay's nova analogy: an instance in error state is still an instance, so an 
image that fails to import or clone correctly is still an image.  I'll pull a 
lawyer move here and say that (a) i don't think it's a good analogy, and (b) 
even if it is a good analogy, it's not a good argument.  So for (a), with 
instances, the provider has to get the instance provisioned, it's taking up 
resources on the host, etc.  With an image import, we don't really have to tie 
up similar resources before we know that the import or clone is going to 
succeed.  So while a broken instance is still an instance, we don't have to 
allow broken images to be images.  But also (this is (b)), customers get their 
knickers in a twist over broken instances, they don't like them, it's not a 
good experience to create an instance and have it go to error state and have to 
be deleted.  Rather than give them the same experience with an image 
import/clone, i.e., wait for the image to go to error state and then have to 
delete it, we can let the task go to error state and give them a good error 
message, and they don't have the frustration of dealing with a bad image.  Of 
course, they have the frustration of dealing with a failed task, but I think 
that's a big difference.

5. Following up on that, I like the separation of tasks and images with respect 
to a user listing what they have in their account.  You could do a similar 
thing by clever filtering of an image-list, but I don't see the point.  We've 
got tasks that may result in images (or in successful export of an image), and 
we've got images that you can use to boot instances.  I know this is a matter 
of opinion, but it just seems to me that tasks and images are clearly 
conceptually different, and having an image resource *and* a task resource 
reflects this in a non-confusing way.
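
To make the distinction the points above draw concrete, a task would be its
own small resource rather than extra state bolted onto the image. Purely
illustrative (the field names here are my guesses, not the proposal's actual
schema; see the wiki pages linked in the quoted message below for that):

  # illustrative only -- not the schema from the wiki pages
  task = {
      "id": "b3245c21-0000-0000-0000-000000000000",
      "type": "import",
      "status": "processing",
      "input": {"import_from": "swift://some-container/image.vhd"},
      "result": None,
      "message": "",
  }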

Anyway, this has gotten long enough.  Thanks for kicking off the new discussion!

cheers,
brian


From: Paul Bourke [pauldbou...@gmail.com]
Sent: Thursday, August 01, 2013 11:21 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Glance] images tasks API -- final call for 
comments

Hi Brian,

I had a read over the summary wiki page and the individual docs.

My main thought was that although the asynchronous nature seems attractive, the 
problems the new API is setting out to solve could be addressed within the existing 
images API.  This view seems to be shared and well highlighted by Jay in this 
thread: http://lists.openstack.org/pipermail/openstack-dev/2013-May/009527.html

Was there any follow up to that mail? (sorry if I missed it!)

Thanks,
-Paul


On 30 July 2013 19:36, Brian Rosmaita brian.rosma...@rackspace.com wrote:
After lots of discussion, here are the image import/export/cloning blueprints 
unified as a tasks API.  Please reply with comments!

Take a look at this doc first:
https://wiki.openstack.org/wiki/Glance-tasks-api
It's got a related documents section with links to the previous proposals and 
also to the earlier email list discussion.  It's also got links to the details 
for the import, export, and cloning tasks, which I guess I might as well paste 
here so you have 'em handy:
https://wiki.openstack.org/wiki/Glance-tasks-import
https://wiki.openstack.org/wiki/Glance-tasks-export
https://wiki.openstack.org/wiki/Glance-tasks-clone

Finally, here's a link to an experimental product page for this feature:

[openstack-dev] [Ceilometer] Looking for some help understanding default meters

2013-08-02 Thread Thomas Maddox
Hey all,

I've been poking around to get an understanding of what some of these default 
meters mean in the course of researching this Glance bug 
(https://bugs.launchpad.net/ceilometer/+bug/1201701). I was wondering if anyone 
could explain to me what the instance meter is. The unit 'instance' sort of 
confuses me when each one of these meters is tied to a single resource 
(instance), especially because it looks like a count of all notifications 
regarding a particular instance that hit the bus. Here's some output for one of 
the instances I spun up: http://paste.openstack.org/show/42963/. Another 
concern I have is I think I may have found another bug, because I can delete 
the instance shown in this paste, and it still has a resource state description 
of 'scheduling' long after it's been deleted: 
http://paste.openstack.org/show/42962/, much like the Glance issue I'm 
currently working on.

I'm very new to this, so feel free to be verbose. =]

Thanks for your time!

-Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python overhead for rootwrap

2013-08-02 Thread Russell Bryant
On 08/02/2013 07:52 AM, Thierry Carrez wrote:
 Daniel P. Berrange wrote:
 On Fri, Aug 02, 2013 at 10:58:11AM +0100, Mark McLoughlin wrote:
 On Thu, 2013-07-25 at 14:40 -0600, Mike Wilson wrote:
 In my opinion:

 1. Stop using rootwrap completely and get strong argument checking support
 into sudo (regex).
 2. Some sort of long lived rootwrap process, either forked by the service
 that want's to shell out or a general purpose rootwrapd type thing.

 I prefer #1 because it's surprising that sudo doesn't do this type of thing
 already. It _must_ be something that everyone wants. But #2 may be quicker
 and easier to implement, my $.02.

 IMHO, #1 set the discussion off in a poor direction.

 Who exactly is stepping up to do this work in sudo? Unless there's
 someone with a even prototype patch in hand, any insistence that we base
 our solution on this hypothetical feature is an unhelpful diversion.

 And even if this work was done, it will be a long time before it's in
 all the distros we support, so improving rootwrap or finding an
 alternate solution will still be an important discussion.

 Personally I'm of the opinion that from an architectural POV, use of
 either rootwrap or sudo is a bad solution, so arguing about which is
 better is really missing the bigger picture. In Linux, there has been
 a move away from use of sudo or similar approaches, towards the idea
 of having privilege-separated services. So if you wanted to do stuff
 related to storage, you'd have some small daemon running privileged,
 which exposed APIs over DBus, which the non-privileged thing would
 call to make storage changes. Operations exposed by the service would
 have access control configured via something like PolicyKit, and/or
 SELinux/AppArmour.

 Of course this is a lot more work than just hacking up some scripts
 using sudo or rootwrap. That's the price you pay for properly
 engineering formal APIs to do jobs instead of punting to random
 shell scripts.
 
 And for the record, I would be supportive of any proper effort to
 implement privileged calls using a (hopefully minimal) privileged
 daemon, especially for nodes that make heavy usage of privileged calls.
 I just don't feel that going back to sudo (or claiming you can just
 introduce all rootwrap features in sudo) is the proper way to fix the
 problem.
 

Cool, this seems like a good approach to me, as well.  Of course, we're
back to 'is anyone up for the task?'
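
As a very rough sketch of the kind of privilege-separated helper Daniel
describes, using dbus-python (the service name, interface and method are made
up; a matching D-Bus policy file and PolicyKit rules would still be needed,
and this is not an existing OpenStack component):

  import dbus
  import dbus.service
  import dbus.mainloop.glib
  from gi.repository import GLib

  BUS_NAME = 'org.example.StorageHelper'  # hypothetical name


  class StorageHelper(dbus.service.Object):

      @dbus.service.method(dbus_interface=BUS_NAME,
                           in_signature='si', out_signature='b')
      def CreateVolume(self, name, size_gb):
          # validate the typed arguments, then do the privileged work here
          return True


  if __name__ == '__main__':
      dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
      bus = dbus.SystemBus()
      name = dbus.service.BusName(BUS_NAME, bus)  # keep a reference to hold the name
      StorageHelper(conn=bus, object_path='/org/example/StorageHelper')
      GLib.MainLoop().run()

The unprivileged service would then call CreateVolume over the system bus with
typed arguments, instead of composing a command line.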

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python overhead for rootwrap

2013-08-02 Thread Shawn Hartsock
I would like to do this because it will let me grind out details I need to 
cover for other tasks, but I'm in danger of over-committing myself. How fast do 
you want it done? ... because that is a big job ...
# Shawn Hartsock 

Russell Bryant rbry...@redhat.com wrote:

Cool, this seems like a good approach to me, as well.  Of course, we're
back to is anyone up for the task?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] How to apply submit Nova v3 API tempest tests

2013-08-02 Thread Christopher Yeoh
On Fri, 02 Aug 2013 09:29:48 -0400
David Kranz dkr...@redhat.com wrote:
 On 08/02/2013 01:23 AM, Christopher Yeoh wrote:
  times we run these tests in
   the gate once this gets merged. I think it would be best to go
   about 
  this as smaller patches in a
   longer series, just adding the v3 tests, instead of copying and 
  pasting. But, that has it's own
   tradeoffs too. I think you should probably bring this up on the 
  mailing list to get other opinions
   on how to go about adding the v3 tests. I feel that this needs a 
  wider discussion.
 
  I think that the part 1/part 2 approach is the best way to go for
  the tempest tests. For the Nova changes we had a policy of
  reviewing, but not approving the part 1 patch until the part 2
  patch had been approved. So there is no extra gate load until its
  ready for v3.
 Chris, I think there is some background missing on exactly what this 
 part 1/part 2 approach is. I can imagine some things but it would be 
 better if you just gave a brief description of what you did in the
 nova code.

Sorry I should have explained better. The part1/part2 approach is to
split porting some code from v2 to v3 into two separate changesets:

- the first changeset only involves copying entire files from a v2 to
  v3 directory

- in the context of tempest the second changeset makes the required
  modifications so the test run will run correctly against the v3
  rather than the v2 api. There often will be lots of minor changes
  involved such as handling attribute renaming, different return codes,
  actions moving to methods etc.

The first changeset should only be approved when the second one is.

  As for the extra gate load, the extra v3 tests are an issue longer 
  term but I think we're hoping the parallelisation work is going to 
  save us :-)
 Well it will have to. As long as trunk is supporting v2 and v3 we
 have to run both I think.

I think one fallback alternative would be to have a VM in the gate
which tests against the v2 api and one which tests against the v3 api.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-02 Thread Patrick Petit

Dear All,

There have been some discussions recently about project Climate on
Stackforge, whose aim is to provide host reservation services. This
project is somewhat related to
https://wiki.openstack.org/wiki/WholeHostAllocation in that Climate
intends to deal with the reservation part of the dedicated resource pools
called Pclouds in that blueprint. The Climate wiki page can be found here:
https://wiki.openstack.org/wiki/Blueprint-nova-planned-resource-reservation-api.


The purpose of this email is that a team at Mirantis is proposing to
extend the scope of that service so that all sorts of resources
(physical and virtual) could be reserved. Mirantis's architectural
approach gives a first shot at this extension at
https://wiki.openstack.org/wiki/Resource-reservation-service. We
reviewed the proposal to evaluate how it fits with the initial use cases
and objectives of Climate. However, as the scope is becoming much
bigger, I thought we'd better bring the discussion into the open instead
of discussing it in private, so that everybody who has a stake or general
interest in the subject can chime in.


In the review below, I am referring to the Mirantis proposal at 
https://docs.google.com/document/d/1vsN9LLq9pEP4c5BJyfG8g2TecoPxWqIY4WMT1juNBPI/edit?pli=1


Here are four general comments/questions.

1. The primary assumption of Climate is that the role of the
   reservation service is to manage resource reservations and only
   resource reservations. This is because reserving a resource doesn't
   imply necessarily that the user wants to use it. In fact, as a user
   I may decide not to use a reservation at all and decide instead to
   resell it through some market place if that's possible. In its
   proposal, Mirantis specifies that the reservation service is also
   responsible for managing the life cycle of the reservations like
   starting instances when a lease becomes active. I foresee several
   situations where this behavior is not desirable, such as reserved
   instances being launched upon external conditions that can be
   time-based or load-based, regardless of the lease terms. In this
   situation it is typically not the reservation service but the
   auto-scaling service (Heat) that is in charge. So, is it planned in
   your design to make the life-cycle management part of the service
   optional or completely disabled if not needed?

2. The proposal specifies that the usage workflow is to first create a
   lease (with parameters including start and end dates) and then
   populate it with resource reservations requests to the Nova, Cinder,
   Neutron, ... APIs . I think this workflow can create two kinds of
   problems. First, a lease request could be synchronous or
   asynchronous but it should be transactional in my opinion. A second
   problem is that it probably won't work for physical host
   reservations since there is no support for that in Nova API.
   Originally, the idea of Climate is that the lease creation request
   contains the terms of the lease along with a specification of the
   type of resource (e.g. host capacity) to be reserved and the number
   of those. In the case of an immediate lease, the request would
   return success if the lease can be satisfied or failure otherwise.
   If successful, reservation is effective as soon as the lease
   creation request returns. This I think is a central principle both
   from a usability standpoint and an operational standpoint. What a
   user wants to know is whether a reservation can be granted in a
   all-or-nothing manner at the time he is asking the lease. Second, an
   operator wants to know what a lease entails operationally speaking
   both in terms of capacity availability and planning at the time a
   user requests the lease. Consequently, I think that the reservation
   API should allow a user to specify in the lease request the number
   and type of resource he wants to reserve along with the lease term
   parameters and that the system responds yes or no in a transactional
   manner.

3. The proposal specifies that a lease can contain a combo of different
   resources types reservations (instances, volumes, hosts, Heat
   stacks, ...) that can even be nested and that the reservation
   service will somehow orchestrate their deployment when the lease
   kicks in. In my opinion, many use cases (at least ours) do not
   warrant for that level of complexity and so, if that's something
   that is need to support your use cases, then it should be delivered
   as module that can be loaded optionally in the system. Our preferred
   approach is to use Heat for deployment orchestration.

4. The proposal specifies that the Reservation Scheduler notifies the
   Reservation Manager when a lease starts and ends. Do you intend to
   send those notifications directly or through Ceilometer? Reservation
   state changes are of general interest for operational and billing
   purposes. I also think that the Reservation Manager may 

Re: [openstack-dev] [Ceilometer] Alarming should be outside of Ceilometer as a separate package.

2013-08-02 Thread Eoghan Glynn

 No, you hit the nail on the head. It's nothing with the implementation,
 it's purely with the packaging and having it co-exist within ceilometer.
 Since it has its own services, uses Oslo, the CM client and operates via
 the public API, it should be able to live outside the main CM codebase.
 My concern is that it has a different mandate than CM (or the CM mandate
 is too broad).
 
 What really brought it on for me was doing code reviews for CM and
 hitting all this alarm stuff and thinking this is mental context switch
 from what CM does, it really doesn't belong here. (though I'm happy to
 help out with the reviews)
 
 -S

Hi Sandy,

In terms of distro packaging, the case that I'm most familiar with (Fedora and
derivatives) already splits out the ceilometer packaging in a fairly
fine-grained manner (with separate RPMs for the various services and agents).
I'd envisage a similar packaging approach will be followed for the alarming
services, so for deployments for which alarming is not required, this
functionality won't be foisted on anyone.

Now we could think about splitting it out even further to aid the sort of
composability you desire; however, this functionality is needed by Heat, so it
makes sense for it to live in one of the integrated projects (as opposed to a
newly incubated split-off).

In terms of the context switching required for reviewing alarming patches, I
don't see that as a huge issue for two reasons:

 - larger projects such as nova already require immensely greater context
   switching of its team of core devs

 - core devs can and do concentrate their reviewing on some functional
   sub-domains, i.e. everyone doesn't have to review the entire spectrum of
   patches

That being said, the efforts of team members such as yourself, who make an
obvious effort to review a wide range of patches, are of course appreciated!

Cheers,
Eoghan
 
 
  

Re: [openstack-dev] [taskflow] Taskflow Video Tutorial

2013-08-02 Thread Jay Pipes

Hi Jessica!

Unfortunately, I'm getting "This account's public links are generating
too much traffic and have been temporarily disabled!" when I go to that
link...


Is there an alternate location? I'm quite curious about the task flow 
library and am looking forward to watching the vid :)


Best,
-jay

On 08/01/2013 08:51 PM, Jessica Lucci wrote:

Hello all,

Provided here is a link to a quick overview and tutorial of the task
flow library. The video focuses on a distributed backend approach, but
still includes useful insight into the project as a whole. If you'd
rather just read the wiki and see examples, go ahead and skip to around
10:45. ;p

https://www.dropbox.com/s/kmpypmi95pk2taw/taskflow.mov

Useful links for the project:
https://wiki.openstack.org/wiki/TaskFlow
https://launchpad.net/taskflow

Thanks all!






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] New Bug tags in commit messages

2013-08-02 Thread James E. Blair
Hi,

Anthony Dodd has recently implemented some cool new features that we
discussed at the summit -- driving more automation from commit messages.
Here's what you need to know to use the new features:

Use header style references when referencing a bug in your commit
log. The following styles are now supported and recommended [1]:

Closes-Bug: #1234567 -- use 'Closes-Bug' if the commit is intended to
fully fix and close the bug being referenced.

Partial-Bug: #1234567 -- use 'Partial-Bug' if the commit is only a
partial fix and more work is needed.

Related-Bug: #1234567 -- use 'Related-Bug' if the commit is merely
related to the referenced bug.

While it is perfectly fine to reference a bug at any point within your
commit log, in order for proper automation to take place, ensure that
you reference your bugs on their own line, and preferably at the bottom
of the commit log near the Change-Id header as prescribed in our wiki
[2].
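
For example, a commit message using the new headers might look like this
(the summary, body, bug number and Change-Id are illustrative):

    Use the deleted-aware flavor lookup for resize

    Resizing an instance whose flavor has been deleted no longer
    raises an error.

    Change-Id: I8c1a4e5b2d3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b
    Closes-Bug: #1234567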

The Regular Expression which we use to parse commit logs for bug
references is case-insensitive. Using the header 'closes-bug' is
identical to using 'Closes-Bug' in terms of the automation it will
affect.

If your fix spans multiple commits, then simply use the 'Partial-Bug'
header when you reference your bug. Then, when you are ready to close
the bug with a final commit, use the 'Closes-Bug' header.

If you are having a lot of difficulty remembering to use the recommended
header styles, have no fear! Referencing your bugs the old school way
still works. That is:

bug #123454321 -- this will invoke the 'Closes-Bug' functionality.

fixes bug: #123454321 -- this will invoke the 'Closes-Bug'
functionality.

resolves bug: #123454321 -- this will invoke the 'Closes-Bug'
functionality.

Supplying an unknown bug header—such as 'Mega-Bug: #123454321' -- will
simply invoke the 'Related-Bug' functionality for safety reasons.

[1] For original summit discussion, see
https://etherpad.openstack.org/drive-automation-from-commitmsg

[2] https://wiki.openstack.org/wiki/GitCommitMessages

Thanks again, Anthony, for doing this!

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [taskflow] Taskflow Video Tutorial

2013-08-02 Thread Jessica Lucci
Yes - sorry about that. Wasn't thinking ahead when I uploaded the video. :p

You can view it on youtube here: http://www.youtube.com/watch?v=SJLc3U-KYxQ



On Aug 2, 2013, at 10:49 AM, Jay Pipes 
jaypi...@gmail.commailto:jaypi...@gmail.com
 wrote:

Hi Jessica!

Unfortunately, I'm getting This account's public links are generating too much 
traffic and have been temporarily disabled! when I go to that link...

Is there an alternate location? I'm quite curious about the task flow library 
and am looking forward to watching the vid :)

Best,
-jay

On 08/01/2013 08:51 PM, Jessica Lucci wrote:
Hello all,

Provided here is a link to a quick overview and tutorial of the task
flow library. The video focuses on a distributed backend approach, but
still includes useful insight into the project as a whole. If you'd
rather just read the wiki and see examples, go ahead and skip to around
10:45. ;p

https://www.dropbox.com/s/kmpypmi95pk2taw/taskflow.mov

Useful links for the project:
https://wiki.openstack.org/wiki/TaskFlow
https://launchpad.net/taskflow

Thanks all!


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] New Bug tags in commit messages

2013-08-02 Thread Mark McLoughlin
On Fri, 2013-08-02 at 09:00 -0700, James E. Blair wrote:
 Hi,
 
 Anthony Dodd has recently implemented some cool new features that we
 discussed at the summit -- driving more automation from commit messages.
 Here's what you need to know to use the new features:
 
 Use header style references when referencing a bug in your commit
 log. The following styles are now supported and recommended [1]:
 
 Closes-Bug: #1234567 -- use 'Closes-Bug' if the commit is intended to
 fully fix and close the bug being referenced.
 
 Partial-Bug: #1234567 -- use 'Partial-Bug' if the commit is only a
 partial fix and more work is needed.
 
 Related-Bug: #1234567 -- use 'Related-Bug' if the commit is merely
 related to the referenced bug.
 
 While it is perfectly fine to reference a bug at any point within your
 commit log, in order for proper automation to take place, ensure that
 you reference your bugs on their own line, and preferably at the bottom
 of the commit log near the Change-Id header as prescribed in our wiki
 [2].
 
 The Regular Expression which we use to parse commit logs for bug
 references is case-insensitive. Using the header 'closes-bug' is
 identical to using 'Closes-Bug' in terms of the automation it will
 affect.
 
 If your fix spans multiple commits, then simply use the 'Partial-Bug'
 header when you reference your bug. Then, when you are ready to close
 the bug with a final commit, use the 'Closes-Bug' header.
 
 If you are having a lot of difficulty remembering to use the recommended
 header styles, have no fear! Referencing your bugs the old school way
 still works. That is:
 
 bug #123454321 -- this will invoke the 'Closes-Bug' functionality.
 
 fixes bug: #123454321 -- this will invoke the 'Closes-Bug'
 functionality.
 
 resolves bug: #123454321 -- this will invoke the 'Closes-Bug'
 functionality.
 
 Supplying an unknown bug header—such as 'Mega-Bug: #123454321' -- will
 simply invoke the 'Related-Bug' functionality for safety reasons.
 
 [1] For original summit discussion, see
 https://etherpad.openstack.org/drive-automation-from-commitmsg
 
 [2] https://wiki.openstack.org/wiki/GitCommitMessages
 
 Thanks again, Anthony, for doing this!

Nice! This is going to be really useful, especially the ability to
mention bugs without the commit being seen as a fix for the bug, and the
ability to say a commit is a partial fix.

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] New Bug tags in commit messages

2013-08-02 Thread Anita Kuno

On 13-08-02 12:13 PM, Mark McLoughlin wrote:

On Fri, 2013-08-02 at 09:00 -0700, James E. Blair wrote:

Hi,

Anthony Dodd has recently implemented some cool new features that we
discussed at the summit -- driving more automation from commit messages.
Here's what you need to know to use the new features:

Use header style references when referencing a bug in your commit
log. The following styles are now supported and recommended [1]:

 Closes-Bug: #1234567 -- use 'Closes-Bug' if the commit is intended to
 fully fix and close the bug being referenced.

 Partial-Bug: #1234567 -- use 'Partial-Bug' if the commit is only a
 partial fix and more work is needed.

 Related-Bug: #1234567 -- use 'Related-Bug' if the commit is merely
 related to the referenced bug.

While it is perfectly fine to reference a bug at any point within your
commit log, in order for proper automation to take place, ensure that
you reference your bugs on their own line, and preferably at the bottom
of the commit log near the Change-Id header as prescribed in our wiki
[2].

The Regular Expression which we use to parse commit logs for bug
references is case-insensitive. Using the header 'closes-bug' is
identical to using 'Closes-Bug' in terms of the automation it will
affect.

If your fix spans multiple commits, then simply use the 'Partial-Bug'
header when you reference your bug. Then, when you are ready to close
the bug with a final commit, use the 'Closes-Bug' header.

If you are having a lot of difficulty remembering to use the recommended
header styles, have no fear! Referencing your bugs the old school way
still works. That is:

 bug #123454321 -- this will invoke the 'Closes-Bug' functionality.

 fixes bug: #123454321 -- this will invoke the 'Closes-Bug'
 functionality.

 resolves bug: #123454321 -- this will invoke the 'Closes-Bug'
 functionality.

 Supplying an unknown bug header—such as 'Mega-Bug: #123454321' -- will
 simply invoke the 'Related-Bug' functionality for safety reasons.

[1] For original summit discussion, see
 https://etherpad.openstack.org/drive-automation-from-commitmsg

[2] https://wiki.openstack.org/wiki/GitCommitMessages

Thanks again, Anthony, for doing this!

Nice! This is going to be really useful, especially the ability to
mention bugs without the commit being seen as a fix for the bug, and the
ability to say a commit is a partial fix.

Cheers,
Mark.

Nice work, Anthony.

Now the next step is getting people to use it.

Well done,
Anita.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Alarming should be outside of Ceilometer as a separate package.

2013-08-02 Thread Sandy Walsh


On 08/02/2013 12:27 PM, Eoghan Glynn wrote:
 
 On 08/01/2013 07:22 PM, Doug Hellmann wrote:



 On Thu, Aug 1, 2013 at 10:31 AM, Sandy Walsh sandy.wa...@rackspace.com
 mailto:sandy.wa...@rackspace.com wrote:

 Hey y'all,

 I've had a little thorn in my claw on this topic for a while and
 thought
 I'd ask the larger group.

 I applaud the efforts of the people working on the alarming additions
 to
 Ceilometer, but I've got concerns that we're packaging things the
 wrong way.

 I fear we're making another Zenoss/Nagios with Ceilometer. It's trying
 to do too much.

 The current trend in the monitoring work (#monitoringlove) is to build
 your own stack from a series of components. These components take in
 inputs, process them and spit out outputs.
 Collectors/Transformers/Publishers. This is the model CM is built on.

 Making an all-singing-all-dancing monolithic monitoring package is the
 old way of building these tools. People want to use best-of-breed
 components for their monitoring stack. I'd like to be able to use
 reimann.io http://reimann.io for my stream manager, diamond for my
 collector, logstash for
 my parser, etc. Alarming should just be another consumer.

 CM should do one thing well. Collect data from openstack, store and
 process them, and make them available to other systems via the API or
 publishers. That's all. It should understand these events better than
 any other product out there. It should be able to produce meaningful
 metrics/meters from these events.

 Alarming should be yet another consumer of the data CM produces. Done
 right, the If-This-Then-That nature of the alarming tool could be
 re-used by the orchestration team or perhaps even scheduling.
 Intertwining it with CM is making the whole thing too complex and rigid
 (imho).

 CM should be focused on extending our publishers and input plug-ins.

 I'd like to propose that alarming becomes its own project outside of
 Ceilometer. Or, at the very least, its own package, external of the
 Ceilometer code base. Perhaps it still lives under the CM moniker, but
 packaging-wise, I think it should live as a separate code base.


 It is currently implemented as a pair of daemons (one to monitor the
 alarm state, another to send the notifications). Both daemons use a
 ceilometer client to talk to the REST API to consume the sample data or
 get the alarm details, as required. It looks like alarms are triggered
 by sending RPC cast message, and that those in turn trigger the webhook
 invocation. That seems pretty loosely coupled, as far as the runtime
 goes. Granted, it's still in the main ceilometer code tree, but that
 doesn't say anything about how the distros will package it.

 I'll admit I haven't been closely involved in the development of this
 feature, so maybe my quick review of the code missed something that is
 bringing on this sentiment?

 No, you hit the nail on the head. It's nothing with the implementation,
 it's purely with the packaging and having it co-exist within ceilometer.
 Since it has its own services, uses Oslo, the CM client and operates via
 the public API, it should be able to live outside the main CM codebase.
 My concern is that it has a different mandate than CM (or the CM mandate
 is too broad).

 What really brought it on for me was doing code reviews for CM and
 hitting all this alarm stuff and thinking this is mental context switch
 from what CM does, it really doesn't belong here. (though I'm happy to
 help out with the reviews)

 -S
 
 Hi Sandy,
 
 In terms of distro packaging, the case that I'm most familiar (Fedora  
 derivatives)
 already splits out the ceilometer packaging in a fairly fine-grained manner 
 (with
 separate RPMs for the various services and agents). I'd envisage a similar 
 packaging
 approach will be followed for the alarming services, so for deployments for 
 which
 alarming is not required, this functionality won't be foisted on anyone.

Thanks for the feedback Eoghan.

I don't imagine that should be a big problem. Packaging in the sense of
the code base is different issue. If, for all intents and purposes,
alarming is a separate system: uses external api's, only uses sanctioned
CM client libraries, is distro packaged separately and optionally
installed/deployed then I don't understand why it has to live in the CM
codebase?

 Now we could think about splitting it out even further to aid the sort of 
 composability
 you desire, however this functionality is needed by Heat, so it makes sense 
 for it to
 live in one of the integrated projects (as opposed to a newly incubated 
 split-off).

I'm not proposing it becomes a newly incubated project. I think it makes
sense to live under the CM moniker. But code-wise, it should be a
separate repo. This wouldn't pose any problem since -infra already does
this with its many repos.

 In terms of the 

Re: [openstack-dev] [Neutron] devstack + neutron fails on firewall_driver

2013-08-02 Thread James Kyle
Following up on my own thread, the fix can be integrated into ./stack.sh by 
adding this to the localrc:

 # FIXES: https://bugs.launchpad.net/neutron/+bug/1206013
 OSLOCFG_REPO=https://github.com/openstack/oslo.config.git
 OSLOCFG_BRANCH=1.2.0a3


If you've already run stack.sh, you might have to set RECLONE too.
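For example, something like this in localrc (I believe devstack accepts
either yes or True here):

    RECLONE=yes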

cheers,

-james

On Aug 1, 2013, at 3:25 PM, James Kyle ja...@jameskyle.org wrote:

 Sorry, I cleaned them out after resolving. 
 
 Kyle helped me with the bug:
 
 This fixed it for me:
 
 cd /usr/local/lib/python2.7/dist-packages
 sudo rm -rf oslo*
 sudo pip install --upgrade 
 http://tarballs.openstack.org/oslo.config/oslo.config-1.2.0a3.tar.gz#egg=oslo.config-1.2.0a3
 
 It's tracked in the launchpad issue 
 https://bugs.launchpad.net/neutron/+bug/1206013
 
 -james
 
 On Aug 1, 2013, at 2:48 PM, Rajesh Mohan rajesh.mli...@gmail.com wrote:
 
 The links for the error messages are failing for me.
 
 Can you send me the actual error message?
 
 Thanks.
 
 
 
 On Thu, Aug 1, 2013 at 9:51 AM, James Kyle ja...@jameskyle.org wrote:
 Morning,
 
 I'm having some issues getting devstack + neutron going.
 
 If I don't use Q_USE_DEBUG_COMMAND, it completes and notes that the l3 agent 
 has failed to start. Here's a paste of the stack.sh error and the stack 
 trace from q-l3 = https://gist.github.com/jameskyle/6133049
 
 If I do use the debug command, this is the error/logs I get: 
 https://gist.github.com/jameskyle/6133125
 
 And this is my localrc https://gist.github.com/jameskyle/6133134
 
 Currently running on an ubuntu 12.04 vm.
 
 The behavior is somewhat recent, last week or so. But I can't find any 
 references to the issue in bugs.
 
 Thanks for any input!
 
 Cheers,
 
 -james
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [taskflow] Taskflow Video Tutorial

2013-08-02 Thread Jay Pipes

Thanks much!

On 08/02/2013 12:06 PM, Jessica Lucci wrote:

Yes - sorry about that. Wasn't thinking ahead when I uploaded the video. :p

You can view it on youtube here: http://www.youtube.com/watch?v=SJLc3U-KYxQ


*
*
On Aug 2, 2013, at 10:49 AM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com
  wrote:


Hi Jessica!

Unfortunately, I'm getting This account's public links are generating
too much traffic and have been temporarily disabled! when I go to
that link...

Is there an alternate location? I'm quite curious about the task flow
library and am looking forward to watching the vid :)

Best,
-jay

On 08/01/2013 08:51 PM, Jessica Lucci wrote:

Hello all,

Provided here is a link to a quick overview and tutorial of the task
flow library. The video focuses on a distributed backend approach, but
still includes useful insight into the project as a whole. If you'd
rather just read the wiki and see examples, go ahead and skip to around
10:45. ;p

https://www.dropbox.com/s/kmpypmi95pk2taw/taskflow.mov

Useful links for the project:
https://wiki.openstack.org/wiki/TaskFlow
https://launchpad.net/taskflow

Thanks all!


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python overhead for rootwrap

2013-08-02 Thread Dan Smith
 Any solution where you need to modify sudoers every time the code
 changes is painful, because there is only one sudo configuration on a
 machine and it's owned by root.

Hmm? At least on ubuntu there is a default /etc/sudoers.d directory,
where we could land per-service files like nova-compute.conf,
nova-network.conf, etc. I don't think that's there by default on Fedora
or RHEL, but adding the includedir to the base config works as expected.
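Something like the following is what I have in mind -- a sketch only, since
the rootwrap binary path varies by distro and install method:

    # /etc/sudoers.d/nova-compute
    # allow the nova service user to run only the rootwrap entry point as root
    nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf *

    # and, where the distro doesn't already ship it, in /etc/sudoers:
    #includedir /etc/sudoers.d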

 The end result was that the sudoers file were not maintained and
 everyone ran and tested with a convenient blanket-permission sudoers
 file.

Last I checked, the nova rootwrap policy includes blanket approvals for
things like chmod, which pretty much eliminates any sort of expectation
of reasonable security without improvement by the operator (which I
think is unrealistic).
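For reference, the sort of filter entry I mean looks roughly like this (from
memory, so treat it as illustrative of the format rather than an exact quote
of nova's filter files):

    [Filters]
    # a CommandFilter only matches the executable, not its arguments, so this
    # effectively allows any chmod invocation to run as root
    chmod: CommandFilter, chmod, root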

I'm not sure what the right answer is here. I'm a little afraid of a
rootwrap daemon. However, nova-network choking on 50 instances seems to
be obviously not an option...

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Review request: Blurprint of API validation

2013-08-02 Thread Russell Bryant
On 07/09/2013 07:45 AM, Ken'ichi Ohmichi wrote:
 
 Hi,
 
 The blueprint nova-api-validation-fw has not been approved yet.
 I hope the core patch of this blueprint is merged to Havana-2,
 because of completing comprehensive API validation of Nova v3 API
 for Havana release. What should we do for it?

I apologize for taking so long to address this.

Here is my current take on this based on reviewing discussions, code,
and talking to others about it.

From a high level, API input validation is obviously a good thing.
Having a common framework to do it is better.  What complicates this
submission is the effort to standardize on Pecan/WSME for APIs
throughout OpenStack.

We've discussed WSME and jsonschema on the mailing list.  There are
perhaps some things that can be expressed using jsonschema, but not WSME
today.  So, there are some notes on
https://etherpad.openstack.org/NovaApiValidationFramework showing how
the two could be used together at some point.  However, I don't think
it's really desirable long term.  It seems a bit awkward, and some
information gets duplicated.
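To make the comparison a bit more concrete, the jsonschema side of this is
essentially declarative dictionaries plus a validate() call. A minimal sketch
(the schema below is invented, not the one from the proposed patches):

    import jsonschema

    create_server = {
        "type": "object",
        "properties": {
            "name": {"type": "string", "minLength": 1, "maxLength": 255},
            "flavorRef": {"type": "string"},
        },
        "required": ["name", "flavorRef"],
        "additionalProperties": False,
    }

    # raises jsonschema.ValidationError if the request body doesn't conform
    jsonschema.validate({"name": "vm1", "flavorRef": "1"}, create_server)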

We had previously established that using WSME was the long term goal
here.  Going forward with jsonschema with the current nova APIs is a
benefit in the short term, but I do not think it's necessarily in
support of the long term goal if there isn't consensus that combining
WSME+jsonschema is a good idea.

This sort of thing affects a lot of code, so the direction is important.
 I do not think we should proceed with this.  It seems like the best
thing to do that helps the long term goal is to work on migrating our
API to WSME.   In particular, I think we could do this for the v3 API,
since it's not going to be locked down until Icehouse.  At the same
time, we should contribute back to WSME to add the features we feel are
missing to allow the types of validation we would like to do.

If there is significant disagreement with this decision, I'm happy to
continue talking about it.  However, I really want to see consensus on
this and how it fits in with the long term goals before moving forward.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [UX] - Voting for New UX Discussion Tool Started

2013-08-02 Thread Jaromir Coufal

Hi folks,

The UX community for OpenStack
(https://plus.google.com/u/0/communities/100954512393463248122) is
looking for a new place for UX related discussions. The current format of
Google+ is bringing us a lot of issues, which we are trying to resolve
with a new tool, where developers/designers can ask UX related questions,
where discussions around proposals can happen and where we can find the
best solutions for the issues brought up.


Finally, after longer discussions (about discussions :)), we put
together a long list of possible solutions for the problems we are
experiencing on the G+ pages.


In the survey, you can find a summary, and you can have a look at
existing running instances of these tools. I would like everybody with a
preference to vote. There is a one week deadline so everybody has enough
time to speak up.


http://www.surveymonkey.com/s/MNGV8D5
Deadline: August 9, 23:59 GMT

Cheers
-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Review request: Blurprint of API validation

2013-08-02 Thread Doug Hellmann
On Fri, Aug 2, 2013 at 4:35 PM, Russell Bryant rbry...@redhat.com wrote:

 On 07/09/2013 07:45 AM, Ken'ichi Ohmichi wrote:
 
  Hi,
 
  The blueprint nova-api-validation-fw has not been approved yet.
  I hope the core patch of this blueprint is merged to Havana-2,
  because of completing comprehensive API validation of Nova v3 API
  for Havana release. What should we do for it?

 I apologize for taking so long to address this.

 Here is my current take on this based on reviewing discussions, code,
 and talking to others about it.

 From a high level, API input validation is obviously a good thing.
 Having a common framework to do it is better.  What complicates this
 submission is the effort to standardize on Pecan/WSME for APIs
 throughout OpenStack.

 We've discussed WSME and jsonschema on the mailing list.  There are
 perhaps some things that can be expressed using jsonschema, but not WSME
 today.  So, there are some notes on
 https://etherpad.openstack.org/NovaApiValidationFramework showing how
 the two could be used together at some point.  However, I don't think
 it's really desirable long term.  It seems a bit awkward, and some
 information gets duplicated.

 We had previously established that using WSME was the long term goal
 here.  Going forward with jsonschema with the current nova APIs is a
 benefit in the short term, but I do not think it's necessarily in
 support of the long term goal if there isn't consensus that combining
 WSME+jsonschema is a good idea.

 This sort of thing affects a lot of code, so the direction is important.
  I do not think we should proceed with this.  It seems like the best
 thing to do that helps the long term goal is to work on migrating our
 API to WSME.   In particular, I think we could do this for the v3 API,
 since it's not going to be locked down until Icehouse.  At the same
 time, we should contribute back to WSME to add the features we feel are
 missing to allow the types of validation we would like to do.

 If there is significant disagreement with this decision, I'm happy to
 continue talking about it.  However, I really want to see consensus on
 this and how it fits in with the long term goals before moving forward.


When we discussed this earlier, there was concern about moving to a
completely new toolset for the new API in Havana because of other changes
going on at the same time (something to do with extensions, IIRC). I agreed
it made sense to stick with our current tools to avoid adding risk to the
schedule. If that schedule has slipped into the next release, or if you
feel there is time after all, then I would also prefer to go ahead with the
general consensus reached at the Havana summit and use WSME.

Given a little time, I think we can come up with something better than the
method of combining WSME and jsonschema proposed in the etherpad linked
above, which effectively requires us to declare the types of the parameters
twice in different formats. As Russell said, if we need to add to WSME to
make it easier to use, we should do that.
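For anyone who hasn't looked at WSME yet, declaring the types once there
looks roughly like this (a sketch only -- the attributes are invented for
illustration and not taken from the Nova API):

    from wsme import types as wtypes

    class Server(wtypes.Base):
        # invented attributes, for illustration only
        name = wtypes.wsattr(wtypes.text, mandatory=True)
        flavor_ref = wtypes.text

The attraction is that the one declaration drives the validation, so layering
a separate jsonschema document on top of it is where the duplication creeps in.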

I am working on getting WSME onto stackforge, to make contributions easier
(it's on bitbucket now, but using hg and pull requests is pretty different
from our normal review process and may add friction for some people). We
ran into a few tricky spots because of the wide variety of test
configurations in play, and that caused some delays. I think those issues
are worked out (especially with the Python 3.3 build systems available
now), so I will be picking that work up in a week or two (I'm traveling
next week).

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Review request: Blurprint of API validation

2013-08-02 Thread Russell Bryant
On 08/02/2013 05:13 PM, Doug Hellmann wrote:
 When we discussed this earlier, there was concern about moving to a
 completely new toolset for the new API in Havana because of other
 changes going on at the same time (something to do with extensions,
 IIRC). I agreed it made sense to stick with our current tools to avoid
 adding risk to the schedule. If that schedule has slipped into the next
 release, or if you feel there is time after all, then I would also
 prefer to go ahead with the general consensus reached at the Havana
 summit and use WSME.

The Nova v3 API schedule has slipped.  A huge amount of progress has
been made, but it's going to be marked experimental in Havana.  We're
going to wrap it up for Icehouse.  So, there's another release cycle
available to work on the v3 API infrastructure.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][glance] Future of nova's image API

2013-08-02 Thread Joe Gordon
Hi All,

even though Glance was pulled out of Nova years ago, Nova still has an
images API that proxies back to Glance.  Since Nova is in the process of
creating a new V3 API, we now have a chance to re-evaluate this API.
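Roughly, the proxying in question amounts to the following (paths shown only
for illustration):

    GET /v2/{tenant_id}/images/{image_id}    # nova compute API
      --> a GET against glance's own images API, made on the user's behalf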

* Do we still need this in Nova; is there any reason not to just use Glance
directly?  I have vague concerns about making the Glance API publicly
accessible, but I am not sure what the underlying reason is.
* If it is still needed in Nova today, can we remove it in the future, and
if so what is the timeline?

best,
Joe Gordon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Event API Access Controls

2013-08-02 Thread Herndon, John Luke (HPCS - Ft. Collins)
Hello,

I'm currently implementing the event api blueprint[0], and am wondering what
access controls we should impose on the event api. The purpose of the
blueprint is to provide a StackTach equivalent in the ceilometer api. I
believe that StackTach is used as an internal tool, with no access for end
users. Given that the event api is targeted at administrators, I am currently
thinking that it should be limited to admin users only. However, I wanted to
ask for input on this topic. Any arguments for opening it up so users can
look at events for their resources? Any arguments for not doing so?

PS - I'm new to the ceilometer project, so let me introduce myself. My name
is John Herndon, and I work for HP. I've been freed up from a different
project and will be working on ceilometer. Thanks, looking forward to working
with everyone!

-john

0: https://blueprints.launchpad.net/ceilometer/+spec/specify-event-api
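PPS - for what it's worth, if the consensus is admin-only, that could be
expressed with a single policy rule. A sketch only: the rule name below is
hypothetical and not necessarily what ceilometer's policy file would use:

    {
        "telemetry:events:index": "role:admin"
    }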



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone Split Backend LDAP Question

2013-08-02 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Hello,

With some minor tweaking of the keystone common/ldap/core.py file, I have been 
able to authenticate and get an unscoped token for a user from an LDAP 
Enterprise Directory. I want to continue testing but I have some questions that 
need to be answered before I can continue.


1.   Do I need to add the user from the LDAP server to the Keystone SQL 
database or will the H-2 code search the LDAP server?

2.   When I performed a keystone user-list, the following log file entries
were written, indicating that keystone was attempting to get all the users on
the massive Enterprise Directory. How do we limit this query to just the one
user or group of users we are interested in? (A possible config sketch
follows question 3 below.)

2013-07-23 14:04:31DEBUG [keystone.common.ldap.core] LDAP bind: 
dn=cn=CloudOSKeystoneDev, ou=Applications, o=hp.com
2013-07-23 14:04:32DEBUG [keystone.common.ldap.core] In get_connection 6 
user: cn=CloudOSKeystoneDev, ou=Applications, o=hp.com
2013-07-23 14:04:32DEBUG [keystone.common.ldap.core] MY query in 
_ldap_get_all: ()
  2013-07-23 14:04:32DEBUG [keystone.common.ldap.core] LDAP search: 
dn=ou=People,o=hp.com, scope=2, query=(), attrs=['businessCategory', 
'userPassword', 'hpStatus', 'mail', 'uid']

3.   Next I want to acquire a scoped token. How do I assign the LDAP user 
to a local project?
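A config sketch for question 2, assuming keystone's standard [ldap] options
and a directory attribute that can be filtered on (the group DN below is
made up):

    [ldap]
    url = ldap://your-ldap-server
    user_tree_dn = ou=People,o=hp.com
    user_objectclass = inetOrgPerson
    # restrict which directory entries keystone treats as users
    user_filter = (memberOf=cn=cloud-users,ou=Groups,o=hp.com)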

Regards,

Mark Miller
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Enabling neutron gating

2013-08-02 Thread Nachi Ueno
Hi Folks

It looks like the neutron gate failure rate has improved to be about the same
as the non-neutron gate, so I would like to suggest enabling neutron gating
again.

This is the failure rate over 12 hours on 2013-08-01.

gate-tempest-devstack-vm-full:18.75%
gate-tempest-devstack-vm-neutron:13.21%

Here are the graphs:

[1] neutron gating
http://graphite.openstack.org/render/?width=586height=308_salt=1375457125.913target=alias%28summarize%28stats_counts.zuul.pipeline.gate.job.gate-tempest-devstack-vm-neutron.FAILURE%2C%221h%22%29%2C%20%27Failure%27%29target=alias%28summarize%28stats_counts.zuul.pipeline.gate.job.gate-tempest-devstack-vm-neutron.SUCCESS%2C%221h%22%29%2C%20%27Success%27%29title=Neutron%20Gate%20Results%20per%20Hour

(you can see the rate improves after Friday 12:00)

[2] non-neutron gating
http://graphite.openstack.org/render/?width=586height=308_salt=1375457125.913target=alias%28summarize%28stats_counts.zuul.pipeline.gate.job.gate-tempest-devstack-vm-full.FAILURE%2C%221h%22%29%2C%20%27Failure%27%29target=alias%28summarize%28stats_counts.zuul.pipeline.gate.job.gate-tempest-devstack-vm-full.SUCCESS%2C%221h%22%29%2C%20%27Success%27%29title=Non-Neutron%20Gate%20Results%20per%20Hour

List of problems and status:

Bug1: https://bugs.launchpad.net/neutron/+bug/1194026 [Partially fixed]
Cause: Unknown but rate is improved after the first fix
How to reproduce: Unknown

Bug2: https://bugs.launchpad.net/neutron/+bug/1206307 [Fixed]
Cause: PBR issue
How to reproduce: run devstack

Bug3:https://bugs.launchpad.net/neutron/+bug/1207541 [Fixed]
Cause: oslo changes
Fix https://review.openstack.org/#/c/39815/

Since we caught 2 bugs after voting was disabled for neutron,
I would like to enable it again sooner rather than later.
# We can disable the voting again if it starts failing.

Best
Nachi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Review request: Blurprint of API validation

2013-08-02 Thread Doug Hellmann
On Fri, Aug 2, 2013 at 5:19 PM, Russell Bryant rbry...@redhat.com wrote:

 On 08/02/2013 05:13 PM, Doug Hellmann wrote:
  When we discussed this earlier, there was concern about moving to a
  completely new toolset for the new API in Havana because of other
  changes going on at the same time (something to do with extensions,
  IIRC). I agreed it made sense to stick with our current tools to avoid
  adding risk to the schedule. If that schedule has slipped into the next
  release, or if you feel there is time after all, then I would also
  prefer to go ahead with the general consensus reached at the Havana
  summit and use WSME.

 The Nova v3 API schedule has slipped.  A huge amount of progress has
 been made, but it's going to be marked experimental in Havana.  We're
 going to wrap it up for Icehouse.  So, there's another release cycle
 available to work on the v3 API infrastructure.


Sounds good.

Doug



 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nomination to add Chris Krelle to ironic core

2013-08-02 Thread Devananda van der Veen
It's official -- welcome, Chris!


On Wed, Jul 31, 2013 at 6:57 PM, Wentian Jiang went...@unitedstack.comwrote:

 Chris +1


 On Thu, Aug 1, 2013 at 4:17 AM, Joe Gordon joe.gord...@gmail.com wrote:

 +1


 On Wed, Jul 31, 2013 at 9:41 AM, Lucas Alvares Gomes 
 lucasago...@gmail.com wrote:

 +1

 On Wed, Jul 31, 2013 at 5:29 PM, Anita Kuno ak...@lavabit.com wrote:
  I agree too.
 
 
  On 13-07-31 12:20 PM, Ghe Rivero wrote:
 
  +1
 
 
  On Wed, Jul 31, 2013 at 6:10 PM, Devananda van der Veen
  devananda@gmail.com wrote:
 
  Hi,
 
  I'd like to propose to add Chris (NobodyCam) to ironic-core. He has
 been
  doing a lot of good reviews and running the weekly meetings when I've
 been
  unavailable due to travel.
 
 
  Cheers,
  Devananda
 
 
  **
  http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt
 
  +---+---+
  |Reviewer   | Reviews (-2|-1|+1|+2) (+/- ratio) |
  +---+---+
  |  devananda ** |  175 (5|40|0|130) (74.3%) |
  |   nobodycam   |   98 (0|36|62|0) (63.3%)  |
  |anteaya|   81 (0|15|66|0) (81.5%)  |
  |  lucasagomes  |   38 (0|7|31|0) (81.6%)   |
  | prykhodchenko |   28 (0|10|18|0) (64.3%)  |
  |   jiangwt100  |   19 (0|1|18|0) (94.7%)   |
  |  lifeless **  |   15 (0|12|0|3) (20.0%)   |
  |  jogo |7 (0|4|3|0) (42.9%)|
  |   sdague **   |7 (0|4|0|3) (42.9%)|
  |mtaylor| 4 (0|4|0|0) (0.0%)|
  | markmc|2 (0|1|1|0) (50.0%)|
  | yuriyz| 1 (0|1|0|0) (0.0%)|
  |   ghe.rivero  |1 (0|0|1|0) (100.0%)   |
  | eglynn|1 (0|0|1|0) (100.0%)   |
  | doug-hellmann |1 (0|0|1|0) (100.0%)   |
  | dmllr |1 (0|0|1|0) (100.0%)   |
  | danms |1 (0|0|1|0) (100.0%)   |
  +---+---+
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Pinky: Gee, Brain, what do you want to do tonight?
  The Brain: The same thing we do every night, Pinky—try to take over
 the
  world!
 
   .''`.  Pienso, Luego Incordio
  : :' :
  `. `'
`-www.debian.orgwww.openstack.com
 
  GPG Key: 26F020F7
  GPG fingerprint: 4986 39DA D152 050B 4699  9A71 66DB 5A36 26F0 20F7
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Wentian Jiang
 UnitedStack Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] devstack + neutron fails on firewall_driver

2013-08-02 Thread Monty Taylor


On 08/02/2013 01:06 PM, James Kyle wrote:
 Following up on my own thread, the fix can be integrated into
 ../stack.sh by adding this to the localrc:
 
 # FIXES: https://bugs.launchpad.net/neutron/+bug/1206013
 OSLOCFG_REPO=https://github.com/openstack/oslo.config.git
 OSLOCFG_BRANCH=1.2.0a3

Just so you know - Sean and I have been working on a more systemic
solution so that this problem goes away and that we prevent this from
happening again.

 If you've already run stack, might have to set RECLONE too.
 
 cheers,
 
 -james
 
 On Aug 1, 2013, at 3:25 PM, James Kyle ja...@jameskyle.org
 mailto:ja...@jameskyle.org wrote:
 
 Sorry, I cleaned them out after resolving. 

 Kyle helped me with the bug:

 This fixed it for me:

 cd /usr/local/lib/python2.7/dist-packages
 sudo rm -rf oslo*
 sudo pip install
 --upgrade 
 http://tarballs.openstack.org/oslo.config/oslo.config-1.2.0a3.tar.gz#egg=oslo.config-1.2.0a3

 It's tracked in the launchpad issue
 https://bugs.launchpad.net/neutron/+bug/1206013

 -james

 On Aug 1, 2013, at 2:48 PM, Rajesh Mohan rajesh.mli...@gmail.com
 mailto:rajesh.mli...@gmail.com wrote:

 The links for the error messages are failing for me.

 Can you send me the actual error message?

 Thanks.



 On Thu, Aug 1, 2013 at 9:51 AM, James Kyle ja...@jameskyle.org
 mailto:ja...@jameskyle.org wrote:

 Morning,

 I'm having some issues getting devstack + neutron going.

 If I don't use Q_USE_DEBUG_COMMAND, it completes and notes that
 the l3 agent has failed to start. Here's a paste of the stack.sh
 error and the stack trace from q-l3 =
 https://gist.github.com/jameskyle/6133049

 If I do use the debug command, this is the error/logs I get:
 https://gist.github.com/jameskyle/6133125

 And this is my localrc https://gist.github.com/jameskyle/6133134

 Currently running on an ubuntu 12.04 vm.

 The behavior is somewhat recent, last week or so. But I can't
 find any references to the issue in bugs.

 Thanks for any input!

 Cheers,

 -james
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Future of nova's image API

2013-08-02 Thread Monty Taylor


On 08/02/2013 05:23 PM, Joe Gordon wrote:
 Hi All,
 
 even though Glance, has been pulled out of Nova years ago, Nova still
 has a images API that proxies back to Glance.  Since Nova is in the
 process of creating a new, V3, API, we know have a chance to re-evaluate
 this API. 
 
 * Do we still need this in Nova, is there any reason to not just use
 Glance directly?  I have vague concerns about making Glance API publicly
 accessible, but I am not sure what the underlying reason is

I want the glance API everywhere publicly right now!!! :)

 * If it is still needed in Nova today, can we remove it in the future
 and if so what is the timeline?

Honestly, I think we should ditch it. Glance is our image service, not
nova, we should use it. For user-experience stuff,
python-openstackclient should be an excellent way to expose both through
a single tool without needing to proxy one service through another.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] FWaaS: Support for explicit commit

2013-08-02 Thread Sumit Naiksatam
Hi All,

In Neutron Firewall as a Service (FWaaS), we currently support an
implicit commit mode, wherein a change made to a firewall_rule is
propagated immediately to all the firewalls that use this rule (via
the firewall_policy association), and the rule gets applied in the
backend firewalls. This might be acceptable; however, it is different
from the explicit commit semantics which most firewalls support.
Having an explicit commit operation ensures that multiple rules can be
applied atomically, as opposed to the implicit case, where each rule
is applied on its own and thus opens up the possibility of security
holes between two successive rule applications.

So the proposal here is quite simple -

* When any changes are made to the firewall_rules
(added/deleted/updated), no changes will happen on the firewall (only
the corresponding firewall_rule resources are modified).

* We will support an explicit commit operation on the firewall
resource. Any changes made to the rules since the last commit will now
be applied to the firewall when this commit operation is invoked.

* A show operation on the firewall will show a list of the currently
committed rules, and also the pending changes.
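To make the proposed flow concrete, the interaction would look roughly like
this (resource paths follow the existing FWaaS API; the commit URI itself is
purely illustrative and open for discussion):

    PUT  /v2.0/fw/firewall_rules/{rule-id}       # rule resource updated, firewall untouched
    PUT  /v2.0/fw/firewall_policies/{policy-id}  # likewise, no backend change yet
    POST /v2.0/fw/firewalls/{fw-id}/commit       # proposed: apply all pending changes atomically
    GET  /v2.0/fw/firewalls/{fw-id}              # shows committed rules plus pending changes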

Kindly respond if you have any comments on this.

Thanks,
~Sumit.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Future of nova's image API

2013-08-02 Thread Christopher Yeoh
Hi Joe,


Am on my phone so can't find the links at the moment but there was some
discussion around this when working out what we should leave out of the v3 api.
Some people had concerns about exposing the glance api publicly and so wanted
to retain the images support in Nova.


So the consensus seemed to be to leave the images support in, but to demote it
from core. So people who don't want it exclude the os-images extension.




Just as I write this I've realised that the servers api currently returns
links to the image used for the instance. And that won't be valid if the images
extension is not loaded. So we probably have some work to do there to support
that properly.


Regards,


Chris 








On Sat, Aug 3, 2013 at 6:54 AM, Joe Gordon joe.gord...@gmail.com wrote:
Hi All,

even though Glance, has been pulled out of Nova years ago, Nova still has a 
images API that proxies back to Glance.  Since Nova is in the process of 
creating a new, V3, API, we know have a chance to re-evaluate this API. 




* Do we still need this in Nova, is there any reason to not just use Glance 
directly?  I have vague concerns about making Glance API publicly accessible, 
but I am not sure what the underlying reason is


* If it is still needed in Nova today, can we remove it in the future and if so 
what is the timeline?


best,
Joe Gordon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Review request: Blurprint of API validation

2013-08-02 Thread Christopher Yeoh
On Sat, Aug 3, 2013 at 9:16 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote:



On Fri, Aug 2, 2013 at 5:19 PM, Russell Bryant rbry...@redhat.com wrote:
On 08/02/2013 05:13 PM, Doug Hellmann wrote:

 When we discussed this earlier, there was concern about moving to a

 completely new toolset for the new API in Havana because of other

 changes going on at the same time (something to do with extensions,

 IIRC). I agreed it made sense to stick with our current tools to avoid

 adding risk to the schedule. If that schedule has slipped into the next

 release, or if you feel there is time after all, then I would also

 prefer to go ahead with the general consensus reached at the Havana

 summit and use WSME.


The Nova v3 API schedule has slipped.  A huge amount of progress has

been made, but it's going to be marked experimental in Havana.  We're

going to wrap it up for Icehouse.  So, there's another release cycle

available to work on the v3 API infrastructure.


Sounds good.






I think the critical bit will be whether we can move to wsme with minimal to no
changes to the code for the extensions. Or at least a way that it can be done
in-place easily without breaking everything. E.g. some transition phase where
wsme can support the wsgi api temporarily.


Otherwise we're in for either a giant patch which will be hard to merge or a 
similar rather painful process to what we've had to do for the extension 
framework changes in Havana.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python overhead for rootwrap

2013-08-02 Thread Joe Gordon
On Fri, Aug 2, 2013 at 10:33 AM, Dan Smith d...@danplanet.com wrote:

  Any solution where you need to modify sudoers every time the code
  changes is painful, because there is only one sudo configuration on a
  machine and it's owned by root.

 Hmm? At least on ubuntu there is a default /etc/sudoers.d directory,
 where we could land per-service files like nova-compute.conf,
 nova-network.conf, etc. I don't think that's there by default on Fedora
 or RHEL, but adding the includedir to the base config works as expected.

  The end result was that the sudoers file were not maintained and
  everyone ran and tested with a convenient blanket-permission sudoers
  file.

 Last I checked, The nova rootwrap policy includes blanket approvals for
 things like chmod, which pretty much eliminates any sort of expectation
 of reasonable security without improvement by the operator (which I
 think is unrealistic).

 I'm not sure what the right answer is here. I'm a little afraid of a
 rootwrap daemon. However, nova-network choking on 50 instances seems to
 be obviously not an option...


I agree.  The good news is that neutron does not time out like nova-network
does here, although it makes many rootwrapped calls, so it will get a
performance boost from a faster rootwrap solution.

It sounds like rootwrap isn't going anywhere in Havana, and we can explore
faster and more secure solutions for Icehouse.  But there may be some short
term solutions to make nova (with nova-network) not choke on 50 instances.
I would like to be able to say that Havana, in its default config, won't choke
when trying to spawn a small number of instances.  Some possible solutions
are:

* Make rootwrap faster -- rootwrapped calls to iptables-save are still 3
to 4x slower than without rootwrap, but the python load time accounts for
less than half of that (see the rough timing sketch below).
* Finer grained locks, right now it looks like the iptables lock is what is
killing us, so we may be able to find a better way to use iptables-save and
restore.
* Reduce the number of rootwrapped calls when possible, I would be very
surprised if every single rootwrapped call is needed.
* See how neutron does it, it seems to work much better for them
* ???
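For the first point, a quick way to eyeball the per-call overhead on a
devstack-style install (paths assume the stock nova rootwrap config, which
already allows iptables-save):

    time sudo iptables-save > /dev/null
    time sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save > /dev/null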



 --Dan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev