Re: [openstack-dev] [Glance] Regarding Glance's behaviour when updating an image ...

2013-11-25 Thread Flavio Percoco

On 25/11/13 03:23 +, David koo wrote:

Hi All,

   Newbie stacker here ...

   I have a basic question regarding the intended behaviour of Glance's
image update API: what is the intended behaviour of Glance when updating
an already uploaded image file?

   The functional test indicates that the intended behaviour is to
disallow such updates:
   glance/tests/v2/test_images.py:test_image_lifecycle:210
   # Uploading duplicate data should be rejected with a 409
   ...

   When I configure Glance to use the local filesystem backend I do
indeed get a 409 conflict but when I configure Glance to use Swift as
the backend the operation succeeds and the original image file is
replaced.
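
   For reference, the behaviour can be reproduced roughly like this (an
illustrative sketch only - the endpoint, token and image id are placeholders;
the second upload is the one the functional test expects to be rejected):

    import requests

    GLANCE = "http://localhost:9292"          # placeholder endpoint
    TOKEN = "..."                             # a valid keystone token
    IMAGE_ID = "..."                          # image created via POST /v2/images

    headers = {"X-Auth-Token": TOKEN,
               "Content-Type": "application/octet-stream"}
    url = "%s/v2/images/%s/file" % (GLANCE, IMAGE_ID)

    first = requests.put(url, headers=headers, data=b"image-data")
    print(first.status_code)    # expect 204 - upload accepted

    second = requests.put(url, headers=headers, data=b"image-data")
    print(second.status_code)   # filesystem backend: 409; swift backend: the
                                # original data is silently replaced instead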

   On a related note, although I do get a 409 conflict when using the local
filesystem backend, the image is left in the "saving" state - I think
it shouldn't change the state of the image. There's a bug logged
regarding this behaviour (bug 1241379) and I'm working on the fix. But
in light of the above question perhaps I should file another bug
regarding the Swift storage backend?



As you mentioned, there seems to be a bug in the swift backend. Images
are immutable.

Thanks for raising this.

Cheers,
FF

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][Marconi][Oslo] Discoverable home document for APIs (Was: Re: [Nova][Glance] Support of v1 and v2 glance APIs in Nova)

2013-11-25 Thread Flavio Percoco

On 25/11/13 09:28 +1000, Jamie Lennox wrote:

So the way we have this in keystone, at least, is that querying GET / will
return all available API versions, and querying /v2.0, for example, returns a
similar result with just the v2 endpoint. So you can hard-pin a version
by using the versioned URL.

I spoke to somebody the other day about the discovery process in
services. The long term goal should be that the service catalog contains
unversioned endpoints and that all clients should do discovery. For
keystone the review has been underway for a while now:
https://review.openstack.org/#/c/38414/ - the basics of this should be
able to be moved into Oslo for other projects if required.


Did you guys create your own 'home document' language, or did you base
it on some existing format? Is it documented somewhere? IIRC, there's
a thread where part of this was discussed; it was related to Horizon.

I'm curious to know what you guys did and if you knew about
JSON-Home[0] when you started working on this.

We used json-home for Marconi v1 and we'd want the client to work in a
'follow your nose' way. Since I'd prefer OpenStack projects to use the
same language for this, I'm curious to know why - if so - you
created your own spec, what the benefits are, and whether it's documented
somewhere.
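
For context, a json-home document (per the draft in [0]) is roughly of
this shape - a purely illustrative sketch, not Marconi's or Keystone's
actual output:

    import json

    # resources are keyed by link relation; href-template/href-vars/hints
    # follow the json-home draft. The relation URI below is made up.
    home = {
        "resources": {
            "rel/message_collection": {
                "href-template": "/v1/queues/{queue_name}/messages",
                "href-vars": {"queue_name": "param/queue_name"},
                "hints": {"allow": ["GET", "POST"]},
            }
        }
    }
    print(json.dumps(home, indent=2))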

Cheers,
FF

[0] http://tools.ietf.org/html/draft-nottingham-json-home-02

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-25 Thread Sylvain Bauza
As said earlier, I would also love to join the team, picking up a few
blueprints or so.


By the way, I'm currently reviewing the scheduler code. Have you begun to
design the API queries, or do you need help with that?


-Sylvain


On 25/11/2013 08:24, Haefliger, Juerg wrote:


Hi Robert,

I see you have enough volunteers. You can put me on the backup list in 
case somebody drops out or you need additional bodies.


Regards

...Juerg

From: Boris Pavlovic [mailto:bpavlo...@mirantis.com]
Sent: Sunday, November 24, 2013 8:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for
a modest proposal for an external scheduler in our lifetime


Robert,

Btw,  I would like to be a volunteer too=)

Best regards,

Boris Pavlovic

On Sun, Nov 24, 2013 at 10:43 PM, Robert Collins
robe...@robertcollins.net wrote:


On 22 November 2013 23:55, Gary Kotton gkot...@vmware.com wrote:




 I'm looking for 4-5 folk who have:
  - modest Nova skills
  - time to follow a fairly mechanical (but careful and detailed work
 needed) plan to break the status quo around scheduler extraction

 I would be happy to take part. But prior to that I think that we need to
 iron out a number of issues:

Cool! Added your name to the list of volunteers, which brings us to 4,
the minimum I wanted before starting things happening.


 1. Will this be a new service that has an API, for example will Nova be

 able to register a host and provide the host statistics.

This will be an RPC api initially, because we know the performance
characteristics of the current RPC API, and doing anything different
to that is unnecessary risk. Once the new structure is:
* stable
* gated with unit and tempest tests
* with a straightforward and well documented migration path for deployers

Then adding a RESTful API could take place.


 2. How will the various components interact with the scheduler - same as
 today - that is RPC? Or a REST API? The latter is a real concern due to
 problems we have seen with the interactions of nova and other services

RPC initially. REST *only* once we've avoided second system syndrome.


 3. How will current developments fit into this model?

Code sync - take a forklift copy of the code, and apply patches to
both for the one cycle.


 All in all I think that it is a very good and healthy idea. I have a
 number of reservations - these are mainly regarding the implementation and
 the service definition.

 Basically I like the approach of just getting heads down and doing it, but
 prior to that I think that we just need to understand the scope and mainly
 define the interfaces and how they can be used/abused and consumed. It may be
 a very good topic to discuss at the up and coming scheduler meeting - this
 may be in the middle of the night for Robert. If so then maybe we can
 schedule another time.

Tuesdays at 1500 UTC - I'm in UTC+13 at the moment, so that's 0400
local. A little early for me :) I'll ping you on IRC about resolving
the concerns you raise, and you can proxy my answers to the sub group
meeting?


 Please note that this is scheduling and not orchestration. That is also
 something that we need to resolve.

Yup, sure is.

-Rob


--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Remove ratelimits from nova v3 api

2013-11-25 Thread Alex Xu

Hi, guys,

Looks like rate limiting is not really useful. The reason was already pointed
out by this patch:

https://review.openstack.org/#/c/34821/ - thanks, Joe, for pointing it out.

So the v3 API is a chance to clean this up. If no one objects, I will send a
patch

to get rid of the rate-limit code for the v3 API.

Thanks
Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2013-11-25 Thread Flavio Percoco

On 25/11/13 09:34 +0800, Zhongyue Luo wrote:

Just a thought, but couldn't the changes of a module update be calculated by
comparing the last commit dates of the source and target module?

For instance, if module A's update patch for Nova was uploaded on date XX then
we can filter out the changes from XX~present and print it out for the author
to paste in the commit message when running update.py for module A.

This way we might not need any changes to the openstack-modules.conf?
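
For illustration, the idea sketches out roughly like this (hypothetical
module path and date, run from an oslo-incubator checkout):

    import subprocess

    module = "openstack/common/log.py"   # hypothetical module path
    last_sync = "2013-10-01"             # date of the module's last sync into Nova

    # list the incubator commits touching the module since the last sync,
    # for pasting into the update.py commit message
    log = subprocess.check_output(
        ["git", "log", "--oneline", "--since", last_sync, "--", module])
    print(log.decode())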


Not sure that would work. The update date of module A in project X
doesn't mean the project is at its latest version. I could've updated
an older version because the latest would've broken project X.

Cheers,
FF



On Sun, Nov 24, 2013 at 12:54 AM, Doug Hellmann doug.hellm...@dreamhost.com
wrote:

   Thanks for the reminder, Sandy.

   https://bugs.launchpad.net/oslo/+bug/1254300


   On Sat, Nov 23, 2013 at 9:39 AM, Sandy Walsh sandy.wa...@rackspace.com
   wrote:

   Seeing this thread reminded me:

   We need support in the update script for entry points in oslo setup.cfg
   to make their way into the target project.

   So, if update is getting some love, please keep that in mind.
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Intel SSG/STO/DCST/CIT
880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai, China
+862161166500



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2013-11-25 Thread Flavio Percoco

On 23/11/13 11:54 -0500, Doug Hellmann wrote:

Thanks for the reminder, Sandy.

https://bugs.launchpad.net/oslo/+bug/1254300


On Sat, Nov 23, 2013 at 9:39 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:

   Seeing this thread reminded me:

   We need support in the update script for entry points in oslo setup.cfg to
   make their way into the target project.

   So, if update is getting some love, please keep that in mind.



Good point, +1

I created this blueprint[0] and assigned it to myself. I'll give some love
to update.py and get it done for I-2.

Cheers,
FF

[0] https://blueprints.launchpad.net/oslo/+spec/improve-update-script

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] tenant or project

2013-11-25 Thread Flavio Percoco

On 24/11/13 12:47 -0500, Doug Hellmann wrote:




On Sun, Nov 24, 2013 at 12:08 AM, Morgan Fainberg m...@metacloud.com wrote:

   In all honesty it doesn't matter which term we go with.  As long as we are
   consistent and define the meaning.  I think we can argue intuitive vs
   non-intuitive in this case into the ground.  I prefer project to tenant,
   but beyond being a bit of an overloaded term, I really don't think anyone
   will really notice one way or another as long as everything is using the
   same terminology.  We could call it grouping-of-openstack-things if we
   wanted to (though I might have to pull some hair out if we go to that
   terminology). 
  
   However, with all that in mind, we have made the choice to move toward

   project (horizon, keystone, OSC, keystoneclient) and have some momentum
   behind that push (plus newer projects already use the project
   nomenclature).   Making a change back to tenant might prove a worse UX than
   moving everything else in line (nova I think is the one real major hurdle
   to get converted over, and deprecation of keystone v2 API). 



FWIW, ceilometer also uses project in our API (although some of our docs use
the terms interchangeably).


And, FWIW, Marconi uses project as well.

FF

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unwedging the gate

2013-11-25 Thread Robert Collins
On 25 November 2013 22:23, Clint Byrum cl...@fewbar.com wrote:

 I do wonder if we would be able to commit enough resources to just run
 two copies of the gate in parallel each time and require both to pass.
 Doubling the odds* that we will catch an intermittent failure seems like
 something that might be worth doubling the compute resources used by
 the gate.

 *I suck at math. Probably isn't doubling the odds. Sounds
 good though. ;)

We already run the code paths that were breaking 8 or more times.
Hundreds of times in fact for some :(.

The odds of a broken path triggering after it gets through, assuming
each time we exercise it is equally likely to show it, are roughly
3/times-exercised-in-landing. E.g. if we run a code path 300 times and
it doesn't show up, then it's quite possible that it has a 1%
incidence rate.
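
To put numbers on that (simple independent-trials arithmetic, purely
illustrative):

    # chance that a bug with per-run failure rate p shows nothing in n clean runs
    p, n = 0.01, 300
    print((1 - p) ** n)   # ~0.049: a 1% bug has a ~5% chance of hiding in 300 runs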

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] adding jshint to qunit tests (was: Javascript development improvement)

2013-11-25 Thread Radomir Dopieralski
On 22/11/13 16:08, Imre Farkas wrote:
 There's a jslint fork called jshint which is able to run in the browser
 without any node.js dependency.
 
 I created a POC patch [1] a long time ago to demonstrate its capabilities.
 It's integrated with qunit and runs automatically with the horizon test
 suite.
 
 The patch also contains a .jshintrc file for the node.js package but you
 can remove it since it's not used by the qunit+jshint test at all.

This is excellent, just as I wanted to raise the issue of inconsistent
style in js in Horizon. Running jshint as part of the qunit tests is a
great solution that I didn't even think about. I think we should
definitely use that.

For people who don't have everyday contact with JavaScript, jshint is
not just a style checker similar to pep8 or flake8.  JavaScript is a
language that wasn't fortunate enough to have its syntax designed and
tested for a long time by a crowd of very intelligent and careful
people, like Python's. It's full of traps, things that look very
innocent on the surface and seem to work without any errors, but turn
out to be horribly wrong -- and all because of a missing semicolon or an
extra comma or even a newline in a wrong spot. Jshint catches at least
the most common mistakes like that, and I honestly can't imagine writing
JavaScript without having it enabled in my editor. We definitely want to
use it sooner or later, and preferably sooner.

Whether we need node.js for it or not is a technical issue -- as Imre
demonstrated here, this one can be solved without node.js, so there is
really nothing stopping us from adopting it.
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] future fate of nova-network?

2013-11-25 Thread Daniel P. Berrange
On Sun, Nov 24, 2013 at 11:10:23AM +1300, Robert Collins wrote:
 On 23 November 2013 05:32, Daniel P. Berrange berra...@redhat.com wrote:
  On Fri, Nov 22, 2013 at 11:24:18AM -0500, Russell Bryant wrote:
  A good example is the current discussion around a new scheduling
  service.  There have been lots of big ideas around this.  Robert Collins
  just started a thread about a proposal to start this project but with a
  very strict scope of being able to replace nova-scheduler, and *nothing*
  more until that's completely done.  I like that approach quite a bit.
 
  I'd suggest something even stronger. If we want to split out code into
  a new project, we should always follow the approach used for cinder.
  ie the existing fully functional code should be pulled out as is, and
  work then progress from there. That ensures we'd always have feature
  parity from the very start. Yes, you might have to then do a large
  amount of refactoring to get to where you want to be, but IMHO that's
  preferrable to starting something from scratch and forgetting to cover
  existing use cases.
 
 That is precisely what I'm suggesting. Forklift the code to another
 place and put in place in nova enough glue to talk to it. Complete
 parity, no surprises on performance or anything else. Then start
 evolving.
 
 Anything else is just risk for no benefit.

Great, that sounds like the exact right way to do things.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] who wants to own docker bug triage?

2013-11-25 Thread Thierry Carrez
Matt Riedemann wrote:
 On Saturday, November 23, 2013 3:28:28 PM, Robert Collins wrote:
 Cool; also, if it's not, we should add that as an official tag so that
 it type-completes in LP.

 Good idea.  I don't know how to do that though.  Any guides I can follow
 to make that happen?

I added it. FTR if you're a nova-driver you can visit:
https://bugs.launchpad.net/nova/+manage-official-tags
to add official tags for Nova.

but then you also should update in parallel:
https://wiki.openstack.org/wiki/Bug_Tags

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] future fate of nova-network?

2013-11-25 Thread laserjetyang
 I've moved my cloud to neutron already, but while it provides many advanced
 features it still really falls down on providing simple solutions for
 simple use cases
That is my feeling as well.
I am not able to easily get neutron running smoothly; there are a lot of
tricks. Compared to nova-network, it is getting more and more advanced
features, which is good, but it looks to me like the code is heavier and
even harder to get stable.


On Sat, Nov 23, 2013 at 12:17 AM, Jonathan Proulx j...@jonproulx.com wrote:

 To add to the screams of others, removing features from nova-network to
 achieve parity with neutron is a non-starter, and it rather scares me
 to hear it suggested.

 I do try not to rant in public, especially about things I'm not
 competent to really help fix, but I can't really contain this one any
 longer:

 rant
 As an operator I've moved my cloud to neutron already, but while it
 provides many advanced features it still really falls down on
 providing simple solutions for simple use cases.  Most operators I've
 talked to informally hate it for that and don't want to go near it and
 for new users, even those with advanced skill sets, neutron causes by
 far the most cursing and rage quits I've seen (again just my
 subjective observation) on IRC, Twitter, and the mailing lists.

 Providing feature parity and easy cutover *should have been* priority
 1 when quantum split out of nova, as it was for cinder (which was a
 delightful and completely unnoticeable transition).

 We need feature parity and complexity parity with nova-network for the
 use cases it covers.  The failure to do so or even have a reasonable
 plan to do so is currently the worst thing about openstack.
 /rant

 I do appreciate the work being done on advanced networking features in
 neutron, I'm even using some of them, just someone please bring focus
 back on the basics.

 -Jon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Today's meeting minutes

2013-11-25 Thread Sylvain Bauza

Hi,

You can find our today's meeting minutes here :
http://eavesdrop.openstack.org/meetings/climate/2013/climate.2013-11-25-09.59.html

Thanks,
-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] python-novaclient: uses deprecated keyring.backend.$keyring

2013-11-25 Thread Thomas Goirand
On 11/25/2013 03:57 AM, Morgan Fainberg wrote:
 Hi Thomas,
 
 How pressing is this issue?

It seems it's blocking some Python package transitions from Sid to
Jessie, and impacting the Gnome guys, so it's quite annoying for Debian.
So much so that some wanted to NMU the novaclient package. At least, that's
my understanding. If a solution could be found in the coming days/weeks,
that'd be nice!

 I know there is work being done to unify
 token/auth implementation across the clients.  I want to have an idea of
 the heat here so we can look at addressing this directly in novaclient
 if it can't wait until the unification work to come down the line. 

IMO, it'd be best to act now, and go back to it later on when the
unification thingy is done.

Cheers,

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FreeBSD hypervisor (bhyve) driver

2013-11-25 Thread Rafał Jaworowski
On Fri, Nov 22, 2013 at 4:46 PM, Russell Bryant rbry...@redhat.com wrote:
 On 11/22/2013 10:43 AM, Rafał Jaworowski wrote:
 Russell,
 First, thank you for the whiteboard input regarding the blueprint for
 FreeBSD hypervisor nova driver:
 https://blueprints.launchpad.net/nova/+spec/freebsd-compute-node

 We were considering libvirt support for bhyve hypervisor as well, only
 wouldn't want to do this as the first approach for FreeBSD+OpenStack
 integration. We'd rather bring bhyve bindings for libvirt later as
 another integration option.

 For FreeBSD host support a native hypervisor driver is important and
 desired long-term and we would like to have it anyways. Among things
 to consider are the following:
 - libvirt package is additional (non-OpenStack), external dependency
 (maintained in the 'ports' collection, not included in base system),
 while native API (via libvmmapi.so library) is integral part of the
 base system.
 - libvirt license is LGPL, which might be an important aspect for some users.

 That's perfectly fine if you want to go that route as a first step.
 However, that doesn't mean it's appropriate for merging into Nova.
 Unless there are strong technical justifications for why this approach
 should be taken, I would probably turn down this driver until you were
 able to go the libvirt route.

So just to clarify: the native driver for another hypervisor (bhyve)
would not be accepted into Nova even if it met the testing coverage
criteria? As I said, the libvirt route is an option we are considering, but we
would like to have the possibility of a native FreeBSD API integration
as well, similar to what you can have for the non-libvirt hypervisor APIs
already available in Nova.

Rafal

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Remove ratelimits from nova v3 api

2013-11-25 Thread David Hill
Hi Alex,

   Is removing the rate-limiting code part of the roadmap?  I fear that if it's
removed now, it may have to be re-implemented in the future as cloud
deployments become more mainstream ... Without rate limiting, some malicious
users could DoS OpenStack!

Dave



From: Alex Xu [x...@linux.vnet.ibm.com]
Sent: November 25, 2013 03:53
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Nova] Remove ratelimits from nova v3 api

Hi, guys,

Looks like rate limiting is not really useful. The reason was already pointed out
by this patch:
https://review.openstack.org/#/c/34821/ - thanks, Joe, for pointing it out.

So the v3 API is a chance to clean this up. If no one objects, I will send a
patch
to get rid of the rate-limit code for the v3 API.

Thanks
Alex

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unwedging the gate

2013-11-25 Thread Davanum Srinivas
Many thanks to everyone who helped with the many fixes. Kudos to
Joe/Clark for spear heading the effort!

-- dims

On Mon, Nov 25, 2013 at 12:00 AM, Joe Gordon joe.gord...@gmail.com wrote:
 Hi All,

 TL;DR Last week the gate got wedged on nondeterministic failures. Unwedging
 the gate required drastic actions to fix bugs.

 Starting on November 15th, gate jobs have been getting progressively less
 stable with not enough attention given to fixing the issues, until we got to
 the point where the gate was almost fully wedged.  No one bug caused this,
 it was a collection of bugs that got us here. The gate protects us from code
 that fails 100% of the time, but if a patch fails 10% of the time it can
 slip through.  Add a few of these bugs together and we get the gate to a
 point where the gate is fully wedged and fixing it without circumventing the
 gate (something we never want to do) is very hard.  It took just 2 new
 nondeterministic bugs to take us from a gate that mostly worked, to a gate
 that was almost fully wedged.  Last week we found out Jeremy Stanley (fungi)
 was right when he said, nondeterministic failures breed more
 nondeterministic failures, because people are so used to having to reverify
 their patches to get them to merge that they are doing so even when it's
 their patch which is introducing a nondeterministic bug.

 Side note: This is not the first time we wedge the gate, the first time was
 around September 26th, right when we were cutting Havana release candidates.
 In response we wrote elastic-recheck
 (http://status.openstack.org/elastic-recheck/) to better track what bugs we
 were seeing.

 Gate stability according to Graphite: http://paste.openstack.org/show/53765/
 (they are huge because they encode entire queries, so including as a
 pastebin).

 After sending out an email to ask for help fixing the top known gate bugs
 (http://lists.openstack.org/pipermail/openstack-dev/2013-November/019826.html),
 we had a few possible fixes. But with the gate wedged, the merge queue was
 145 patches  long and could take days to be processed. In the worst case,
 none of the patches merging, it would take about 1 hour per patch. So on
 November 20th we asked for a freeze on any non-critical bug fixes (
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/019941.html
 ), and kicked everything out of the merge queue and put our possible bug
 fixes at the front. Even with these drastic measures it still took 26 hours
 to finally unwedge the gate. In 26 hours we got the check queue failure rate
 (always higher than the gate failure rate) down from around 87% failure to
 below 10% failure. And we still have many more bugs to track down and fix in
 order to improve gate stability.


 8 Major bug fixes later, we have the gate back to a reasonable failure rate.
 But how did things get so bad? I'm glad you asked, here is a blow by blow
 account.

 The gate has not been completely stable for a very long time, and it only
 took two new bugs to wedge the gate. Starting with the list of bugs we
 identified via elastic-recheck, we fixed 4 bugs that have been in the gate
 for a few weeks already.


  https://bugs.launchpad.net/bugs/1224001 test_network_basic_ops fails
 waiting for network to become available

 https://review.openstack.org/57290 was the fix which depended on
 https://review.openstack.org/53188 and https://review.openstack.org/57475.

 This fixed a race condition where the IP address from DHCP was not received
 by the VM at the right time. Minimize polling on the agent is now defaulted
 to True, which should reduce the time needed for configuring an interface on
 br-int consistently.

 https://bugs.launchpad.net/bugs/1252514 Swift returning errors when setup
 using devstack

 Fix https://review.openstack.org/#/c/57373/

 There were a few swift related problems that were sorted out as well. Most
 had to do with tuning swift properly for its use as a glance backend in the
 gate, ensuring that timeout values were appropriate for the devstack test
 slaves (in

 resource constrained environments, the swift default timeouts could be
 tripped frequently (logs showed the request would have finished successfully
 given enough time)). Swift also had a race-condition in how it constructed
 its sqlite3

 files for containers and accounts, where it was not retrying operations when
 the database was locked.

 https://bugs.launchpad.net/swift/+bug/1243973 Simultaneous PUT requests for
 the same account...

 Fix https://review.openstack.org/#/c/57019/

 This was not on our original list of bugs, but while in bug fix mode, we got
 this one fixed as well

 https://bugs.launchpad.net/bugs/1251784 nova+neutron scheduling error:
 Connection to neutron failed: Maximum attempts reached

 Fix https://review.openstack.org/#/c/57509/

 Uncovered on mailing list
 (http://lists.openstack.org/pipermail/openstack-dev/2013-November/019906.html)

 Nova had a very old version of oslo's local.py which is used for 

[openstack-dev] [Tempest] Drop python 2.6 support

2013-11-25 Thread Zhi Kun Liu
Hi all,

I saw that Tempest will drop python 2.6 support at the design summit:
https://etherpad.openstack.org/p/icehouse-summit-qa-parallel

Drop tempest python 2.6 support:
- Remove all nose hacks in the code
- Delete nose, use unittest2 with testr/testtools and everything *should* just
  work (tm)


Does that mean Tempest could not run on python 2.6 in the future?

-- 
Regards,
Zhi Kun Liu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] future fate of nova-network?

2013-11-25 Thread Sean Dague
On 11/23/2013 10:44 AM, Gary Kotton wrote:
 
 
 On 11/22/13 2:47 PM, Daniel P. Berrange berra...@redhat.com wrote:
 
 On Fri, Nov 22, 2013 at 11:25:51AM +, John Garbutt wrote:
 On Fri, Nov 15, 2013 at 2:21 PM, Russell Bryant rbry...@redhat.com
 wrote:
 In particular, has there been a decision made about whether it will
 definitely be deprecated in some (as yet unspecified) future
 release, or
 whether it will continue to be supported for the foreseeable
 future?

 We want to deprecate it.  There are some things blocking moving
 forward
 with this.  In short:

 1) Feature parity (primarily something that satisfies performance
 and HA
 requirements addressed by nova-network in multi-host mode)

 2) Testing and quality parity.  The status of Neutron testing in the
 gate is far inferior to the testing done against nova-network.

 I'm personally more worried about #2 than #1 at this point.

 A major issue is that very few people actually stepped up and
 agreed to
 help with #2 at the summit [2].  Only one person signed up to work
 on
 tempest issues.  Nobody signed up to help with grenade.  If this
 doesn't
 happen, nova-network can't be deprecated, IMO.

 If significant progress isn't made ASAP this cycle, and ideally by
 mid-cycle so we can change directions if necessary, then we'll have
 to
 discuss what next step to take.  That may include un-freezing
 nova-network so that various people holding on to enhancements to
 nova-network can start submitting them back.  It's a last resort,
 but I
 consider it on the table.

 Another approach to help with (1) is in Icehouse we remove the
 features from nova-network that neutron does not implement. We have
 warned about deprecation for a good few releases, so its almost OK.

 We deprecated it on the basis that users would be able to do a upgrade
 to new release providing something that was feature equivalent.

 We didn't deprecate it on the basis that we were going to remove the
 feature and provide no upgrade path, leaving users screwed.

 So I don't consider removing features from nova-network to be an
 acceptable approach, until they exist in Neutron or something else
 that users can upgrade their existing deployments to.
 
 As far as I see it the only thing that is missing for parity today is the
 HA for the floating IP's. A partial solution was added and this was
 dropped due to disagreement in the Neutron community. What I think that a
 lot of people miss is that the HA floating IP support in Nova is not
 really scalable and that is where the Neutron solution fills a void.

So this is one of the underlying issues. Even if the current Nova
solution has issues... it is the model used by lots of deployments. So
Neutron should mirror that model semantically exactly so that a
transition can take place.

After that, it can be made better. But we do keep running into places
where *better* is chosen over compatible, which is an underlying reason
for the continued split between nova-network and neutron.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] EC2 Filter

2013-11-25 Thread Sebastian Porombka
Hi Folks.

I stumbled over the lack of EC2 API filter support in OpenStack and
justinsb's (and the other people I forgot to mention here) attempt [1] to
implement this against Diablo.

I'm highly interested in this feature and would like (to try) to implement
this feature against the latest base. My problem is that I'm unable to find
information about the rejection of this set of patches in 2011.

Can anybody help me?

Greetings
  Sebastian

[1] https://code.launchpad.net/~justin-fathomdb/nova/ec2-filters
--
Sebastian Porombka, M.Sc.
Zentrum für Informations- und Medientechnologien (IMT)
Universität Paderborn

E-Mail: porom...@uni-paderborn.de
Tel.: 05251/60-5999
Fax: 05251/60-48-5999
Raum: N5.314 


Q: Why is this email five sentences or less?
A: http://five.sentenc.es

Please consider the environment before printing this email.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-dev][Neutron] Handling of ovs command errors

2013-11-25 Thread Salvatore Orlando
Hi,

I've been recently debugging some issues I've had with the OVS agent, and I
found out that in many  cases (possibly every case) the code just logs
errors from ovs-vsctl and ovs-ofctl without taking any action in the
control flow.

For instance, the routine which should do the wiring for a port, port_bound
[1], does not react in any way if it fails to configure the local vlan,
which I guess means the port would not be able to send/receive any data.

I'm pretty sure there's a good reason for this which I'm missing at the
moment. I am asking because I see a pretty large number of ALARM_CLOCK
errors returned by OVS commands in gate logs (see bug [2]), and I'm not
sure whether it's ok to handle them as the OVS agent is doing nowadays.

Regards,
Salvatore

[1]
https://github.com/openstack/neutron/blob/master/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py#L599
[2] https://bugs.launchpad.net/neutron/+bug/1254520
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FreeBSD hypervisor (bhyve) driver

2013-11-25 Thread Dan Smith
 So just to clarify: the native driver for another hypervisor (bhyve)
 would not be accepted into Nova even if it met testing coverage
 criteria? As I said the libvirt route is an option we consider, but we
 would like to have the possibility of a native FreeBSD api integration
 as well, similar to what you can have for non-libvirt hypervisor apis
 available already in Nova.

I'd _really_ prefer to see a libvirt-based approach over a native one.
In the presence of one, I would hope we would not need to take on the
maintenance burden of a native driver. In the absence, I'd want to know why.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FreeBSD hypervisor (bhyve) driver

2013-11-25 Thread Daniel P. Berrange
On Fri, Nov 22, 2013 at 10:46:19AM -0500, Russell Bryant wrote:
 On 11/22/2013 10:43 AM, Rafał Jaworowski wrote:
  Russell,
  First, thank you for the whiteboard input regarding the blueprint for
  FreeBSD hypervisor nova driver:
  https://blueprints.launchpad.net/nova/+spec/freebsd-compute-node
  
  We were considering libvirt support for bhyve hypervisor as well, only
  wouldn't want to do this as the first approach for FreeBSD+OpenStack
  integration. We'd rather bring bhyve bindings for libvirt later as
  another integration option.
  
  For FreeBSD host support a native hypervisor driver is important and
  desired long-term and we would like to have it anyways. Among things
  to consider are the following:
  - libvirt package is additional (non-OpenStack), external dependency
  (maintained in the 'ports' collection, not included in base system),
  while native API (via libvmmapi.so library) is integral part of the
  base system.
  - libvirt license is LGPL, which might be an important aspect for some 
  users.
 
 That's perfectly fine if you want to go that route as a first step.
 However, that doesn't mean it's appropriate for merging into Nova.
 Unless there are strong technical justifications for why this approach
 should be taken, I would probably turn down this driver until you were
 able to go the libvirt route.

The idea of a FreeBSD bhyve driver for libvirt has been mentioned
a few times. We've already got a FreeBSD port of libvirt being
actively maintained to support QEMU (and possibly Xen, not 100% sure
on that one), and we'd be more than happy to see further contributions
such as a bhyve driver.

I am of course biased, as libvirt project maintainer, but I do agree
that supporting bhyve via libvirt would make sense, since it opens up
opportunities beyond just OpenStack. There are a bunch of applications
built on libvirt that could be used to manage bhyve, and a fair few
applications which have plugins using libvirt.

Taking on maintenance work for a new OpenStack driver is a non-trivial amount
of work in itself. If the burden for OpenStack maintainers can be reduced
by pushing work out to, or relying on support from, libvirt, that makes
sense from OpenStack/Nova's POV.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unwedging the gate

2013-11-25 Thread Monty Taylor


On 11/25/2013 04:23 AM, Clint Byrum wrote:
 Excerpts from Joe Gordon's message of 2013-11-24 21:00:58 -0800:
 Hi All,

 TL;DR Last week the gate got wedged on nondeterministic failures. Unwedging
 the gate required drastic actions to fix bugs.


 snip
 
 (great write-up, thank you for the details, and thank you for fixing
 it!)
 

 Now that we have the gate back into working order, we are working on the
 next steps to prevent this from happening again.  The two most immediate
 changes are:

- Doing a better job of triaging gate bugs  (

 http://lists.openstack.org/pipermail/openstack-dev/2013-November/020048.html
 ).


   - In the next few days we will remove 'reverify no bug' (although you
   will still be able to run 'reverify bug x').

 
 I am curious, why not also disable 'recheck no bug'?

recheck no bug still has a host of valid use cases. Often times I use it
when I upload a patch, it fails because of a thing somewhere else, we
fix that, and I need to recheck the patch because it should work now.

It's also not nearly as dangerous as reverify no bug.

 I see this as a failure of bug triage. A bug that has more than 1
 recheck/reverify attached to it is worth a developer's time. The data
 gathered through so many test runs is invaluable when chasing races like
 the ones that cause these intermittent failures. If every core dev of
 every project spent 10 working minutes every day looking at the rechecks
 page to see if there is an untriaged recheck there, or just triaging bugs
 in general, I suspect we'd fix these a lot quicker.
 
 I do wonder if we would be able to commit enough resources to just run
 two copies of the gate in parallel each time and require both to pass.
 Doubling the odds* that we will catch an intermittent failure seems like
 something that might be worth doubling the compute resources used by
 the gate.

Funny story- there is a patch coming to do just that.

 *I suck at math. Probably isn't doubling the odds. Sounds
 good though. ;)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][Neutron] Handling of ovs command errors

2013-11-25 Thread Kyle Mestery (kmestery)
On Nov 25, 2013, at 8:28 AM, Salvatore Orlando sorla...@nicira.com wrote:
 
 Hi,
 
 I've been recently debugging some issues I've had with the OVS agent, and I 
 found out that in many  cases (possibly every case) the code just logs errors 
 from ovs-vsctl and ovs-ofctl without taking any action in the control flow.
 
 For instance, the routine which should do the wiring for a port, port_bound 
 [1], does not react in any way if it fails to configure the local vlan, which 
 I guess means the port would not be able to send/receive any data.
 
 I'm pretty sure there's a good reason for this which I'm missing at the 
 moment. I am asking because I see a pretty large number of ALARM_CLOCK errors 
 returned by OVS commands in gate logs (see bug [2]), and I'm not sure whether 
 it's ok to handle them as the OVS agent is doing nowadays.
 
Thanks for bringing this up Salvatore. It looks like the underlying run_vsctl
[1] provides an ability to raise exceptions on errors, but this is not used by
most of the callers of run_vsctl. Do you think we should be returning the
exceptions back up the stack for callers to handle? I think that may be a good
first step.
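
As a sketch of that first step (names approximate and the exception type is
assumed - illustrative only, not a proposed patch):

    # Have ovs_lib raise on failure via check_error=True and let port_bound()
    # react instead of only logging the error.
    try:
        self.int_br.run_vsctl(["set", "Port", port.port_name,
                               "tag=%s" % lvm.vlan], check_error=True)
    except Exception:
        LOG.exception(_("Failed to set local vlan for port %s"), port.port_name)
        self.port_dead(port)   # or resync; the right reaction is the open question
        return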

Thanks,
Kyle

[1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ovs_lib.py#L52

 Regards,
 Salvatore
 
 [1] 
 https://github.com/openstack/neutron/blob/master/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py#L599
 [2] https://bugs.launchpad.net/neutron/+bug/1254520
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Icehouse mid-cycle meetup

2013-11-25 Thread Russell Bryant
Greetings,

Other groups have started doing mid-cycle meetups with success.  I've
received significant interest in having one for Nova.  I'm now excited
to announce some details.

We will be holding a mid-cycle meetup for the compute program from
February 10-12, 2014, in Orem, UT.  Huge thanks to Bluehost for hosting us!

Details are being posted to the event wiki page [1].  If you plan to
attend, please register.  Hotel recommendations with booking links will
be posted soon.

Please let me know if you have any questions.

Thanks,

[1] https://wiki.openstack.org/wiki/Nova/IcehouseCycleMeetup
-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] remote debugging

2013-11-25 Thread Tracy Jones
Hi Folks - I am trying to add a patch to enable remote debugging in the nova
services.  I can make this work very simply, but it requires a change to
monkey patching - i.e.

eventlet.monkey_patch(os=False, select=True, socket=True, thread=False,
  time=True, psycopg=True)

I’m working with some folks from the debugger vendor (PyCharm) on why this is
needed.   However - I’ve been using it with nova-compute for a month or so and
do not see any issues with changing the monkey-patching.  Since this is done
only when someone wants to use the debugger - is making this change so bad?
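
For reference, the opt-in looks roughly like this (a simplified sketch - the
option names are illustrative and pydevd is the PyCharm debug backend assumed
here, not necessarily the final shape of the patch):

    from oslo.config import cfg
    import eventlet

    CONF = cfg.CONF   # assumes remote_debug_host/port options are registered elsewhere

    if CONF.remote_debug_host:
        # thread=False leaves real threads unpatched so the debugger can use them
        eventlet.monkey_patch(os=False, select=True, socket=True, thread=False,
                              time=True, psycopg=True)
        import pydevd
        pydevd.settrace(CONF.remote_debug_host, port=CONF.remote_debug_port,
                        stdoutToServer=False, stderrToServer=False, suspend=False)
    else:
        eventlet.monkey_patch()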

 https://review.openstack.org/56287
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][Ceilometer] get IPMI data for ceilometer

2013-11-25 Thread wanghaomeng


Hello all:


Basically, I understand the solution is: Ironic will implement an IPMI
driver (an extensible framework allowing more drivers) to collect hardware sensor
data (CPU temperature, fan speed, volts, etc.) via the IPMI protocol from the
hardware server node, and emit AMQP messages to the Ceilometer collector.
Ceilometer has the framework to handle the valid sample messages and save them to
the database for retrieval by consumers.


Now, what do you think - should we clearly define the interface and data model
specifications between Ironic and Ceilometer to enable IPMI data collection, so
that our two teams can start coding together?


And I still have some concerns about the interface and data model, as below; the
spec needs to be discussed and finalized:


1. What are the mandatory attributes of Ceilometer sample data, such as
instance_id/tenant_id/user_id/resource_id? If they are not optional, where is
this data populated - on the Ironic or the Ceilometer side?

  name/type/unit/volume/timestamp - basic sample properties, can be populated
from the Ironic side as the data source
  user_id/project_id/resource_id - does Ironic or Ceilometer populate these fields?

  resource_metadata - this is for Ceilometer metadata queries; Ironic knows
nothing about such resource metadata, I think
  source - can we hard-code 'hardware' as a source identifier?


2. I am not sure whether Ceilometer only accepts signed messages; if that is the
case, how does Ironic obtain the message trust for Ceilometer and send a valid
message that the Ceilometer collector will accept?


3. What is the Ceilometer sample data structure, and what is the minimum data item
set for the IPMI message to be emitted to the collector?
  name/type/unit/volume/timestamp/source - is this the minimum data item set?

4. Should the detailed data model be defined for our IPMI data now? What is the
scope of the first version - how many IPMI data types should we support? Here is
an IPMI data sample list; I think we can support these as a minimum set:
  Temperature - System Temp/CPU Temp
  FAN Speed in rpm - FAN 1/2/3/4/A
  Volts - Vcore/3.3VCC/12V/VDIMM/5VCC/-12V/VBAT/VSB/AVCC


5. More specs - such as naming conventions, common constant reference
definitions ...
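
To make the discussion concrete, a sample emitted by Ironic might look roughly
like this (values are illustrative; which side fills
user_id/project_id/resource_id is exactly the open question above):

    sample = {
        "name": "hardware.ipmi.fan_speed",    # made-up counter name
        "type": "gauge",
        "unit": "rpm",
        "volume": 5400,
        "timestamp": "2013-11-25T10:00:00Z",
        "user_id": None,                      # Ironic or Ceilometer to populate?
        "project_id": None,
        "resource_id": "<ironic-node-uuid>",
        "resource_metadata": {},              # Ironic has nothing meaningful here
        "source": "hardware",                 # proposed hard-coded identifier
    }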


Just a draft - correct me if my understanding is wrong and add any missing
aspects; we can discuss the interface and data model clearly over several
rounds :)




--
Haomeng
Thanks:)





At 2013-11-21 16:08:00,Ladislav Smola lsm...@redhat.com wrote:

Responses inline.

On 11/20/2013 07:14 PM, Devananda van der Veen wrote:

Responses inline.


On Wed, Nov 20, 2013 at 2:19 AM, Ladislav Smola lsm...@redhat.com wrote:
Ok, I'll try to summarize what will be done in the near future for Undercloud 
monitoring.

1. There will be a Central agent running on the same host (hosts, once the central
agent horizontal scaling is finished) as Ironic



Ironic is meant to be run with 1 conductor service. By i-2 milestone we should 
be able to do this, and running at least 2 conductors will be recommended. When 
will Ceilometer be able to run with multiple agents?

Here it is described and tracked: 
https://blueprints.launchpad.net/ceilometer/+spec/central-agent-improvement




On a side note, it is a bit confusing to call something a central agent if it 
is meant to be horizontally scaled. The ironic-conductor service has been 
designed to scale out in a similar way to nova-conductor; that is, there may be 
many of them in an AZ. I'm not sure that there is a need for Ceilometer's agent 
to scale in exactly a 1:1 relationship with ironic-conductor?

Yeah we have already talked about that. Maybe some renaming will be in place 
later. :-) I don't think it has to be 1:1 mapping. There was only requirement 
to have Hardware agent only on hosts with ironic-conductor, so it has access 
to management network, right?


 
2. It will have SNMP pollster, SNMP pollster will be able to get list of hosts 
and their IPs from Nova (last time I
checked it was in Nova) so it can poll them for stats. Hosts to poll can be 
also defined statically in config file.



Assuming all the undercloud images have an SNMP daemon baked in, which they 
should, then this is fine. And yes, Nova can give you the IP addresses for 
instances provisioned via Ironic.
 

Yes.


3. It will have IPMI pollster, that will poll Ironic API, getting list of hosts 
and a fixed set of stats (basically everything
that we can get :-))



No -- I thought we just agreed that Ironic will not expose an API for IPMI 
data. You can poll Nova to get a list of instances (that are on bare metal) and 
you can poll Ironic to get a list of nodes (either nodes that have an instance 
associated, or nodes that are unprovisioned) but this will only give you basic 
information about the node (such as the MAC addresses of its network ports, and 
whether it is on/off, etc).

Ok sorry I have misunderstood the:
If there is a fixed set of information (eg, temp, fan speed, etc) that 

Re: [openstack-dev] [nova] remote debugging

2013-11-25 Thread Russell Bryant
On 11/25/2013 10:28 AM, Tracy Jones wrote:
 Hi Folks - i am trying to add a patch to enable remote debugging in the
 nova services.  I can make this work very simply, but it requires a
 change to monkey_patching - i.e.
 
 eventlet.monkey_patch(os=False, select=True, socket=True, thread=False,
   time=True, psycopg=True)
 
 I’m working with some folks from the debugger vendor (pycharm) on why
 this is needed.   However - i’ve been using it with nova-compute for a
 month or so and do not see any issues which changing the
 monkey-patching.  Since this is done only when someone wants to use the
 debugger - is making this change so bad?
 
  https://review.openstack.org/56287

Last I looked at the review, it wasn't known that thread=False was
specifically what was needed, IIRC.  That's good progress and is a bit
less surprising than the change before.

I suppose if the options that enable this come with a giant warning,
it'd be OK.  Something like:

  WARNING: Using this option changes how Nova uses the eventlet
  library to support async IO. This could result in failures that do
  not occur under normal operation. Use at your own risk.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Adding notifications to Horizon

2013-11-25 Thread Florent Flament
Hi,

I am interested in adding AMQP notifications to the Horizon dashboard,
as described in the following blueprint:
https://blueprints.launchpad.net/horizon/+spec/horizon-notifications

There are currently several implementations in Openstack. While
Nova and Cinder define `notify_about_*` methods that are called
whenever a notification has to be sent, Keystone uses decorators,
which send appropriate notifications when decorated methods are
called.

I fed the blueprint's whiteboard with an implementation proposal,
based on the Nova and Cinder implementations. I would be interested in
having your opinion about which method would fit best, and whether
these notifications make sense at all.
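
For comparison, the Keystone-style decorator approach looks roughly like this
(a sketch with made-up helper names, not Keystone's actual code):

    import functools

    def _notify(event_type, payload):
        # stand-in for the actual AMQP notifier call
        print(event_type, payload)

    def notify_event(event_type):
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                result = fn(*args, **kwargs)
                # send the notification once the operation succeeds
                _notify(event_type, {"result": repr(result)})
                return result
            return wrapper
        return decorator

    @notify_event("identity.project.created")
    def create_project(project):
        return project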

Cheers,
Florent Flament

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance] Support of v1 and v2 glance APIs in Nova

2013-11-25 Thread Russell Bryant
On 11/22/2013 09:07 PM, Matt Riedemann wrote:
 
 
 On Friday, November 22, 2013 5:52:17 PM, Russell Bryant wrote:
 On 11/22/2013 06:01 PM, Christopher Yeoh wrote:

 On Sat, Nov 23, 2013 at 8:33 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com mailto:mrie...@linux.vnet.ibm.com wrote:

  ...
  21:51:42 dolphm i just hope that the version discovery mechanism
  is smart enough to realize when it's handed a versioned endpoint,
  and happily run with that
  ...
  21:52:00 dolphm (by calling that endpoint and doing proper
 discovery)
  ...
  21:52:24 russellb dolphm: yeah, need to handle that gracefully
 ...



  Just one point here (and perhaps I'm misunderstanding what was meant),
  but if the catalog points to a versioned endpoint shouldn't we just use
  that version rather than trying to discover what other versions may be
  available? Although we'll have cases of it just being set to a versioned
  endpoint because that's how it has been done in the past, I think we
  should be making the assumption that if we're pointed to a specific
  version, that is the one we should be using.

 Agreed, and I think that's what Dolph and I meant.

 
 That also covers the override case that was expressed a few different
 times in this thread, giving the admin the ability to pin his
 environment to the version he knows and trusts during, for example,
 upgrades, and then slowly transitioning to a newer API.  The nice thing
 with that approach is it should keep config options with hard-coded
 versions out of nova.conf which is what was being proposed in the glance
 and cinder v2 blueprint patches.

There may still be value in the config options.  The service catalog is
also exposed to end users, so they may not appreciate having it tweaked
to add and remove a version while working through an upgrade.

If we updated services to always use the internalURL from the service
catalog, then you could only mess with this version pinning on the
internalURL and not the publicURL.
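
A rough sketch of the pinning behaviour being discussed (illustrative only,
with a naive version check):

    import re
    import requests

    def pick_endpoint(catalog_url):
        # a versioned endpoint in the catalog is used as-is (pinned)
        if re.search(r"/v\d+(\.\d+)?/?$", catalog_url):
            return catalog_url
        # otherwise do discovery against the unversioned root (keystone-style GET /)
        versions = requests.get(catalog_url).json()
        return versions   # choosing among advertised versions is the policy question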

It would be nice if we had a more concrete definition (and a matching
implementation) of when each of these fields from the service catalog is
used.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][Oslo] Future of Key Distribution Server, Trusted Messaging

2013-11-25 Thread John Wood
Hello folks,

FWIW, I've created a wiki page here aimed at easing the code transition to 
barbican for the KDS patch: 
https://github.com/cloudkeep/barbican/wiki/Blueprint:-KDS-Service

Please let us know if we can be of further help.

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From: Thierry Carrez [thie...@openstack.org]
Sent: Monday, November 25, 2013 4:17 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Keystone][Oslo] Future of Key Distribution 
Server, Trusted Messaging

Adam Young wrote:
 Keep KDS configuration separate from the Keystone configuration: the
 fact that they both point to the same host and port is temporary.  In
 fact, we should probably spin up a separate wsgi service/port inside
 Keystone for just the KDS.  This is not hard to do, and will support
 splitting it off into its own service.

 KDS should not show up in the Service catalog.  It is not an end user
 visible service and should not look like one to the rest of the world.

 Once we have it up and running, we can move it to its own service or
 hand off to Barbican when appropriate.

Right, I think a decent trade-off between availability and avoiding code
duplication would be to have a minimal KDS available as an option in
Keystone, with Barbican/something-else being developed in parallel as
the complex/featureful/configurable option. If Barbican/something-else
reaches feature parity, covers the basic and simple use case and is
integrated, we could deprecate the in-Keystone minimal-KDS option.

I know this plan looks a bit like the nova-network chronicles, but I
think the domain is more simple so feature parity is not as much of a
challenge.

--
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] JavaScript use policy

2013-11-25 Thread Radomir Dopieralski
Hello everyone,

there have been some talks about this behind the scenes for a while, but I
think that we need to have this discussion here at last and make a
decision. We need a clear, concrete and enforceable policy about the use
of client-side JavaScript in Horizon. The current guideline of "it would be
nice if it worked without JavaScript" doesn't really cut it, and due to
it not being enforced, it's really uneven across the project.

This is going to be a difficult discussion, and it's likely to get very
emotional. There are different groups of users and developers with
different tasks and needs, and it may be hard to get everyone to agree
on a single option. But I believe it will be beneficial for the project
in the long run to have a clear policy on this.

As I see it, we have basically three options about it. The two extreme
approaches are the simplest: either require everything to work without
JavaScript, or simply refuse to work with JavaScript disabled. The third
option is in reality more like another three hundred options, because it
would basically specify what absolutely has to work without JavaScript,
and what is allowed to break. Personally I have no opinion about what
would be best, but I do have a number of questions that I would like you
to consider:

1. Are the users of Horizon likely to be in a situation where they need
to use a JavaScript-less browser and it's not more convenient to use the
command-line tools?

2. Are there users of Horizon who, for whatever reason (security,
disability, weak hardware), prefer to use a browser with JavaScript
disabled?

3. Designing for the web constrains the designs in certain ways. Are
those constraints hurting us so much? Do you know any examples in Horizon
right now that could be redesigned if we dropped the non-JavaScript
requirements?

4. Some features are not as important as others. Some of them are nice
to have, but not really necessary. Can you think about any parts of the
Horizon user interface that are not really necessary without JavaScript?
Do they have something in common?

5. Some features are absolutely necessary, even if we had to write a
separate fallback view or render some things on the server side to
support them without JavaScript. Can you think of any in Horizon right now?

6. How much more work it is to test if your code works without
JavaScript? Is it easier or harder to debug when something goes wrong?
Is it easier or harder to expand or modify existing code?

7. How would you test if the code conforms to the policy? Do you think
it could be at least partially automated? How could we enforce the
policy better?

8. How much more experience and knowledge is needed to create web pages
that have proper graceful degradation? Is that a common skill? Is that
skill worth having?

9. How much more work are we ready to put into supporting
JavaScript-less browsers? Is that effort that could be spent on
something else, or would it be wasted anyway?

10. How much do we need real-time updates on the web pages? What do we
replace them with when no JS is available -- static and outdated data,
or do we not display that information at all?

11. If we revisit this policy next year, and decide that JavaScript can
be a requirement after all, how much work would have been wasted?

12. Are we likely to have completely different designs if we require
JavaScript? Is that a good or a bad thing?

13. Can we use any additional libraries, tools or techniques if we
decide to require JavaScript? Are they really useful, or are they just
toys?

14. How do we decide which features absolutely need to work without
JavaScript, and which can break?

15. Should we display a warning to the user when JavaScript is disabled?
Maybe be should do something else? If so, what?

16. If JavaScript is optional, would you create a single view and
enhance it with JavaScript, or would you rather create a separate
fallback view? What are the pros and cons of both approaches? How would
you test them?

17. If we make JavaScript easier to use in Horizon, will it lead to it
being used in unnecessary places and causing bloat? If so, what can be
done to prevent that?

18. Will accessibility be worse? Can it be improved by using good
practices? What would be needed for that to happen?

You don't have to answer those questions -- they are just examples of
the problems that we have to consider. Please think about it. There is
no single best answer, but I'm sure we can create a policy that will
make Horizon better in the long run.

Thank you,
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding notifications to Horizon

2013-11-25 Thread Sandy Walsh
+1 on the inline method. It makes it clear when a notification should be
emitted and, as you say, handles the exception handling better.

Also, if it makes sense for Horizon, consider bracketing long-running
operations in .start/.end pairs. This will help with performance tuning
and early error detection.
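
To make the .start/.end idea concrete, here is a rough sketch of the
bracketing pattern (the notifier interface and event names are made-up
illustrations, not an existing Horizon or oslo API):

    # Rough sketch: bracket a long-running operation with .start/.end (and
    # .error) notifications. The notifier interface and event names are
    # illustrative assumptions, not an existing Horizon/oslo API.
    def notify_bracketed(notifier, context, event, payload, work):
        notifier.info(context, event + ".start", payload)
        try:
            result = work()          # the actual long-running operation
        except Exception:
            notifier.error(context, event + ".error", payload)
            raise
        notifier.info(context, event + ".end", payload)
        return result

The .error emission in the except path is what gives you the early error
detection mentioned above.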

More info on well behaved notifications in here:
http://www.sandywalsh.com/2013/09/notification-usage-in-openstack-report.html

Great to see!

-S


On 11/25/2013 11:58 AM, Florent Flament wrote:
 Hi,
 
 I am interested in adding AMQP notifications to the Horizon dashboard,
 as described in the following blueprint:
 https://blueprints.launchpad.net/horizon/+spec/horizon-notifications
 
 There are currently several implementations in Openstack. While
 Nova and Cinder define `notify_about_*` methods that are called
 whenever a notification has to be sent, Keystone uses decorators,
 which send appropriate notifications when decorated methods are
 called.
 
 I fed the blueprint's whiteboard with an implementation proposal,
 based on Nova and Cinder implementation. I would be interested in
 having your opinion about which method would fit best, and whether
 these notifications make sense at all.
 
 Cheers,
 Florent Flament
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Meeting logs from the first IRC meeting

2013-11-25 Thread Collins, Sean (Contractor)
Great! Let's try and schedule a time for an IRC meeting that works for
everyone so we can discuss the patch. I'm happy to do the meeting at an
Asia-friendly time, since it's hard to make the scheduled time of
2100 UTC, and we'll just post the logs.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-11-25 Thread Everett Toews
On Nov 21, 2013, at 10:20 AM, Jesse Noller wrote:

 I’ve spoken to Everett and others about discussions had at the summit around 
 ideas like developer.openstack.org - and I think the idea is a good start 
 towards improving the lives of downstream application developers.

Blueprint started by Tom Fifield and assigned to me.

https://blueprints.launchpad.net/openstack-manuals/+spec/developer-openstack-org

Everett



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unwedging the gate

2013-11-25 Thread Clint Byrum
Excerpts from Robert Collins's message of 2013-11-25 01:30:11 -0800:
 On 25 November 2013 22:23, Clint Byrum cl...@fewbar.com wrote:
 
  I do wonder if we would be able to commit enough resources to just run
  two copies of the gate in parallel each time and require both to pass.
  Doubling the odds* that we will catch an intermittent failure seems like
  something that might be worth doubling the compute resources used by
  the gate.
 
  *I suck at math. Probably isn't doubling the odds. Sounds
  good though. ;)
 
 We already run the code paths that were breaking 8 or more times.
 Hundreds of times in fact for some :(.
 
 The odds of a broken path triggering after it gets through, assuming
 each time we exercise it is equally likely to show it, are roughly
 3/times-exercised-in-landing. E.g. if we run a code path 300 times and
 it doesn't show up, then it's quite possible that it has a 1%
 incidence rate.

We don't run through the same circumstances 300 times. We may pass
through individual code paths that have a race condition 300 times, but
the circumstances are probably only right for failure in 1 or 2 of them.

A 1% rate overall, then, matters less than how often it fails when
the conditions for failure are optimal. If we can increase the occurrences
of the most likely failure conditions, then we do have a better chance
of catching the failure.
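
For reference, the 3/times-exercised figure quoted above is the classic
"rule of three": if a path is exercised n times independently with no
failure, 3/n is roughly the 95% upper bound on its failure rate. A quick
sanity check, assuming independent trials (exactly the assumption being
questioned here):

    # Sanity check of the "rule of three" quoted above, assuming the trials
    # are independent (the assumption this reply is questioning).
    n, p = 300, 0.01
    print((1 - p) ** n)   # ~0.049: a 1%-incidence bug passes all 300 runs ~5% of the time
    print(3.0 / n)        # rule-of-three upper bound on the incidence rate: 0.01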

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] New meeting time

2013-11-25 Thread Kurt Griffiths
Hi folks,

To make it easier on our friends joining the project from across the world, I’d 
like to propose moving our weekly meeting time back one hour to 1500 UTC:

http://www.timeanddate.com/worldclock/fixedtime.html?hour=15&min=0&sec=0

Any objections or alternate suggestions?

---
@kgriffs
Kurt Griffiths
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] New meeting time

2013-11-25 Thread Amit Gandhi
Works for me.

Amit.


From: Kurt Griffiths kurt.griffi...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, November 25, 2013 at 11:59 AM
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: [openstack-dev] [marconi] New meeting time

Hi folks,

To make it easier on our friends joining the project from across the world, I’d 
like to propose moving our weekly meeting time back one hour to 1500 UTC:

http://www.timeanddate.com/worldclock/fixedtime.html?hour=15&min=0&sec=0

Any objections or alternate suggestions?

---
@kgriffs
Kurt Griffiths
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] remote debugging

2013-11-25 Thread yatin kumbhare
Hi,

http://debugopenstack.blogspot.in/

I have done OpenStack remote debugging with Eclipse and PyDev.

The only change is to exclude the Python thread library from monkey
patching at service start-up.
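
A minimal sketch of that start-up change (the remote_debug flag is a
made-up option name for illustration, not an existing nova config value):

    # Minimal sketch: keep the real thread module un-patched only when a
    # remote debugger (pydev/pycharm) is wanted. The remote_debug flag is a
    # hypothetical option name, not an existing nova config value.
    import eventlet

    def patch_for_service(remote_debug=False):
        eventlet.monkey_patch(os=False, select=True, socket=True,
                              thread=not remote_debug,
                              time=True, psycopg=True)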

Regards,
Yatin


On Mon, Nov 25, 2013 at 9:10 PM, Russell Bryant rbry...@redhat.com wrote:

 On 11/25/2013 10:28 AM, Tracy Jones wrote:
  Hi Folks - i am trying to add a patch to enable remote debugging in the
  nova services.  I can make this work very simply, but it requires a
  change to monkey_patching - i.e.
 
  eventlet.monkey_patch(os=False, select=True, socket=True,
 thread=False,
time=True, psycopg=True)
 
  I’m working with some folks from the debugger vendor (pycharm) on why
  this is needed.   However - i’ve been using it with nova-compute for a
  month or so and do not see any issues which changing the
  monkey-patching.  Since this is done only when someone wants to use the
  debugger - is making this change so bad?
 
   https://review.openstack.org/56287

 Last I looked at the review, it wasn't known that thread=False was
 specifically what was needed IIRC  That's good progress and is a bit
 less surprising than the change before.

 I suppose if the options that enable this come with a giant warning,
 it'd be OK.  Something like:

   WARNING: Using this option changes how Nova uses the eventlet
   library to support async IO. This could result in failures that do
   not occur under normal operation. Use at your own risk.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Community meeting minutes - 11/25/2013

2013-11-25 Thread Renat Akhmerov
Hi,

Thanks to everyone who joined the meeting today, here’s the minutes and the 
full log:

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-11-25-16.00.html
Log: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-11-25-16.00.log.html

Join us next week!

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] remote debugging

2013-11-25 Thread Tracy Jones
Thanks Yatin - that is the change I am proposing in my patch


On Nov 25, 2013, at 9:09 AM, yatin kumbhare yatinkumbh...@gmail.com wrote:

 Hi,
 
 http://debugopenstack.blogspot.in/
 
 I have done Openstack remote debugging with eclipse and pydev.
 
 only change is to exclude python thread library from monkey patch at service 
 start up.
 
 Regards,
 Yatin
 
 
 On Mon, Nov 25, 2013 at 9:10 PM, Russell Bryant rbry...@redhat.com wrote:
 On 11/25/2013 10:28 AM, Tracy Jones wrote:
  Hi Folks - i am trying to add a patch to enable remote debugging in the
  nova services.  I can make this work very simply, but it requires a
  change to monkey_patching - i.e.
 
  eventlet.monkey_patch(os=False, select=True, socket=True, thread=False,
time=True, psycopg=True)
 
  I’m working with some folks from the debugger vendor (pycharm) on why
  this is needed.   However - i’ve been using it with nova-compute for a
  month or so and do not see any issues which changing the
  monkey-patching.  Since this is done only when someone wants to use the
  debugger - is making this change so bad?
 
   https://review.openstack.org/56287
 
 Last I looked at the review, it wasn't known that thread=False was
 specifically what was needed IIRC  That's good progress and is a bit
 less surprising than the change before.
 
 I suppose if the options that enable this come with a giant warning,
 it'd be OK.  Something like:
 
   WARNING: Using this option changes how Nova uses the eventlet
   library to support async IO. This could result in failures that do
   not occur under normal operation. Use at your own risk.
 
 --
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] git-Integration working group

2013-11-25 Thread Clayton Coleman


- Original Message -
 
 
 On 11/22/2013 09:51 PM, Adrian Otto wrote:
  Monty,
  
  On Nov 22, 2013, at 6:24 PM, Monty Taylor mord...@inaugust.com
  wrote:
  
  On 11/22/2013 05:37 PM, Krishna Raman wrote:
  Hello all,
  
  I would like to kickoff the Git integration discussion. Goal of
  this subgroup is to go through the git-integration blueprint [1]
  and break it up into smaller blueprints that we can execute on.
  
  We have to consider 2 workflows: 1) For Milestone 1, pull based
  git workflow where user uses a public git repository (possibly on
  github) to trigger the build 2) For later milestones, a push
  based workflow where the git repository is maintained by Solum
  
  Hi!
  
  Hi, thanks for chiming in here.
  
  I'm a little disappointed that we've decided to base the initial
  workflow on something that is not related to the world-class
  git-based developer tooling that the OpenStack project has already
  produced. We have a GIANT amount of tooling in this space, and it's
  all quite scalable. There is also the intent by 3 or 4 different
  groups to make it more re-usable/re-consumable, including thoughts
  in making sure that we can drive it from and have it consume heat.
  
  The initial work will be something pretty trivial. It's just a web
  hook on a git push. The workflow in this case is not customizable,
  and has basically no features. The intent is to iterate on this to
  make it much more compelling over time, soon after the minimum
  integration, we will put a real workflow system in place. We did
  discuss Zuul and Nodepool, and nobody had any objection to learning
  more about those. This might be a bit early in our roadmap to be
  pulling them in, but if there is an easy way to use them early in our
  development, we'd like to explore that.
 
 A web hook on a git push seems trivial, but is not. Something has to run
 the git push - and as you said, you're planning that thing to be github.
 That means that, out of the gate, solum will not work without deferring
 to a non-free service. Alternately, you could install one of the other
 git servers, such as gitlab - but now you're engineering that.

I think we should have been clearer here - we didn't plan to make GitHub 
required or specific in any fashion.  In fact, during the discussions we simply 
said "URL that could be curl'd during Git post-receive hook" - working for 
GitHub/gitorious/arbitrary would be something much later.  Solum's goal was to 
be source control agnostic except for two points:

1) an abstraction that lets Solum take the contents of a single source 
repository and/or binary artifacts and pass that to a mechanism that converts 
them into a deployable image
2) an endpoint (the webhook) that lets an external caller notify Solum of the 
need to do #1 for a known repository / artifact
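
To make #2 concrete, the trigger can be as small as one notification call
from the repository's post-receive hook; a sketch (the webhook URL and
payload fields are purely illustrative, Solum has not defined this endpoint
yet):

    #!/usr/bin/env python
    # Sketch of a git post-receive hook notifying a Solum-style webhook.
    # The URL and payload fields are illustrative assumptions only; git
    # passes "<old-sha> <new-sha> <refname>" lines on stdin.
    import json, sys, urllib2

    WEBHOOK = "http://solum.example.com/v1/assemblies/demo/trigger"  # hypothetical

    for line in sys.stdin:
        old, new, ref = line.split()
        body = json.dumps({"ref": ref, "commit": new})
        req = urllib2.Request(WEBHOOK, body, {"Content-Type": "application/json"})
        urllib2.urlopen(req)

The same pull-based trigger works for a bare repo on a laptop or a gitlab
install; a hosted service like GitHub would call the same endpoint from its
own webhook mechanism, which is the source-control-agnostic point above.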

The #1 is actually not intended to imply Solum is in control of the source 
repository - instead, Solum operates only through pull mechanics, and 
operators/users still get to choose the workflow around each repository.  An 
example might be an OpenStack gate job that runs a deployment of the HEAT 
services through Solum - Zuul does a build, tells Solum to go deploy the 
results of that particular commit or those build artifacts as two isolated 
simple services, then invokes test cases against that deployed result.  Solum 
pulls and deploys based on the Zuul notification and build output, but is not 
in charge of the repository in any way.  The key is that a developer might want 
to deploy their own version of HEAT the same way that Zuul does - they invoke 
Solum and tell it to deploy based on commit X of their own particular 
repository, and Solum pulls and deploys in a very similar fashion.

I think that the ultimate goal of Solum is to abstract how a source/binary 
input is turned into an image that can be deployed and then pass that off to 
HEAT.  There's a lot that can happen under that abstraction - the idea of 
taking a base image and transforming it into an image that can be run as a 
component of a larger service plays to both HEAT and Nova's strengths, without 
unnecessarily constraining existing workflows from being used.  

The workflow at the front end (verifying and testing source) can be arbitrarily 
complex, or organizations can adopt Zuul/Gerrit (which we should definitely 
enable), or simply be a straight pass through (for developers working in test 
environments).  Likewise the workflow inside making a deployable image can be 
arbitrarily complex - simple pip install of a Python source repo, straight up 
to full fledged base images built using diskimagebuilder for OpenStack, but all 
Solum has to care about is setting up the environment and triggering the 
transition, then snapshotting afterwards.

I definitely worry we lack a lot of concrete examples in the wiki that walk 
through how these flows might exist in different organizations / applications / 
environments, and the value that the Solum 

Re: [openstack-dev] [horizon] JavaScript use policy

2013-11-25 Thread Lyle, David
We have been having this discussion here on the mailing list [1][2] and also in 
the Horizon team meetings [3].

The overall consensus has been that we are going to move forward with a 
JavaScript requirement.

Most of the pushback to support non-JavaScript is based on web-accessibility 
standards last updated in ~2007.  This pseudo-requirement stemmed from 
incomplete/varied support of JavaScript in browsers and the lack of support for 
technologies like screen readers for JavaScript. These generally do not hold 
true any longer.  Additionally, no one has responded to any of the mailing list 
items to advocate maintaining the non-JavaScript use cases.

In an effort to create a better user experience for users, we have chosen to 
take an approach to require JavaScript.  This means that key areas of 
functionality can now require JavaScript. Horizon is not just a reference 
implementation, it is a  full working UI for OpenStack.  We want the best user 
experience we can provide. So, we will incorporate JavaScript while maintaining 
accessibility for tools like screen readers.

OpenStack is a community using python for most code, barring some exceptions.  
We are not moving away from that.  This is not a wholesale replacement of the 
server backend to allow an almost entirely client side application.  This is a 
move to add JavaScript in targeted interaction points to improve user 
experience.  Today, JavaScript is already sprinkled throughout the code.  I am 
not entirely sure what use cases still support a non-JavaScript fallback.  So, 
my feeling is that this past soft-requirement is already broken.  And 
supporting a separate code path for non-JavaScript fallbacks without rigorous 
testing is prone to fall apart.  Today we do not have automated testing for the 
non-JavaScript tasks. 

New project teams often contribute the first implementation of the UI for their 
project to Horizon.  Our goal is to still support this path without requiring 
JavaScript expertise, or to write any JavaScript at all.  If later, a user 
experience can be improved with some targeted JavaScript, that work will be 
done by someone that has those skills, and reviewed by individuals that 
understand JavaScript and the associated libraries.

So the discussion before us now is: what tools do we use to best 
write/maintain/test the JavaScript we are adding.  There is pushback against 
node.js by the distros for reasons also documented on this mailing list.  There 
seem to be alternatives that we are tracking down.  But writing consistent, 
maintainable and testable JavaScript is the priority.

Additionally, for anyone still requiring a non-JavaScript enabled path to 
manage an OpenStack implementation, the CLIs work wonderfully for that.

-David Lyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-November/018629.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2013-November/019894.html
[3] http://eavesdrop.openstack.org/meetings/horizon/

 -Original Message-
 From: Radomir Dopieralski [mailto:openst...@sheep.art.pl]
 Sent: Monday, November 25, 2013 9:32 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [horizon] JavaScript use policy
 
 Hello everyone,
 
 there has been some talks about this behind the stage for a while, but I
 think that we need to have this discussion here at last and make a
 deicsion. We need a clear, concrete and enforcable policy about the use
 of client-side JavaScript in Horizon. The current guideline it would be
 nice if it worked without JavaScript doesn't really cut it, and due to
 it not being enforced, it's really uneven across the project.
 
 This is going to be a difficult discussion, and it's likely to get very
 emotional. There are different groups of users and developers with
 different tasks and needs, and it may be hard to get everyone to agree
 on a single option. But I believe it will be beneficial for the project
 in the long run to have a clear policy on this.
 
 As I see it, we have basically three options about it. The two extreme
 approaches are the simplest: either require everything to work without
 JavaScript, or simply refuse to work with JavaScript disabled. The third
 option is in reality more like another three hundred options, because it
 would basically specify what absolutely has to work without JavaScript,
 and what is allowed to break. Personally I have no opinion about what
 would be best, but I do have a number of questions that I would like you
 to consider:
 
 1. Are the users of Horizon likely to be in a situation, where they need
 to use JavaScript-less browser and it's not more convenient to use the
 command-line tools?
 
 2. Are there users of Horizon who, for whatever reason (security,
 disability, weak hardware), prefer to use a browser with JavaScript
 disabled?
 
 3. Designing for the web constrains the designs in certain ways. Are
 those constrains hurting us so much? Do you know any examples in Horizon
 right now 

Re: [openstack-dev] [Nova] Remove ratelimits from nova v3 api

2013-11-25 Thread Joshua Harlow
+2

Turnstile seems to be a good middle ground (other companies I know have 
solutions for this problem via other software). Does anyone have operational 
knowledge they can share about how Turnstile has worked out for them?

Sent from my really tiny device...

 On Nov 25, 2013, at 9:22 AM, Kevin L. Mitchell 
 kevin.mitch...@rackspace.com wrote:
 
 On Mon, 2013-11-25 at 07:38 -0500, David Hill wrote:
   Is removing the ratelimiting code part of the roadmap?  I fear that
 if it's removed now, it may have to be re-implemented in the future as
 the cloud deployments become more mainstream ... Without ratelimit,
 some evil users could DOS Openstack !
 
 I'm going to suggest that something like Turnstile[1] with
 nova_limits[2] is the way to go here for ratelimiting.
 
 [1] https://pypi.python.org/pypi/turnstile
 [2] https://pypi.python.org/pypi/nova_limits
 -- 
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 Rackspace
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] New meeting time

2013-11-25 Thread Flavio Percoco

On 25/11/13 17:05 +, Amit Gandhi wrote:

Works for me.


Works for me!

--
@flaper87
Flavio Percoco


pgpkFlvVx8sKs.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unwedging the gate

2013-11-25 Thread Joshua Harlow
+2

Sent from my really tiny device...

 On Nov 25, 2013, at 5:02 AM, Davanum Srinivas dava...@gmail.com wrote:
 
 Many thanks to everyone who helped with the many fixes. Kudos to
 Joe/Clark for spear heading the effort!
 
 -- dims
 
 On Mon, Nov 25, 2013 at 12:00 AM, Joe Gordon joe.gord...@gmail.com wrote:
 Hi All,
 
 TL;DR Last week the gate got wedged on nondeterministic failures. Unwedging
 the gate required drastic actions to fix bugs.
 
 Starting on November 15th, gate jobs have been getting progressively less
 stable with not enough attention given to fixing the issues, until we got to
 the point where the gate was almost fully wedged.  No one bug caused this,
 it was a collection of bugs that got us here. The gate protects us from code
 that fails 100% of the time, but if a patch fails 10% of the time it can
 slip through.  Add a few of these bugs together and we get the gate to a
 point where the gate is fully wedged and fixing it without circumventing the
 gate (something we never want to do) is very hard.  It took just 2 new
 nondeterministic bugs to take us from a gate that mostly worked, to a gate
 that was almost fully wedged.  Last week we found out Jeremy Stanley (fungi)
 was right when he said, nondeterministic failures breed more
 nondeterministic failures, because people are so used to having to reverify
 their patches to get them to merge that they are doing so even when it's
 their patch which is introducing a nondeterministic bug.
 
 Side note: This is not the first time we wedge the gate, the first time was
 around September 26th, right when we were cutting Havana release candidates.
 In response we wrote elastic-recheck
 (http://status.openstack.org/elastic-recheck/) to better track what bugs we
 were seeing.
 
 Gate stability according to Graphite: http://paste.openstack.org/show/53765/
 (they are huge because they encode entire queries, so including as a
 pastebin).
 
 After sending out an email to ask for help fixing the top known gate bugs
 (http://lists.openstack.org/pipermail/openstack-dev/2013-November/019826.html),
 we had a few possible fixes. But with the gate wedged, the merge queue was
 145 patches long and could take days to be processed. In the worst case,
 with none of the patches merging, it would take about 1 hour per patch. So on
 November 20th we asked for a freeze on any non-critical bug fixes (
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/019941.html
 ), and kicked everything out of the merge queue and put our possible bug
 fixes at the front. Even with these drastic measures it still took 26 hours
 to finally unwedge the gate. In 26 hours we got the check queue failure rate
 (always higher then the gate failure rate) down from around 87% failure to
 below 10% failure. And we still have many more bugs to track down and fix in
 order to improve gate stability.
 
 
 8 Major bug fixes later, we have the gate back to a reasonable failure rate.
 But how did things get so bad? I'm glad you asked, here is a blow by blow
 account.
 
 The gate has not been completely stable for a very long time, and it only
 took two new bugs to wedge the gate. Starting with the list of bugs we
 identified via elastic-recheck, we fixed 4 bugs that have been in the gate
 for a few weeks already.
 
 
 https://bugs.launchpad.net/bugs/1224001 test_network_basic_ops fails
 waiting for network to become available
 
 https://review.openstack.org/57290 was the fix which depended on
 https://review.openstack.org/53188 and https://review.openstack.org/57475.
 
 This fixed a race condition where the IP address from DHCP was not received
 by the VM at the right time. Minimize polling on the agent is now defaulted
 to True, which should reduce the time needed for configuring an interface on
 br-int consistently.
 
 https://bugs.launchpad.net/bugs/1252514 Swift returning errors when setup
 using devstack
 
 Fix https://review.openstack.org/#/c/57373/
 
 There were a few swift related problems that were sorted out as well. Most
 had to do with tuning swift properly for its use as a glance backend in the
 gate, ensuring that timeout values were appropriate for the devstack test
 slaves (in
 
 resource constrained environments, the swift default timeouts could be
 tripped frequently (logs showed the request would have finished successfully
 given enough time)). Swift also had a race-condition in how it constructed
 its sqlite3
 
 files for containers and accounts, where it was not retrying operations when
 the database was locked.
 
 https://bugs.launchpad.net/swift/+bug/1243973 Simultaneous PUT requests for
 the same account...
 
 Fix https://review.openstack.org/#/c/57019/
 
 This was not on our original list of bugs, but while in bug fix mode, we got
 this one fixed as well
 
 https://bugs.launchpad.net/bugs/1251784 nova+neutron scheduling error:
 Connection to neutron failed: Maximum attempts reached
 
 Fix https://review.openstack.org/#/c/57509/
 
 Uncovered on mailing list
 

Re: [openstack-dev] [Trove] configuration groups using overrides file

2013-11-25 Thread Craig Vyvial
Denis,

I'm proposing that #3 and #4 sorta be swapped from your list: we merge
the config group parameters into the main config and send down the file
like we do now. So the guest does not need to handle the merging. The logic
is the same; only the location of the merging moves from the guest to the
service.

Most of the merging logic could be part of the jinja template we already
have.

This will allow the guest to be agnostic of the type of config file and
less logic in the guest.
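
A rough sketch of what that service-side merge could look like with jinja
(the template text and variable names are illustrative, not Trove's actual
templates):

    # Rough sketch: render the configuration-group overrides into the main
    # config on the service side, so the guest receives one finished file.
    # Template text and variable names are illustrative only.
    from jinja2 import Template

    MY_CNF = Template("""\
    [mysqld]
    datadir = /var/lib/mysql
    {% for key, value in overrides.items() %}
    {{ key }} = {{ value }}
    {% endfor %}
    """)

    rendered = MY_CNF.render(overrides={"max_connections": 500,
                                        "innodb_buffer_pool_size": "600M"})
    # `rendered` is the single file sent down to the guest; no merging there.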


-Craig




On Wed, Nov 13, 2013 at 1:19 PM, Denis Makogon dmako...@mirantis.com wrote:

 I would like to see this functionality in the next way:
 1. Creating parameters group.
 2. Validate and Save.
 3. Send to an instance those parameters in dict representation.
 4. Merge into main config.

  PS: #4 is database specific, so it should be handled by the manager.


 2013/11/13 Craig Vyvial cp16...@gmail.com

 We need to determine if we should not use a separate file for the
 overrides config as it might not be supported by all dbs trove supports.
 (works well for mysql but might not for cassandra as we discussed in the
 irc channel)

 To support this for all dbs we could setup the jinja templates to add the
 configs to the end of the main config file (my.cnf for mysql example).


 -Craig Vyvial

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Working group on language packs

2013-11-25 Thread Georgy Okrokvertskhov
Hi,

Just a clarification on Windows images: I think Windows image creation
is closer to the Docker approach. In order to create a special Windows image
we use a KVM/QEMU VM with an initial base image, then install all necessary
components, configure them, and then run the sysprep tool to remove all
machine-specific information like passwords and SIDs, and then create a
snapshot.

I got the impression that Docker does the same: it installs the application
on a running VM and then creates a snapshot.

It looks like this can be done using Heat + HOT software
orchestration/deployment tools without any additional services. This
solution scales very well, as all configuration steps are executed inside a
VM.

Thanks
Georgy


On Sat, Nov 23, 2013 at 4:30 PM, Clayton Coleman ccole...@redhat.com wrote:



  On Nov 23, 2013, at 6:48 PM, Robert Collins robe...@robertcollins.net
 wrote:
 
  Ok, so no - diskimage-builder builds regular OpenStack full disk disk
 images.
 
  Translating that to a filesystem is easy; doing a diff against another
  filesystem version is also doable, and if the container service for
  Nova understands such partial container contents you could certainly
  glue it all in together, but we don't have any specific glue for that
  today.
 
  I think docker is great, and if the goal of solum is to deploy via
  docker, I'd suggest using docker - no need to make diskimage-builder
  into a docker clone.
 
  OTOH if you're deploying via heat, I think Diskimage-builder is
  targeted directly at your needs : we wrote it for deploying OpenStack
  after all.

 I think we're targeting all possible deployment paths, rather than just
 one.  Docker simply represents one emerging direction for deployments due
 to its speed and efficiency (which vms can't match).

 The base concept (images and image like constructs that can be started by
 nova) provides a clean abstraction - how those images are created is
 specific to the ecosystem or organization.  An organization that is heavily
 invested in a particular image creation technology already can still take
 advantage of Solum, because all that is necessary for Solum to know about
 is a thin shim around transforming that base image into a deployable image.
  The developer and administrative support roles can split responsibilities
 - one that maintains a baseline, and one that consumes that baseline.

 
  -Rob
 
 
  On 24 November 2013 12:24, Adrian Otto adrian.o...@rackspace.com
 wrote:
 
  On Nov 23, 2013, at 2:39 PM, Robert Collins robe...@robertcollins.net
  wrote:
 
  On 24 November 2013 05:42, Clayton Coleman ccole...@redhat.com
 wrote:
 
  Containers will work fine in diskimage-builder. One only needs to
 hack
  in the ability to save in the container image format rather than
 qcow2.
 
  That's good to know.  Will diskimage-builder be able to break those
 down into multiple layers?
 
  What do you mean?
 
  Docker images can be layered. You can have a base image on the bottom,
 and then an arbitrary number of deltas on top of that. It essentially works
 like incremental backups do. You can think of it as each layer has a
 parent image, and if they all collapse together, you get the current state.
 Keeping track of past layers gives you the potential for rolling back to a
 particular restore point, or only distributing incremental changes when you
 know that the previous layer is already on the host.
 
 
  -Rob
 
 
  --
  Robert Collins rbtcoll...@hp.com
  Distinguished Technologist
  HP Converged Cloud
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  --
  Robert Collins rbtcoll...@hp.com
  Distinguished Technologist
  HP Converged Cloud
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unwedging the gate

2013-11-25 Thread Clint Byrum
Excerpts from Monty Taylor's message of 2013-11-25 06:52:02 -0800:
 
 On 11/25/2013 04:23 AM, Clint Byrum wrote:
  Excerpts from Joe Gordon's message of 2013-11-24 21:00:58 -0800:
  Hi All,
 
  TL;DR Last week the gate got wedged on nondeterministic failures. Unwedging
  the gate required drastic actions to fix bugs.
 
 
  snip
  
  (great write-up, thank you for the details, and thank you for fixing
  it!)
  
 
  Now that we have the gate back into working order, we are working on the
  next steps to prevent this from happening again.  The two most immediate
  changes are:
 
 - Doing a better job of triaging gate bugs  (
 
  http://lists.openstack.org/pipermail/openstack-dev/2013-November/020048.html
  ).
 
 
 - In the next few days we will remove  'reverify no bug' (although you
 will still be able to run 'reverify bug x'.
 
  
  I am curious, why not also disable 'recheck no bug'?
 
 recheck no bug still has a host of valid use cases. Often times I use it
 when I upload a patch, it fails because of a thing somewhere else, we
 fix that, and I need to recheck the patch because it should work now.
 
 It's also not nearly as dangerous as reverify no bug.
 

...somewhere else, we fix that... -- Would it be useful to track that
in a bug? Would that help elastic-recheck work better if all the problems
caused by a bug elsewhere were reported as bugs?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] configuration groups using overrides file

2013-11-25 Thread Denis Makogon
I disagree. The guest should be able to do (for now) two things:
1. Install the db and do post-install configuration (dpkg reconfigure, etc.)
2. Apply the resulting configuration in dictionary format.

As I mentioned earlier, there could be a problem with updating the full
config file. The reason is simple: post-install configuration can be really
custom to the database (the taskmanager doesn't know anything about that).
If the taskmanager did so, it would be a huge bug, because no one
guarantees that the database would work with the default configuration
(out-of-box).
To be precise, no one guarantees that the database will be in an active
state after a simple yum install or apt-get install. A simple example of
post-install configuration: the database would listen for requests on
localhost.

The whole configuration-parameters workflow should take into account the
config already placed on the VM, not the one in the server-side template.

As a starting point I suggest the following flow:
1. Create a parameters group.
2. Validate and save.
3. Send those parameters to the instance in dict representation.
4. Let the manager decide how they should be processed (overrides.cnf,
merging, etc.)

Any other flow would break the consistency of the post-install configuration.

Best regards, Denis M.


2013/11/25 Craig Vyvial cp16...@gmail.com

 Denis,

 I'm proposing that #3 and #4 sorta be swapped from your list where we
 merge the config group parameters into the main config and send down the
 file like we do now. So the guest does not need to handle the merging. The
 logic is the same just the location of the logic to merge be handled by the
 service rather than at the guest.

 Most of the merging logic could be part of the jinja template we already
 have.

 This will allow the guest to be agnostic of the type of config file and
 less logic in the guest.


 -Craig




 On Wed, Nov 13, 2013 at 1:19 PM, Denis Makogon dmako...@mirantis.com wrote:

 I would like to see this functionality in the next way:
 1. Creating parameters group.
 2. Validate and Save.
 3. Send to an instance those parameters in dict representation.
 4. Merge into main config.

  PS: #4 is database specific, so it should be handled by the manager.


 2013/11/13 Craig Vyvial cp16...@gmail.com

 We need to determine if we should not use a separate file for the
 overrides config as it might not be supported by all dbs trove supports.
 (works well for mysql but might not for cassandra as we discussed in the
 irc channel)

 To support this for all dbs we could setup the jinja templates to add
 the configs to the end of the main config file (my.cnf for mysql example).


 -Craig Vyvial

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Pete Zaitcev added to core

2013-11-25 Thread John Dickinson
Pete Zaitcev has been involved with Swift for a long time, both by contributing 
patches and reviewing patches. I'm happy to announce that he's accepted the 
responsibility of being a core reviewer for Swift.

Congrats, Pete.

--John




signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reg : Security groups implementation using openflows in quantum ovs plugin

2013-11-25 Thread Mike Wilson
Adding Jun to this thread since gmail is failing him.


On Tue, Nov 19, 2013 at 10:44 AM, Amir Sadoughi amir.sadou...@rackspace.com
 wrote:

  Yes, my work has been on ML2 with neutron-openvswitch-agent.  I’m
 interested to see what Jun Park has. I might have something ready before he
 is available again, but would like to collaborate regardless.

  Amir



  On Nov 19, 2013, at 3:31 AM, Kanthi P pavuluri.kan...@gmail.com wrote:

  Hi All,

  Thanks for the response!
 Amir, Mike: Is your implementation being done according to the ML2 plugin?

  Regards,
 Kanthi


 On Tue, Nov 19, 2013 at 1:43 AM, Mike Wilson geekinu...@gmail.com wrote:

 Hi Kanthi,

  Just to reiterate what Kyle said, we do have an internal implementation
 using flows that looks very similar to security groups. Jun Park was the
 guy that wrote this and is looking to get it upstreamed. I think he'll be
 back in the office late next week. I'll point him to this thread when he's
 back.

  -Mike


 On Mon, Nov 18, 2013 at 3:39 PM, Kyle Mestery (kmestery) 
 kmest...@cisco.com wrote:

 On Nov 18, 2013, at 4:26 PM, Kanthi P pavuluri.kan...@gmail.com wrote:
   Hi All,
 
  We are planning to implement quantum security groups using openflows
 for ovs plugin instead of iptables which is the case now.
 
  Doing so we can avoid the extra linux bridge which is connected
 between the vnet device and the ovs bridge, which is given as a work around
 since ovs bridge is not compatible with iptables.
 
  We are planning to create a blueprint and work on it. Could you please
 share your views on this
 
  Hi Kanthi:

 Overall, this idea is interesting and removing those extra bridges would
 certainly be nice. Some people at Bluehost gave a talk at the Summit [1] in
 which they explained they have done something similar, you may want to
 reach out to them since they have code for this internally already.

 The OVS plugin is in feature freeze during Icehouse, and will be
 deprecated in favor of ML2 [2] at the end of Icehouse. I would advise you
 to retarget your work at ML2 when running with the OVS agent instead. The
 Neutron team will not accept new features into the OVS plugin anymore.

 Thanks,
 Kyle

 [1]
 http://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/towards-truly-open-and-commoditized-software-defined-networks-in-openstack
 [2] https://wiki.openstack.org/wiki/Neutron/ML2

  Thanks,
  Kanthi
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unwedging the gate

2013-11-25 Thread Jeremy Stanley
On 2013-11-25 09:55:03 -0800 (-0800), Clint Byrum wrote:
 Excerpts from Monty Taylor's message of 2013-11-25 06:52:02 -0800:
[...]
  recheck no bug still has a host of valid use cases. Often times
  I use it when I upload a patch, it fails because of a thing
  somewhere else, we fix that, and I need to recheck the patch
  because it should work now.
  
  It's also not nearly as dangerous as reverify no bug.
  
 
 ...somewhere else, we fix that... -- Would it be useful to track
 that in a bug? Would that help elastic-recheck work better if all
 the problems caused by a bug elsewhere were reported as bugs?

Most of these are just working around the current inability to
mechanically express change interdependencies between projects in
our tooling. You push related changes to multiple projects right now
and there may be only one sequence in which they can be successfully
applied but the result is that all changes besides the next one in
series are going to have negative check results until their turns
come. There's nothing stopping us from using an informational bug
to key them on, but creating one every time it's needed might also
come across as makework (or maybe we recheck them all on bug
1021879, but then I worry that would skew our statistics gathering).
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][Neutron] Handling of ovs command errors

2013-11-25 Thread Salvatore Orlando
Thanks Kyle,

More comments inline.

Salvatore


On 25 November 2013 16:03, Kyle Mestery (kmestery) kmest...@cisco.com wrote:

 On Nov 25, 2013, at 8:28 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
 
  Hi,
 
  I've been recently debugging some issues I've had with the OVS agent,
 and I found out that in many  cases (possibly every case) the code just
 logs errors from ovs-vsctl and ovs-ofctl without taking any action in the
 control flow.
 
  For instance, the routine which should do the wiring for a port,
 port_bound [1], does not react in any way if it fails to configure the
 local vlan, which I guess means the port would not be able to send/receive
 any data.
 
  I'm pretty sure there's a good reason for this which I'm missing at the
 moment. I am asking because I see a pretty large number of ALARM_CLOCK
 errors returned by OVS commands in gate logs (see bug [2]), and I'm not
 sure whether it's ok to handle them as the OVS agent is doing nowadays.
 
 Thanks for bringing this up Salvatore. It looks like the underlying
 run_vsctl [1] provides an ability to raise exceptions on errors, but this
 is not used by most of the callers of run_vsctl. Do you think we should be
 returning the exceptions back up the stack to callers to handle? I think
 that may be a good first step.


I think it makes sense to start handling errors; since they often happen in
the agent's rpc loop, simply raising will probably just cause the agent to
crash.
I looked again at the code and it really does seem to be silently ignoring
errors from ovs commands.
This actually makes sense in some cases. For instance the l3 agent might
remove a qr-xxx or qg-xxx port while the l2 agent is in the middle of its
iteration.

There are however cases in which the exception must be handled.
In cases like the ALARM_CLOCK error, either a retry mechanism or marking
the port for re-syncing at the next iteration might make sense.
Other error cases might be unrecoverable; for instance when a port
disappears. In that case it seems reasonable to put the relevant neutron
port in ERROR state, so that the user is aware that the port is not usable anymore.
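
As a strawman, the agent-side handling could look roughly like this (a
self-contained sketch using plain subprocess rather than the real ovs_lib
helpers; retry-then-raise is just one possible policy):

    # Strawman sketch: retry the transient ALARM_CLOCK timeout, otherwise
    # raise so the caller can mark the port for resync or set it to ERROR.
    # Uses plain subprocess instead of neutron's ovs_lib/utils helpers.
    import subprocess

    def run_vsctl_checked(args, retries=3):
        cmd = ["ovs-vsctl", "--timeout=10"] + args
        for attempt in range(retries):
            try:
                return subprocess.check_output(cmd, stderr=subprocess.STDOUT)
            except subprocess.CalledProcessError as exc:
                transient = "ALARM_CLOCK" in str(exc.output)
                if transient and attempt < retries - 1:
                    continue              # transient timeout: try again
                raise                     # unrecoverable: let the caller react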


 Thanks,
 Kyle

 [1]
 https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ovs_lib.py#L52

  Regards,
  Salvatore
 
  [1]
 https://github.com/openstack/neutron/blob/master/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py#L599
  [2] https://bugs.launchpad.net/neutron/+bug/1254520
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove][Savanna][Murano][Heat] Unified Agent proposal discussion at Summit

2013-11-25 Thread Mike Spreitzer
I am sorry I missed that session, but am interested in the topic.  This is 
very relevant to Heat, where we are working on software configuration in 
general.  I desire that Heat's ability to configure software will meet the 
needs of Trove, Savanna, and Murano.

At IBM we worked through several Hadoop examples, with something similar to (but 
distinct from) Heat for software configuration and also something doing 
holistic infrastructure scheduling (so that, e.g., we could get locally 
attached storage).  The software was described using an additional concept 
for software components, and we expressed the automation as chef roles. 
For coordination between VMs we used Ruby metaprogramming to intercept 
access to certain members of the node[][] arrays, replacing plain array 
access with distributed reads and writes to shared variables (which can 
not be read until after they are written, thus providing synchronization 
as well as data dependency).  We used ZooKeeper to implement those shared 
variables, but that is just one possible implementation approach; I think 
wait condition/handle/signal makes more sense as the one to use in 
OpenStack.

The current thinking in Heat is to make a generic agent based on 
os-collect-config; it could be specialized to Heat by a hook.  The agent 
would poll for stuff to do and then do it; in the chef case, stuff could 
be, e.g., a role in a cookbook.  I think this could meet the requirements 
listed on https://etherpad.openstack.org/p/UnifiedAgents
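
A very rough sketch of that generic-agent shape, with the hook left
pluggable (all names and interfaces here are made up for illustration):

    # Very rough sketch of the generic polling agent + pluggable hook idea.
    # fetch_config and apply_hook are illustrative stand-ins, e.g. an
    # os-collect-config style poll and a "run this chef role" hook.
    import time

    def agent_loop(fetch_config, apply_hook, interval=30):
        last = None
        while True:
            config = fetch_config()      # poll the metadata source
            if config != last:
                apply_hook(config)       # e.g. invoke chef with the named role
                last = config
            time.sleep(interval)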

Regards,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-25 Thread Steven Hardy
All,

So, lately we've been seeing more patches posted proposing added
functionality to Heat (the API and template syntax) related to
development of UI functionality.

This makes me both happy (because folks want to use Heat!) and sad (because
it's evident there are several proprietary UIs being developed, rather
than collaboration focussed on making Horizon Heat functionality great)

One of the most contentious ones currently is the one proposing to add
metadata to the HOT template specification, designed to contain data which
Heat does not use - my understanding is that the primary use-case for this is
some UI for managing applications via Heat:

https://review.openstack.org/#/c/56450/

I'd like to attempt to break down some of the communication barriers which
are increasingly apparent, and refocus our attention on what the actual
requirements are, and how they relate to improving Heat support in Horizon.

The list of things being proposed I can think of are (I'm sure there are more):
- Stack/template metadata
- Template repository/versioning
- Management-API functionality (for a Heat service administrator)
- Exposing build information

I think the template repository and template metadata items are closely
related - if we can figure out how we expect users to interact with
multiple versions of templates, access public template repositories, attach
metadata/notes to their template etc in Horizon, then I think much of the
requirement driving those patches can be satisfied (without necessarily
implementing that functionality or storing that data in Heat).

For those who've already posted patches and got negative feedback, here's
my plea - please, please, start communicating the requirements and
use-cases, then we can discuss the solution together.  Just posting a
solution with no prior discussion and a minimal blueprint is a really slow
way to get your required functionality into Heat, and it's frustrating for
everyone :(

So, let's have a Heat-UI-requirements amnesty: what UI-related functionality
do you want in Heat, and why (requirements, use-case/user-story)?

Hopefully if we can get these requirements out in the open, it will help
formulate a roadmap for future improvement of Heat support in Horizon.

Obviously non-Horizon UI users of Heat also directly benefit from this, but
IMO we must focus the discussion primarily on what makes sense for Horizon,
not what makes sense for $veiled_reference_to_internal_project, as the
wider community don't know anything about the latter, so will naturally
resist changes related to it.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-25 Thread Jaromir Coufal

Hey Rob,

can we add 'Slick Overcloud deployment through the UI' to the list? 
There was no session about that, but we discussed it afterwards and 
agreed that it is high priority for Icehouse as well.


I just want to keep it on the list, so we are aware of that.

Thanks
-- Jarda

On 2013/25/11 02:17, Robert Collins wrote:

I've now gone through and done the post summit cleanup of blueprints
and migration of design docs into blueprints as appropriate.

We had 50-odd blueprints, many of which were really not effective
blueprints - they described single work items with little coordination
need, were not changelog items, etc. I've marked those obsolete.
Blueprints are not a discussion forum - they are a place where [some]
discussions can be captured, but anything initially filed there will
take some time before folk notice it - and the lack of a discussion
mechanism makes it very hard to reach consensus there. Could TripleO
interested folk please raise things here, on the dev list initially,
and we'll move it to lower latency // higher bandwidth environments as
needed?

 From the summit we had the following outcomes
https://etherpad.openstack.org/p/icehouse-deployment-hardware-autodiscovery
- needs to be done in ironic

https://blueprints.launchpad.net/tripleo/+spec/tripleo-icehouse-modelling-infrastructure-sla-services
- needs more discussion to tease concerns out - in particular I want
us to get to
a problem statement that Nova core folk understand :)

https://blueprints.launchpad.net/tripleo/+spec/tripleo-icehouse-ha-production-configuration
- this is ready for folk to act on at any point

https://blueprints.launchpad.net/tripleo/+spec/tripleo-tuskar-deployment-scaling-topologies
- this is ready for folk to act on - but it's fairly shallow, since
most of the answer was 'discuss with heat' :)

https://blueprints.launchpad.net/tripleo/+spec/tripleo-icehouse-scaling-design
- this is ready for folk to act on; the main thing was gathering a
bunch of data so we can make good decisions from here on out

The stable branches decision has been documented in the wiki - all done.

Cheers,
Rob


Re: [openstack-dev] [Nova] Remove ratelimits from nova v3 api

2013-11-25 Thread Joe Gordon
On Mon, Nov 25, 2013 at 4:38 AM, David Hill david.h...@ubisoft.com wrote:

 Hi Alex,

Is removing the ratelimiting code part of the roadmap?  I fear that if
 it's removed now, it may have to be re-implemented in the future as the
 cloud deployments become more mainstream ... Without ratelimit, some evil
 users could DOS Openstack !


The issue is that the existing ratelimit code doesn't actually work as
expected.  For more details see:

https://review.openstack.org/#/c/34774/3/nova/api/openstack/compute/limits.py
https://review.openstack.org/#/c/34821/3


Dave


 
 From: Alex Xu [x...@linux.vnet.ibm.com]
 Sent: November 25, 2013 03:53
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Nova] Remove ratelimits from nova v3 api

 Hi, guys,

 Looks like ratelimits are not really useful. The reason was already pointed out
 by this patch:
 https://review.openstack.org/#/c/34821/ , Thanks Joe for point it out.

 So v3 API is a chance to cleanup those stuff. If there isn't anyone object
 it. I will send patch
 to get rid of the ratelimits code for the v3 API.

 Thanks
 Alex




Re: [openstack-dev] [Openstack-docs] What's Up Doc? Nov 20 2013

2013-11-25 Thread Anne Gentle
On Thu, Nov 21, 2013 at 1:53 AM, Michael Still mi...@stillhq.com wrote:

 On Thu, Nov 21, 2013 at 7:54 AM, Anne Gentle a...@openstack.org wrote:

  5. Doc tools updates:
 
  David Cramer tells me he'll cut one more release of the clouddocs-tools
  maven plugin then it'll be put into Gerrit workflow in the OpenStack
  organization. Thanks to David and Zaro for the hard work here over many
  months. I'd encourage Stackers to see how they can get involved in
  collaborating on our doc build tool once it's in our OpenStack systems.
 
  Some reps from infra and myself will meet with O'Reilly technical folks
 on
  Monday to finalize the workflow for the operations-guide repository.
 Once we
  have it fleshed out I'll communicate it to the openstack-docs list.

 One day in the infinite future when the gate is fixed
 https://review.openstack.org/#/c/56158/ and
 https://review.openstack.org/#/c/57332/ will merge. This is something
 Lana and I cooked up where the script which creates DocImpact bugs has
 been tweaked to be able to add subscribers.

 The basic idea is that you define a group of author email addresses
 (your team at work perhaps), and then you can assign people to
 subscribe to DocImpact bugs created by those people. This will allow
 Lana to keep a better track of the DocImpact bugs her co-workers are
 creating.


I really really like this concept and hope to spread the idea to more
teams. How can I help get the word out? A couple of ideas:

- Include in next What's Up Doc?
- Blog post about it that'll go to planet.openstack.org
- Tweet the heck outta those

Anything else?
Anne


 I'm hoping to trick other development teams into using this feature,
 and embedding tech writers into their teams. Frankly, if developers
 land features and they're never documented, that significantly devalues
 the feature. I'm hoping developers will see the value in having a tech
 writer work closely with them to get their features documented. Also,
 it will hopefully lead to more tech writers being hired by the various
 development teams.

 Lana and I will advertise this better once it's landed and we've used
 it long enough to ensure it works right.

 Michael

 --
 Rackspace Australia





-- 
Anne Gentle
annegen...@justwriteclick.com


Re: [openstack-dev] EC2 Filter

2013-11-25 Thread Joe Gordon
On Mon, Nov 25, 2013 at 6:22 AM, Sebastian Porombka 
porom...@uni-paderborn.de wrote:

 Hi Folks.

 I stumbled over the lack of EC2 API filter support in OpenStack and
 justinsb's (and the other people I forgot to mention here) attempt [1] to
 implement this against diablo.

 I'm highly interested in this feature and would like (to try) to implement
 this feature against the latest base. My problem is that I'm unable to find
 information about the rejection of this set of patches in 2011.



I am also not sure why this didn't get merged, but I don't see why we would
block this code today, especially if it comes with some tempests tests too.



 Can anybody help me?

 Greetings
   Sebastian

 [1] https://code.launchpad.net/~justin-fathomdb/nova/ec2-filters
 --
 Sebastian Porombka, M.Sc.
 Zentrum für Informations- und Medientechnologien (IMT)
 Universität Paderborn

 E-Mail: porom...@uni-paderborn.de
 Tel.: 05251/60-5999
 Fax: 05251/60-48-5999
 Raum: N5.314

 
 Q: Why is this email five sentences or less?
 A: http://five.sentenc.es

 Please consider the environment before printing this email.





[openstack-dev] [infra] Meeting Tuesday November 26th at 19:00 UTC

2013-11-25 Thread Elizabeth Krumbach Joseph
The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday November 26th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.


-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] [qa] Plan for failing successful tempest jobs when new ERRORs appear in logs

2013-11-25 Thread Joe Gordon
On Mon, Nov 18, 2013 at 2:58 PM, David Kranz dkr...@redhat.com wrote:

 So we are close to being able to start doing this. The current whitelist
 is here https://github.com/openstack/tempest/blob/master/etc/
 whitelist.yaml. I have a find-errors script that watches for successful
 builds and pulls out the non-whitelisted errors. For the past few weeks I
 have been doing the following:

 1. Run find-errors
 2. File bugs on any new errors
 3. Add to whitelist
 4. Repeat

 There are still some very flaky cases. I will do one more iteration of
 this. Right now this script https://github.com/openstack/
 tempest/blob/master/tools/check_logs.py dumps non-whitelisted errors to
 the console log but
 always returns success. The question now is how long should all jobs run
 with no new errors showing, before changing check_logs.py to fail if there
 are any new errors?


The sooner the better.



  -David




Re: [openstack-dev] [keystone] Service scoped role definition

2013-11-25 Thread David Chadwick
Hi Arvind

I have just added some comments to your blueprint page

regards

David


On 19/11/2013 00:01, Tiwari, Arvind wrote:
 Hi,
 
  
 
 Based on our discussion in design summit , I have redone the service_id
 binding with roles BP
 https://blueprints.launchpad.net/keystone/+spec/serviceid-binding-with-role-definition.
 I have added a new BP (link below) along with detailed use case to
 support this BP.
 
 https://blueprints.launchpad.net/keystone/+spec/service-scoped-role-definition
 
 Below etherpad link has some proposals for Role REST representation and
 pros and cons analysis
 
  
 
 https://etherpad.openstack.org/p/service-scoped-role-definition
 
  
 
 Please take look and let me know your thoughts.
 
  
 
 It would be awesome if we can discuss it in tomorrow’s meeting.
 
  
 
 Thanks,
 
 Arvind
 



Re: [openstack-dev] [qa] Plan for failing successful tempest jobs when new ERRORs appear in logs

2013-11-25 Thread Joe Gordon
On Mon, Nov 25, 2013 at 11:06 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Mon, Nov 18, 2013 at 2:58 PM, David Kranz dkr...@redhat.com wrote:

 So we are close to being able to start doing this. The current whitelist
 is here https://github.com/openstack/tempest/blob/master/etc/
 whitelist.yaml. I have a find-errors script that watches for
 successful builds and pulls out the non-whitelisted errors. For the past
 few weeks I have been doing the following:

 1. Run find-errors
 2. File bugs on any new errors
 3. Add to whitelist
 4. Repeat

 There are still some very flaky cases. I will do one more iteration of
 this. Right now this script https://github.com/openstack/
 tempest/blob/master/tools/check_logs.py dumps non-whitelisted errors to
 the console log but
 always returns success. The question now is how long should all jobs run
 with no new errors showing, before changing check_logs.py to fail if there
 are any new errors?


 The sooner the better.




Also this is awesome. I expect this to significantly help with keeping the
gate stable.




  -David






Re: [openstack-dev] [nova] python-novaclient: uses deprecated keyring.backend.$keyring

2013-11-25 Thread Joe Gordon
On Mon, Nov 25, 2013 at 2:48 AM, Thomas Goirand z...@debian.org wrote:

 On 11/25/2013 03:57 AM, Morgan Fainberg wrote:
  Hi Thomas,
 
  How pressing is this issue?

 It seems it's blocking some python package transition from Sid to
 Jessie, and impacting the Gnome guys, so it's quite annoying for Debian.
 So much that some wanted to NMU the novaclient package. At least, that's
 my understanding. If a solution could be found in the next following
 days/weeks, that'd be nice!

  I know there is work being done to unify
  token/auth implementation across the clients.  I want to have an idea of
  the heat here so we can look at addressing this directly in novaclient
  if it can't wait until the unification work to come down the line.

 IMO, it'd be best to act now, and go back to it later on when the
 unification thingy is done.


Thanks for including a patch in your debian bug, I went ahead and submitted
the patch along with unfreezing keyring in requirements:

https://review.openstack.org/#/c/58364/
https://review.openstack.org/#/c/58362/




 Cheers,

 Thomas





Re: [openstack-dev] How to stage client major releases in Gerrit?

2013-11-25 Thread Mark Washenberger
On Fri, Nov 22, 2013 at 6:28 PM, Monty Taylor mord...@inaugust.com wrote:



 On 11/22/2013 06:55 PM, Mark Washenberger wrote:
 
 
 
  On Fri, Nov 22, 2013 at 1:13 PM, Robert Collins
  robe...@robertcollins.net mailto:robe...@robertcollins.net wrote:
 
  On 22 November 2013 22:31, Thierry Carrez thie...@openstack.org
  mailto:thie...@openstack.org wrote:
   Robert Collins wrote:
   I don't understand why branches would be needed here *if* the
  breaking
   changes don't impact any supported release of OpenStack.
  
   Right -- the trick is what does supported mean in that case.
  
   When the client libraries were first established as separate
   deliverables, they came up with a blanket statement that the latest
   version could always be used with *any* version of openstack you
 may
   have. The idea being, if some public cloud was still stuck in
  pre-diablo
   times, you could still use the same library to address both this
  public
   cloud and the one which was 2 weeks behind Havana HEAD.
 
  Huh. There are two different directions we use the client in.
 
  Client install - cloud API (of arbitrary version A)
 
  Server install (of arbitrary version B) using the Python library -
  cloud API (version B)
 
  From a gating perspective I think we want to check
  that:
   - we can use the client against some set of cloud versions A
   - that some set of version B where servers running cloud version B
  can use the client against cloud version B
 
  But today we don't test against ancient versions of A or B.
 
  If we were to add tests for such scenarios, I'd strongly argue that
 we
  only add then for case A. Where we are using the client lib in an
  installed cloud, we don't need to test that it can still be used
  against pre-diablo etc: such installed clouds can keep using the old
  client lib.
 
 
  I'm afraid that if a critical bug is found in the old client lib, the
  current path for fixing it is to ask people to update to the latest
  client version, even internally to their old cloud. So would that cause
  a branch for backporting fixes?

 The plan is that the current client libs should always be installable.
 So we would not (and never have) make a branch for backporting fixes.


Yes. I think wires are a bit crossed here, but you and I agree. It seemed
to me that Robert was suggesting that old clouds can internally keep using
old versions of client libs. Which seems wrong since we don't do backports,
so old clouds using old libs would never get security updates.



  FWIW, I don't think the changes glanceclient needs in v1 will break the
  'B' case above. But it does raise a question--if they did, would it be
  sufficient to backport a change to adapt old supported stable B versions
  of, say, Nova, to work with the v1 client? Honestly asking, a big ol' NO
  is okay.

 I'm not sure I follow all the pronouns. Could you re-state this again, I
 think I know what you're asking, but I'd like to be sure.


Sorry for being so vague. I'll try to be specific.

Branch nova/stable/folsom depends on python-glanceclient/master. Suppose we
find that nova/stable/folsom testing is broken when we stage (hopefully
before merging) the breaking changes that are part of the
python-glanceclient v1.0.0 release. Would it be acceptable in this case to
have a compatibility patch to nova/stable/folsom? Or will the only option
be to modify the python-glanceclient patch to maintain compatibility?


Thanks!


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-25 Thread Zane Bitter

Top-posting *and* replying to myself today :D

I realised that I could have implemented this in less time than I've 
spent arguing about it already, so I did:


https://review.openstack.org/#/c/58357/
https://review.openstack.org/#/c/58358/

cheers,
Zane.


On 19/11/13 23:27, Zane Bitter wrote:

On 19/11/13 19:14, Christopher Armstrong wrote:

On Mon, Nov 18, 2013 at 5:57 AM, Zane Bitter zbit...@redhat.com wrote:

On 16/11/13 11:15, Angus Salkeld wrote:

On 15/11/13 08:46 -0600, Christopher Armstrong wrote:

On Fri, Nov 15, 2013 at 3:57 AM, Zane Bitter zbit...@redhat.com wrote:

On 15/11/13 02:48, Christopher Armstrong wrote:

On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld asalk...@redhat.com wrote:

 On 14/11/13 10:19 -0600, Christopher Armstrong wrote:

 http://docs.heatautoscale.apiary.io/

 I've thrown together a rough sketch of the
proposed API for
 autoscaling.
 It's written in API-Blueprint format (which
is a simple subset
 of Markdown)
 and provides schemas for inputs and outputs
using JSON-Schema.
 The source
 document is currently at
 https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
 


 Things we still need to figure out:

 - how to scope projects/domains. put them
in the URL? get them
 from the
 token?
 - how webhooks are done (though this
shouldn't affect the API
 too much;
 they're basically just opaque)

 Please read and comment :)


 Hi Christopher

 In the group create object you have 'resources'.
 Can you explain what you expect in there? I thought we talked at
 summit about having a unit of scaling as a nested stack.

 The thinking here was:
 - this makes the new config stuff easier to scale (config gets applied
   per scaling stack)

 - you can potentially place notification resources in the scaling
   stack (think marconi message resource - on-create it sends a
   message)

 - no need for a launchconfig
 - you can place a LoadbalancerMember resource in the scaling stack
   that triggers the loadbalancer to add/remove it from the lb.


 I guess what I am saying is I'd expect an api
to a nested stack.


Well, what I'm thinking now is that instead of
resources (a
mapping of
resources), just have resource, which can be the
template definition
for a single resource. This would then allow the
user to specify a
Stack
resource if they want to provide multiple resources.
How does that
sound?


My thought was this (digging into the implementation
here a bit):

- Basically, the autoscaling code works as it does now:
creates a
template
containing OS::Nova::Server resources (changed from
AWS::EC2::Instance),
with the properties obtained from the LaunchConfig, and
creates a
stack in
Heat.
- LaunchConfig can now contain any properties you like
(I'm not 100%
sure
   

[openstack-dev] [Nova] No meeting this week

2013-11-25 Thread Russell Bryant
Greetings,

Since Thursday this week is a holiday in the US, we will skip the Nova
meeting.  We'll meet again next week.

Thanks,

-- 
Russell Bryant



Re: [openstack-dev] Unwedging the gate

2013-11-25 Thread Sean Dague
On 11/25/2013 11:54 AM, Clint Byrum wrote:
 Excerpts from Robert Collins's message of 2013-11-25 01:30:11 -0800:
 On 25 November 2013 22:23, Clint Byrum cl...@fewbar.com wrote:

 I do wonder if we would be able to commit enough resources to just run
 two copies of the gate in parallel each time and require both to pass.
 Doubling the odds* that we will catch an intermittent failure seems like
 something that might be worth doubling the compute resources used by
 the gate.

 *I suck at math. Probably isn't doubling the odds. Sounds
 good though. ;)

 We already run the code paths that were breaking 8 or more times.
 Hundreds of times in fact for some :(.

 The odds of a broken path triggering after it gets through, assuming
 each time we exercise it is equally likely to show it, are roughly
 3/times-exercised-in-landing. E.g. if we run a code path 300 times and
 it doesn't show up, then it's quite possible that it has a 1%
 incidence rate.
 
 We don't run through 300 times of the same circumstances. We may pass
 through individual code paths that have a race condition 300 times, but
 the circumstances are probably only right for failure in 1 or 2 of them.
 
 1% overall then, doesn't matter so much as how often does it fail when
 the conditions for failure are optimal. If we can increase the occurrences
 of the most likely failure conditions, then we do have a better chance
 of catching the failure.

Right, the math of statistics is against us in brute forcing 1% fails to
be blocked in the gate:

Even running the whole test suite 20 times means those will pass
through 80% of the time unseen:

0.99 ^ 20 = 0.817 - remember we need to do the exponent on the success
rate, as what we are actually trying to figure out is odds that this
will succeed 20 times
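
To make the arithmetic concrete, a quick sketch (assuming every run fails
independently with the same probability):

import math

def detection_probability(per_run_failure_rate, runs):
    # Chance that at least one of `runs` independent runs trips over the bug.
    return 1 - (1 - per_run_failure_rate) ** runs

print(detection_probability(0.01, 20))                  # ~0.18, i.e. ~82% slip through
print(math.ceil(math.log(0.05) / math.log(1 - 0.01)))   # ~299 runs to catch it with 95% confidence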

So what we actually need to do is use the 1% fails as realizing some
kind of race exists at all, then actively try to create a scenario where
that kind of race happens *a lot* to nail it down, and ensure it never
comes back.

-Sean

-- 
Sean Dague
http://dague.net





[openstack-dev] [QA] no meeting this week

2013-11-25 Thread Sean Dague
QA meeting this week would be on the Thanksgiving Holiday in the US, and
as it's still not at an Asia friendly time, I suspect attendance would
be *very* low.

Canceling this week, see you all at the December meeting.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] Unwedging the gate

2013-11-25 Thread Joe Gordon
On Sun, Nov 24, 2013 at 10:48 PM, Robert Collins
robe...@robertcollins.net wrote:

 On 25 November 2013 19:25, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
  On Sun, Nov 24, 2013 at 9:58 PM, Robert Collins 
 robe...@robertcollins.net
  wrote:
 
  I have a proposal - I think we should mark all recheck bugs critical,
  and the respective project PTLs should actively shop around amongst
  their contributors to get them fixed before other work: we should
  drive the known set of nondeterministic issues down to 0 and keep it
  there.
 
 
 
  Yes! In fact we are already working towards that. See
 
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/020048.html

 Indeed I saw that thread - I think I'm proposing something slightly
 different, or perhaps 'gate blocking' needs clearing up. Which is -
 that once we have sufficient evidence to believe there is a
 nondeterministic bug in trunk, whether or not the gate is obviously
 suffering, we should consider it critical immediately. I don't think
 we need 24h action on such bugs at that stage - gate blocking zomg
 issues obviously do though!


I see what you're saying. That sounds like a good idea: all gate bugs are
critical, but only "zomg the gate is bad" issues get 24h action.



 The goal here would be to drive the steady state of 'recheck needed'
 so low that most people never encounter it. And break the social
 pattern that has been building up.


Yes, agreed.



 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud




Re: [openstack-dev] [qa] Plan for failing successful tempest jobs when new ERRORs appear in logs

2013-11-25 Thread Matthew Treinish
On Mon, Nov 25, 2013 at 11:06:36AM -0800, Joe Gordon wrote:
 On Mon, Nov 18, 2013 at 2:58 PM, David Kranz dkr...@redhat.com wrote:
 
  So we are close to being able to start doing this. The current whitelist
  is here https://github.com/openstack/tempest/blob/master/etc/
  whitelist.yaml. I have a find-errors script that watches for successful
  builds and pulls out the non-whitelisted errors. For the past few weeks I
  have been doing the following:
 
  1. Run find-errors
  2. File bugs on any new errors
  3. Add to whitelist
  4. Repeat
 
  There are still some very flaky cases. I will do one more iteration of
  this. Right now this script https://github.com/openstack/
  tempest/blob/master/tools/check_logs.py dumps non-whitelisted errors to
  the console log but
  always returns success. The question now is how long should all jobs run
  with no new errors showing, before changing check_logs.py to fail if there
  are any new errors?
 
 
 The sooner the better.

+1

I would just turn it on today. This is the week to do it because of the holiday.
Based on my experience with flipping the switch for parallel the only way to 
iron
out all of the kinks is to make it gating so people will notice when something
fails. There will be some pain at first but the end result makes it worth it.
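
For anyone following along, a minimal sketch of the kind of whitelist check
being discussed - this is not the actual check_logs.py code, and the file
name and pattern below are invented:

import re

# Illustrative whitelist: per-log-file regexes for ERROR lines that are known
# and accepted. The real list lives in tempest's etc/whitelist.yaml.
WHITELIST = {
    "screen-n-cpu.txt": [re.compile(r"Instance .* disappeared during build")],
}

def new_errors(log_name, log_lines):
    # Return ERROR lines that are not covered by the whitelist.
    allowed = WHITELIST.get(log_name, [])
    return [line for line in log_lines
            if " ERROR " in line
            and not any(p.search(line) for p in allowed)]

# Making the job gate on this is then just a matter of exiting non-zero
# whenever new_errors() returns anything.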

-Matt Treinish



Re: [openstack-dev] [Trove][Savanna][Murano][Heat] Unified Agent proposal discussion at Summit

2013-11-25 Thread Sergey Lukjanov
Hi Mike,

thank you for comments.

Could you, please, add a section about Heat requirements for guest agents to 
the https://etherpad.openstack.org/p/UnifiedAgents?

BTW The initial idea was to have some kind of skeleton for building guest 
agents with pluggable transports (message queue, http, meta server, etc.) and 
pluggable command handlers to solve all possible requirements from all projects 
interested in using guest agents.

Thank you.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Nov 25, 2013, at 10:34 PM, Mike Spreitzer mspre...@us.ibm.com wrote:

 I am sorry I missed that session, but am interested in the topic.  This is 
 very relevant to Heat, where we are working on software configuration in 
 general.  I desire that Heat's ability to configure software will meet the 
 needs of Trove, Savanna, and Murano. 
 
 At IBM we worked several Hadoop examples, with some similar to (but distinct 
 from) Heat for software configuration and also something doing holistic 
 infrastructure scheduling (so that, e.g., we could get locally attached 
 storage).  The software was described using an additional concept for 
 software components, and we expressed the automation as chef roles.  For 
 coordination between VMs we used Ruby metaprogramming to intercept access to 
 certain members of the node[][] arrays, replacing plain array access with 
 distributed reads and writes to shared variables (which can not be read until 
 after they are written, thus providing synchronization as well as data 
 dependency).  We used ZooKeeper to implement those shared variables, but that 
 is just one possible implementation approach; I think wait 
 condition/handle/signal makes more sense as the one to use in OpenStack. 
 
 The current thinking in Heat is to make a generic agent based on 
 os-collect-config; it could be specialized to Heat by a hook.  The agent 
 would poll for stuff to do and then do it; in the chef case, stuff could 
 be, e.g., a role in a cookbook.  I think this could meet the requirements 
 listed on https://etherpad.openstack.org/p/UnifiedAgents 
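
As a rough illustration of the poll-then-act loop described above (this is
not os-collect-config's real interface; the fetcher and handler names are
invented):

import time

def run_agent(fetch_pending_work, handlers, interval=30):
    # fetch_pending_work() -> list of dicts like {"type": ..., "payload": ...}
    # handlers: mapping of work type -> callable(payload), e.g. "apply_chef_role"
    while True:
        for item in fetch_pending_work():
            handler = handlers.get(item["type"])
            if handler:
                handler(item["payload"])
        time.sleep(interval)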
 
 Regards, 
 Mike



Re: [openstack-dev] [Ironic][Cinder] Attaching Cinder volumes to baremetal instance

2013-11-25 Thread Devananda van der Veen
No -- attaching volumes is not implemented in Ironic yet. I think it would be
great if someone wants to work on it. There was some discussion at the
summit about cinder support, in particular getting boot-from-volume to work
with Ironic, but no one has come forward since then with code or blueprints.

-Deva


On Fri, Nov 22, 2013 at 11:14 AM, Rohan Kanade openst...@rohankanade.com wrote:


 Hey guys, just starting out with Ironic, had a silly question.

 Can we attach bootable or non bootable plain cinder volumes during either
 provisioning of the baremetal instance or after provisioning the baremetal
 instance?

 I have seen a attach_volume method in the LibvirtVolumeDriver of the
 nova baremetal driver. So got curious.

 Thanks,
 Rohan Kanade





[openstack-dev] [Keystone] Keystone client support for Py3K

2013-11-25 Thread Flavio Percoco

Greetings,

I noticed there's no Py3K gate for keystoneclient, so I just wanted to
know what's the state of $subject and if there's any milestone for it.

We're working on keeping Marconi's client Py3K compliant and this is,
unfortunately, the only bit that's not supported. As for now, we
disable keystone auth when running under Py3K.

Cheers,
FF

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Tempest] Drop python 2.6 support

2013-11-25 Thread Matt Riedemann



On Monday, November 25, 2013 7:35:51 AM, Zhi Kun Liu wrote:

Hi all,

I saw that Tempest will drop python 2.6 support in design summit
https://etherpad.openstack.org/p/icehouse-summit-qa-parallel.


Drop tempest python 2.6 support:Remove all nose hacks in the code


Delete nose, use unittest2 with testr/testtools and everything
*should* just work (tm)



Does that mean Tempest could not run on python 2.6 in the future?

--
Regards,
Zhi Kun Liu




Well so if you're running a single-node setup of OpenStack on a VM on 
top of RHEL 6 and running Tempest from there, yeah, this is an 
inconvenience, but it's a pretty simple fix, right?  I just run my 
OpenStack RHEL 6 VM and have an Ubuntu 12.04 or Fedora 19 or whatever 
distro-that-supports-py27 I want running Tempest against it.  Am I 
missing something?


FWIW, trying to keep up with the changes in Tempest when you're running 
on python 2.6 is no fun, especially with how tests are skipped 
(skipException causes a test failure if you don't have a special 
environment variable set).  Plus you don't get parallel execution of 
the tests.


So I agree with the approach even though it's going to hurt me in the 
short-term.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Keystone] Keystone client support for Py3K

2013-11-25 Thread Chuck Short
Hi,

There is a gate for python-keystoneclient for py33. I have been trying to
port it over to python3 but it's blocked right now on httpretty not being
python3 compatible.

Regards
chuck


On Mon, Nov 25, 2013 at 3:25 PM, Flavio Percoco fla...@redhat.com wrote:

 Greetings,

 I noticed there's no Py3K gate for keystoneclient, so I just wanted to
 know what's the state of $subject and if there's any milestone for it.

 We're working on keeping Marconi's client Py3K compliant and this is,
 unfortunately, the only bit that's not supported. As for now, we
 disable keystone auth when running under Py3K.

 Cheers,
 FF

 --
 @flaper87
 Flavio Percoco





Re: [openstack-dev] [Ironic][Ceilometer] get IPMI data for ceilometer

2013-11-25 Thread Devananda van der Veen
Hi!

Very good questions. I think most of them are directed towards the
Ceilometer team, but I have answered a few bits inline.


On Mon, Nov 25, 2013 at 7:24 AM, wanghaomeng wanghaom...@163.com wrote:


 Hello all:

 Basically, I understand the solution is - Our Ironic will implement an
 IPMI driver


We will need to add a new interface -- for example,
ironic.drivers.base.BaseDriver:sensor and the corresponding
ironic.drivers.base.SensorInterface class, then implement this interface as
ironic.drivers.modules.ipmitool:IPMISensor

We also need to define the methods this interface supports and what the
return data type is for each method. I imagine it may be something like:
- SensorInterface.list_available_sensors(node) returns a list of sensor
names for that node
- SensorInterface.get_measurements(node, list_of_sensor_names) returns a
dict of dicts, eg, { 'sensor_1': {'key': 'value'}, 'sensor_2': ...}
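
To make that shape concrete, a rough sketch (illustrative only, not merged
Ironic code):

class SensorInterface(object):
    """Sketch of the proposed driver interface; names follow the mail above."""

    def list_available_sensors(self, node):
        # Return a list of sensor names for the node, e.g. ['CPU Temp', 'FAN 1'].
        raise NotImplementedError()

    def get_measurements(self, node, sensor_names):
        # Return a dict of dicts keyed by sensor name, e.g.
        # {'CPU Temp': {'value': '48', 'unit': 'C'}}.
        raise NotImplementedError()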


 (extendable framework for more drivers) to collect hardware sensor
 data (CPU temp, fan speed, volts, etc) via IPMI protocol from hardware
 server node, and emit the AMQP message to Ceilometer Collector,
 Ceilometer have the framework to handle the valid sample message and save
 to the database for data retrieving by consumer.

 Now, what do you think - should we clearly define the *interface and data
 model* specifications between Ironic and Ceilometer to enable IPMI data
 collecting, so that our two teams can start the coding together?


I think this is just a matter of understanding Ceilometer's API so that
Ironic can emit messages in the correct format. You've got many good
questions for the Ceilometer team on this below.



 And I still have some concern with our interface and data model as below,
 the spec need to be discussed and finalized:

 1. What are the mandatory Ceilometer sample data attributes, such as
 instance_id/tenant_id/user_id/resource_id? If they are not optional, where
 are these data populated - on the Ironic or the Ceilometer side?

   *name/type/unit/volume/timestamp* - basic sample property, can be
 populated from Ironic side as data source
   *user_id/project_id/resource_id* - Ironic or Ceilometer populate these
 fields??
   *resource_metadata - this is for Ceilometer metadata query, Ironic know
 nothing for such resource metadata I think*
   *source *- can we hard-code as 'hardware' as a source identifier?


Ironic can cache the user_id and project_id of the instance. These will not
be present for unprovisioned nodes.

I'm not sure what resource_id is in this context, perhaps the nova
instance_uuid? If so, Ironic has that as well.
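
Purely for illustration, one possible shape for an emitted sample using the
fields listed in question 1 - the meter name and values below are made up,
and this is not a confirmed Ceilometer payload:

sample = {
    "name": "hardware.ipmi.fan_speed",
    "type": "gauge",
    "unit": "rpm",
    "volume": 4500,
    "timestamp": "2013-11-25T12:00:00Z",
    "user_id": None,          # not present for unprovisioned nodes
    "project_id": None,
    "resource_id": "<ironic-node-uuid>",
    "resource_metadata": {"sensor": "FAN 1"},
    "source": "hardware",
}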


 2. I'm not sure if Ceilometer only accepts *signed messages*; if that is the
 case, how does Ironic get the message trust for Ceilometer and send a valid
 message that will be accepted by the Ceilometer Collector?

 3. What is the Ceilometer sample data structure, and what is the min data
 item set for the IPMI message to be emitted to the Collector?
   *name/type/unit/volume/timestamp/source* - is this the min data item set?

 4. Should the detailed data model be defined for our IPMI data now? If so,
 what is the scope of our first version - how many IPMI data types should we
 support? Here is an IPMI data sample list; I think we can support these as a
 min set.
   *Temperature - System Temp/CPU Temp*
 *  FAN Speed in rpm - FAN 1/2/3/4/A*
 *  Volts - Vcore/3.3VCC/12V/VDIMM/5VCC/-12V/VBAT/VSB/AVCC*


I think that's a good starting list. We can add more later.



 5. More specs - such as naming conventions, common constant reference
 definitions ...

 These are just a draft, not the spec, correct me if I am wrong
 understanding and add the missing aspects, we can discuss these interface
 and data model clearly I think.


 --
 *Haomeng*
 *Thanks:)*



Cheers,
Devananda




 At 2013-11-21 16:08:00,Ladislav Smola lsm...@redhat.com wrote:

 Responses inline.

 On 11/20/2013 07:14 PM, Devananda van der Veen wrote:

 Responses inline.

  On Wed, Nov 20, 2013 at 2:19 AM, Ladislav Smola lsm...@redhat.com wrote:

 Ok, I'll try to summarize what will be done in the near future for
 Undercloud monitoring.

 1. There will be Central agent running on the same host(hosts once the
 central agent horizontal scaling is finished) as Ironic


  Ironic is meant to be run with 1 conductor service. By i-2 milestone we
 should be able to do this, and running at least 2 conductors will be
 recommended. When will Ceilometer be able to run with multiple agents?


 Here it is described and tracked:
 https://blueprints.launchpad.net/ceilometer/+spec/central-agent-improvement


  On a side note, it is a bit confusing to call something a central
 agent if it is meant to be horizontally scaled. The ironic-conductor
 service has been designed to scale out in a similar way to nova-conductor;
 that is, there may be many of them in an AZ. I'm not sure that there is a
 need for Ceilometer's agent to scale in exactly a 1:1 relationship with
 ironic-conductor?


 Yeah we have 

Re: [openstack-dev] [nova] Should the server create API return 404 errors?

2013-11-25 Thread Shawn Hartsock
+1 on this sentiment.

From the perspective of the client, I typically imagine a web browser. A 404 
means that a thing was not found, and this **implies** that I might specify a 
different thing so that I do *not* get a 404. But in many of these 
circumstances the 404's root cause is a server-side misconfiguration or some 
other fault that I'd need a different access mechanism to get at. That's 
decidedly *not* something I can do anything about from my hypothetical web 
browser.

# Shawn Hartsock


- Original Message -
 From: Matt Riedemann mrie...@linux.vnet.ibm.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, November 25, 2013 3:57:40 PM
 Subject: [openstack-dev] [nova] Should the server create API return 404   
 errors?
 
 Aaron Rosen is working a patch [1] to handle a NetworkNotFound exception
 in the server create API.  For the V2 API this will return a 400 error.
   For the V3 API this will return a 404 because of a V3-specific patch
 [2].  The API docs list 404 as a valid response code, but is it
 intuitive for a POST request like this?
 
 To muddy the waters more, ImageNotFound, FlavorNotFound and
 KeypairNotFound are translated to 400 errors in both the V2 and V3 APIs.
 
 So why should the network-specific NotFound exceptions be a 404 but the
 others aren't?
 
  From a programmatic perspective, I should validate that my request
 parameters are valid before calling the API in order to avoid a 404.
  From a user's perspective, a 404 seems strange - does it mean that the
 server I'm trying to create isn't found?  No, that's counter-intuitive.
 
 Ultimately I think we should be consistent, so if 404 is OK, then I
 think the V3 API should make ImageNotFound, FlavorNotFound and
 KeypairNotFound return a 404 also.
 
 Thoughts?
 
  [1] https://review.openstack.org/#/c/54202/
  [2] https://review.openstack.org/#/c/41863/
 
 --
 
 Thanks,
 
 Matt Riedemann
 
 
 



Re: [openstack-dev] [nova] python-novaclient: uses deprecated keyring.backend.$keyring

2013-11-25 Thread Melanie Witt
On Nov 24, 2013, at 7:37 AM, Thomas Goirand z...@debian.org wrote:

 Someone sent a bug report against the python-novaclient package:
 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=728470
 
 Could someone take care of this?

FYI to the thread, this patch is now up for this issue: 
https://review.openstack.org/#/c/58364


Re: [openstack-dev] [Neutron] Race condition between DB layer and plugin back-end implementation

2013-11-25 Thread Isaku Yamahata
On Thu, Nov 21, 2013 at 12:34:33PM -0800,
Gary Duan gd...@varmour.com wrote:

 Advanced service plugins don't have two-step transition today. IMO, If
 vendor plugins/drivers don't maintain their own databases for these
 services, it might not be urgent to add these steps in the plugin.

I agree. That holds for the OVS/linux bridge plugin (or mechanism driver), in
theory. Unfortunately, most controller-based plugins don't: they update the
neutron db and delegate the requests to the controller. This is a common
pattern - no polling or confirmation - and it is very fragile. For example,
restarting neutron-server while requests are being processed easily causes
inconsistency between the neutron db and the controller side, and errors
during delegation of requests tend to be ignored.

Do we want to address this to some extent in the framework (i.e. the ML2
plugin), or just leave it and declare it the problem of the plugin/mechanism
driver?
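
For reference, a minimal sketch of the two-step transition pattern being
referred to (function and status names are illustrative, not actual
ML2/plugin code):

def create_resource(db, backend, resource):
    record = db.create(resource, status="PENDING_CREATE")  # step 1: record intent
    try:
        backend.create(record)                             # delegate to the controller
    except Exception:
        db.update(record["id"], status="ERROR")            # make the failure visible
        raise
    db.update(record["id"], status="ACTIVE")               # step 2: confirm
    return record

# A neutron-server restart between the two steps still leaves PENDING_CREATE
# records behind, which is exactly the inconsistency described above.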


 How to
 make sure database and back-end implementation in sync need more thought.
 As configuring backend device can be an a-sync process, rollback database
 tables can be cumbersome.

-- 
Isaku Yamahata isaku.yamah...@gmail.com



Re: [openstack-dev] [Nova] Hypervisor CI requirement and deprecation plan

2013-11-25 Thread Matt Riedemann



On 11/15/2013 9:28 AM, Dan Smith wrote:

Hi all,

As you know, Nova adopted a plan to require CI testing for all our
in-tree hypervisors by the Icehouse release. At the summit last week, we
determined the actual plan for deprecating non-compliant drivers. I put
together a page detailing the specific requirements we're putting in
place as well as a plan and timeline for how the deprecation process
will proceed:

https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan

I also listed the various drivers and whether we've heard any concrete
plans from them. Driver owners should feel free to add details to that
and correct any of the statements if incorrect.

Thanks!

--Dan




I'll play devil's advocate here and ask this question before someone 
else does.  I'm assuming that the requirement of a 'full' tempest run 
means running this [1].  Is that correct?  It's just confusing sometimes 
because there are other things in Tempest that aren't in the 'full' run, 
like stress tests.


Assuming that's what 'full' means, it's running API, CLI, third party 
(boto), and scenario tests.  Does it make sense to require a nova virt 
driver's CI to run API tests for keystone, heat and swift?  Or couldn't 
the nova virt driver CI be scoped down to just the compute API tests? 
The argument against that is probably that the network/image/volume 
tests may create instances using nova to do their API testing also.  The 
same would apply for the CLI tests since those are broken down by 
service, i.e. why would I need to run keystone and ceilometer CLI tests 
for a nova virt driver?


If nothing else, I think we could firm up the wording on the wiki a bit 
around the requirements and what that means for scope.


[1] https://github.com/openstack/tempest/blob/master/tox.ini#L33

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Keystone][Marconi][Oslo] Discoverable home document for APIs (Was: Re: [Nova][Glance] Support of v1 and v2 glance APIs in Nova)

2013-11-25 Thread Jamie Lennox
To most of your questions I don't know the answer, as the format was in place 
before I started with the project. I know that it is similar to (though not 
exactly the same as) nova's, but not where it is documented (as the docs are 
version independent).

I can tell you it looks like: 

{
  versions: {
values: [
  {
status: stable,
updated: 2013-03-06T00:00:00Z,
media-types: [
  {
base: application\/json,
type: application\/vnd.openstack.identity-v3+json
  },
  {
base: application\/xml,
type: application\/vnd.openstack.identity-v3+xml
  }
],
id: v3.0,
links: [
  {
href: http:\/\/localhost:5000\/v3\/,
rel: self
  }
]
  },
  {
status: stable,
updated: 2013-03-06T00:00:00Z,
media-types: [
  {
base: application\/json,
type: application\/vnd.openstack.identity-v2.0+json
  },
  {
base: application\/xml,
type: application\/vnd.openstack.identity-v2.0+xml
  }
],
id: v2.0,
links: [
  {
href: http:\/\/localhost:5000\/v2.0\/,
rel: self
  },
  {
href: 
http:\/\/docs.openstack.org\/api\/openstack-identity-service\/2.0\/content\/,
type: text\/html,
rel: describedby
  },
  {
href: 
http:\/\/docs.openstack.org\/api\/openstack-identity-service\/2.0\/identity-dev-guide-2.0.pdf,
type: application\/pdf,
rel: describedby
  }
]
  }
]
  }
}
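
As a rough sketch (not python-keystoneclient code) of the follow-your-nose
flow against a document like the one above - the URL is illustrative:

import requests

def discover_endpoint(base_url="http://localhost:5000/"):
    versions = requests.get(base_url).json()["versions"]["values"]
    stable = [v for v in versions if v["status"] == "stable"]
    best = max(stable, key=lambda v: v["id"])   # naive: lexical compare of "v2.0" vs "v3.0"
    self_link = next(l["href"] for l in best["links"] if l["rel"] == "self")
    return best["id"], self_link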

- Original Message -
 From: Flavio Percoco fla...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, 25 November, 2013 6:41:42 PM
 Subject: [openstack-dev] [Keystone][Marconi][Oslo] Discoverable home document 
 for APIs (Was: Re: [Nova][Glance]
 Support of v1 and v2 glance APIs in Nova)
 
 On 25/11/13 09:28 +1000, Jamie Lennox wrote:
 So the way we have this in keystone at least is that querying GET / will
 return all available API versions and querying /v2.0 for example is a
 similar result with just the v2 endpoint. So you can hard pin a version
 by using the versioned URL.
 
 I spoke to somebody the other day about the discovery process in
 services. The long term goal should be that the service catalog contains
 unversioned endpoints and that all clients should do discovery. For
 keystone the review has been underway for a while now:
 https://review.openstack.org/#/c/38414/ the basics of this should be
 able to be moved into OSLO for other projects if required.
 
 Did you guys create your own 'home document' language? or did you base
 it on some existing format? Is it documented somewhere? IIRC, there's
 a thread where part of this was discussed, it was related to horizon.
 
 I'm curious to know what you guys did and if you knew about
 JSON-Home[0] when you started working on this.
 
 We used json-home for Marconi v1 and we'd want the client to work in a
 'follow your nose' way. Since, I'd prefer OpenStack modules to use the
 same language for this, I'm curious to know why - if so - you
 created your own spec, what are the benefits and if it's documented
 somewhere.
 
 Cheers,
 FF
 
 [0] http://tools.ietf.org/html/draft-nottingham-json-home-02
 
 --
 @flaper87
 Flavio Percoco
 
 



Re: [openstack-dev] [Nova] Hypervisor CI requirement and deprecation plan

2013-11-25 Thread Russell Bryant
On 11/25/2013 05:19 PM, Matt Riedemann wrote:
 I'll play devil's advocate here and ask this question before someone
 else does.  I'm assuming that the requirement of a 'full' tempest run
 means running this [1].  Is that correct?  It's just confusing sometimes
 because there are other things in Tempest that aren't in the 'full' run,
 like stress tests.
 
 Assuming that's what 'full' means, it's running API, CLI, third party
 (boto), and scenario tests.  Does it make sense to require a nova virt
 driver's CI to run API tests for keystone, heat and swift?  Or couldn't
 the nova virt driver CI be scoped down to just the compute API tests?
 The argument against that is probably that the network/image/volume
 tests may create instances using nova to do their API testing also.  The
 same would apply for the CLI tests since those are broken down by
 service, i.e. why would I need to run keystone and ceilometer CLI tests
 for a nova virt driver?
 
 If nothing else, I think we could firm up the wording on the wiki a bit
 around the requirements and what that means for scope.
 
 [1] https://github.com/openstack/tempest/blob/master/tox.ini#L33
 

I think the short answer is, whatever we're running against all Nova
changes in the gate.

I expect that for some drivers, a more specific configuration is going
to be needed to exclude tests for features not implemented in that
driver.  That's fine.

Soon we also need to start solidifying criteria for what features *must*
be implemented in a driver.  I think we've let some drivers in with far
too many features not supported.  That's a separate issue from the CI
requirement, though.

-- 
Russell Bryant



Re: [openstack-dev] [Keystone][Marconi][Oslo] Discoverable home document for APIs (Was: Re: [Nova][Glance] Support of v1 and v2 glance APIs in Nova)

2013-11-25 Thread Dolph Mathews
On Mon, Nov 25, 2013 at 2:41 AM, Flavio Percoco fla...@redhat.com wrote:

 On 25/11/13 09:28 +1000, Jamie Lennox wrote:

 So the way we have this in keystone at least is that querying GET / will
 return all available API versions and querying /v2.0 for example is a
 similar result with just the v2 endpoint. So you can hard pin a version
 by using the versioned URL.

 I spoke to somebody the other day about the discovery process in
 services. The long term goal should be that the service catalog contains
 unversioned endpoints and that all clients should do discovery. For
 keystone the review has been underway for a while now:
 https://review.openstack.org/#/c/38414/ the basics of this should be
 able to be moved into OSLO for other projects if required.


 Did you guys create your own 'home document' language? or did you base
 it on some existing format? Is it documented somewhere? IIRC, there's
 a thread where part of this was discussed, it was related to horizon.

 I'm curious to know what you guys did and if you knew about
 JSON-Home[0] when you started working on this.


It looks like our multiple choice response might predate Nottingham's
proposal, but not by much. In keystone, it's been stable since I joined the
project, midway through the diablo cycle (summer 2011). I don't know any
more history than that, but I've CC'd Jorge Williams, who probably knows.

I really like Nottingham's approach of adding relational links from the
base endpoint, I've been thinking about doing the same for keystone for
quite a while.



 We used json-home for Marconi v1 and we'd want the client to work in a
 'follow your nose' way. Since, I'd prefer OpenStack modules to use the
 same language for this, I'm curious to know why - if so - you
 created your own spec, what are the benefits and if it's documented
 somewhere.


Then why didn't Marconi follow the lead of one of the other projects? ;)

I completely agree though - standardized version discovery across the
ecosystem would be fantastic.



 Cheers,
 FF

 [0] http://tools.ietf.org/html/draft-nottingham-json-home-02

 --
 @flaper87
 Flavio Percoco





-- 

-Dolph


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-25 Thread Tim Schnell
Hi Steve,

As one of the UI developers driving the requirements behind these new
blueprints I wanted to take a moment to assure you and the rest of the
Openstack community that the primary purpose of pushing these requirements
out to the community is to help improve the User Experience for Heat for
everyone. Every major UI feature that I have implemented for Heat has been
included in Horizon, see the Heat Topology, and these requirements should
improve the value of Heat, regardless of the UI.


Stack/template metadata
We have a fundamental need to have the ability to reference some
additional metadata about a template that Heat does not care about. There
are many possible use cases for this need but the primary point is that we
need a place in the template where we can iterate on the schema of the
metadata without going through a lengthy design review. As far as I know,
we are the only team attempting to actually productize Heat at the moment
and this means that we are encountering requirements and requests that do
not affect Heat directly but simply require Heat to allow a little wiggle
room to flesh out a great user experience.

There is precedence for an optional metadata section that can contain any
end-user data in other Openstack projects and it is necessary in order to
iterate quickly and provide value to Heat.

There are many use cases that can be discussed here, but I wanted to
reiterate an initial discussion point that, by definition,
stack/template_metadata does not have any hard requirements in terms of
schema or what does or does not belong in it.

One of the initial use cases is to allow template authors to categorize
the template as a specific type.

template_metadata:
    short_description: Wordpress


This would let the client of the Heat API group the templates by type
which would create a better user experience when selecting or managing
templates. The the end-user could select Wordpress and drill down
further to select templates with different options, single node, 2 web
nodes, etc...
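
For illustration, the client-side grouping could be as simple as the
following sketch (the metadata key and template structure are hypothetical,
pending whatever schema eventually gets agreed):

from collections import defaultdict

def group_by_short_description(templates):
    groups = defaultdict(list)
    for tmpl in templates:
        meta = tmpl.get("template_metadata", {})
        groups[meta.get("short_description", "Uncategorized")].append(tmpl)
    return groups

# e.g. group_by_short_description(catalog)["Wordpress"] would hold the
# single node, 2 web node, etc. variants the user can drill into.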

Once a feature has consistently proven that it adds value to Heat or
Horizon, then I would suggest that we can discuss the schema for that
feature and codify it then.

In order to keep the discussion simple, I am only responding to the need
for stack/template metadata at the moment but I'm sure discussions on the
management api and template catalog will follow.

Thanks,
Tim
@tims on irc






On 11/25/13 12:39 PM, Steven Hardy sha...@redhat.com wrote:

All,

So, lately we've been seeing more patches posted proposing added
functionality to Heat (the API and template syntax) related to
development of UI functionality.

This makes me both happy (because folks want to use Heat!) and sad
(because
it's evident there are several proprietary UI's being developed, rather
than collaboration focussed on making Horizon Heat functionality great)

One of the most contentious ones currently is that proposing adding
metadata to the HOT template specification, designed to contain data which
Heat does not use - my understanding is the primary use-case for this is
some UI for managing applications via Heat:

https://review.openstack.org/#/c/56450/

I'd like to attempt to break down some of the communication barriers which
are increasingly apparent, and refocus our attention on what the actual
requirements are, and how they relate to improving Heat support in
Horizon.

The list of things being proposed I can think of are (I'm sure there are
more):
- Stack/template metadata
- Template repository/versioning
- Management-API functionality (for a Heat service administrator)
- Exposing build information

I think the template repository and template metadata items are closely
related - if we can figure out how we expect users to interact with
multiple versions of templates, access public template repositories,
attach
metadata/notes to their template etc in Horizon, then I think much of the
requirement driving those patches can be satisfied (without necessarily
implementing that functionality or storing that data in Heat).

For those who've already posted patches and got negative feedback, here's
my plea - please, please, start communicating the requirements and
use-cases, then we can discuss the solution together.  Just posting a
solution with no prior discussion and a minimal blueprint is a really slow
way to get your required functionality into Heat, and it's frustrating for
everyone :(

So, let's have a Heat-UI-requirements amnesty: what UI related functionality
do you want in Heat, and why (requirements, use-case/user-story)?

Hopefully if we can get these requirements out in the open, it will help
formulate a roadmap for future improvement of Heat support in Horizon.

Obviously non-Horizon UI users of Heat also directly benefit from this,
but
IMO we must focus the discussion primarily on what makes sense for
Horizon,
not what makes sense for $veiled_reference_to_internal_project, as the
wider community don't 

Re: [openstack-dev] [nova] Should the server create API return 404 errors?

2013-11-25 Thread Christopher Yeoh
On Tue, Nov 26, 2013 at 7:27 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:

 Aaron Rosen is working a patch [1] to handle a NetworkNotFound exception
 in the server create API.  For the V2 API this will return a 400 error.
  For the V3 API this will return a 404 because of a V3-specific patch [2].
  The API docs list 404 as a valid response code, but is it intuitive for a
 POST request like this?


Yea I think in this case we've just got this wrong for the V3 API here.
It's validating a parameter, and although the client (glance/neutron etc)
may return a 404 to us, we should be returning a 400 (with a decent
message) to our client.
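i.e. roughly something along these lines (an untested sketch with a made-up
helper name, not the actual plugin code):

    import webob.exc

    from nova import exception

    def _validate_requested_network(network_api, context, network_id):
        # A NetworkNotFound from the network API means the *parameter*
        # was bad, so translate it into a 400 instead of letting a 404
        # leak out of the create call.
        try:
            return network_api.get(context, network_id)
        except exception.NetworkNotFound as e:
            raise webob.exc.HTTPBadRequest(explanation=e.format_message())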


 To muddy the waters more, ImageNotFound, FlavorNotFound and
 KeypairNotFound are translated to 400 errors in both the V2 and V3 APIs.

 So why should the network-specific NotFound exceptions be a 404 but the
 others aren't?

 From a programmatic perspective, I should validate that my request
 parameters are valid before calling the API in order to avoid a 404. From a
 user's perspective, a 404 seems strange - does it mean that the server I'm
 trying to create isn't found?  No, that's counter-intuitive.

 Ultimately I think we should be consistent, so if 404 is OK, then I think
 the V3 API should make ImageNotFound, FlavorNotFound and KeypairNotFound
 return a 404 also.


At the very least we should be consistent across our API.

Chris.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should the server create API return 404 errors?

2013-11-25 Thread Aaron Rosen
On Mon, Nov 25, 2013 at 6:00 PM, Christopher Yeoh cbky...@gmail.com wrote:

 On Tue, Nov 26, 2013 at 7:27 AM, Matt Riedemann 
 mrie...@linux.vnet.ibm.com wrote:

 Aaron Rosen is working a patch [1] to handle a NetworkNotFound exception
 in the server create API.  For the V2 API this will return a 400 error.
  For the V3 API this will return a 404 because of a V3-specific patch [2].
  The API docs list 404 as a valid response code, but is it intuitive for a
 POST request like this?


 Yea I think in this case we've just got this wrong for the V3 API here.
 It's validating a parameter, and although the client (glance/neutron etc)
 may return a 404 to us, we should be returning a 400 (with a decent
 message) to our client.


I agree, I think 400 makes more sense than 404.  I'll go ahead and make
this change for the v3 APIs as well.



 To muddy the waters more, ImageNotFound, FlavorNotFound and
 KeypairNotFound are translated to 400 errors in both the V2 and V3 APIs.

 So why should the network-specific NotFound exceptions be a 404 but the
 others aren't?

 From a programmatic perspective, I should validate that my request
 parameters are valid before calling the API in order to avoid a 404. From a
 user's perspective, a 404 seems strange - does it mean that the server I'm
 trying to create isn't found?  No, that's counter-intuitive.

 Ultimately I think we should be consistent, so if 404 is OK, then I think
 the V3 API should make ImageNotFound, FlavorNotFound and KeypairNotFound
 return a 404 also.


 At the very least we should be consistent across our API.

Chris.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] summit wrap-up: Heat integration

2013-11-25 Thread Steve Baker
On 11/26/2013 06:21 AM, Sergey Lukjanov wrote:
 Hi guys,

 There was the Design Summit session in Hong Kong about Heat integration and 
 Savanna scalability [0]. We discussed some details about it, approved 
 integration plan and decided to use guest agents.

 First of all, for the Icehouse release cycle, we'll implement resource 
 orchestration using Heat by creating a YAML template generator; see the 
 blueprints [1][2] and PoC [3]. It'll be done by implementing an extension 
 mechanism for provisioning, without removing the current orchestration 
 solution, so the current code can be transparently replaced with the new 
 Heat-based approach. As the first step, all resources (VMs, volumes, IPs) 
 will be provisioned by Heat using a template generated by Savanna. Hadoop 
 configuration will be done by Savanna, and especially by the corresponding 
 plugins.
 The second step in improving the provisioning code will be to implement a 
 guest agent for Savanna (we're looking at the unified agent [4][5] 
 implementation due to the growing number of projects interested in it). 
 Guest agents will allow Savanna plugins to configure software by interacting 
 with vendor-specific management console APIs. The main goal of implementing 
 agents in Savanna is to get rid of direct ssh and http access to VMs.

 Early in the J release cycle we're planning to enable Heat by default and 
 then completely remove our current direct provisioning. We'll contribute a 
 Savanna resource to Heat; it'll be something like "Data Processing Cluster" 
 or just "Hadoop Cluster" at the beginning. I'll start a discussion on it 
 separately.

 There are some problems that we’ll try to solve to support all current 
 Savanna features:

 * anti-affinity support (currently implemented using the scheduler hint 'not on 
 the specific hosts', and stack provisioning is simultaneous in this case); 
 there are two possible solutions - use Nova's Group API (when it's ready) 
 or add support for it in Heat; 
OS::Nova::Server has the scheduler_hints property, so you could always
continue with the current approach in the interim.
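e.g. the generator could emit something roughly like this (an untested
sketch; 'different_host' is just one way to express 'not on these hosts',
so adjust to whatever hint Savanna actually uses today):

    def data_node_resource(name, flavor, image, other_server_ids):
        # One OS::Nova::Server resource definition as a plain dict, ready
        # to be dumped to YAML by the template generator.
        return {
            name: {
                'type': 'OS::Nova::Server',
                'properties': {
                    'flavor': flavor,
                    'image': image,
                    'scheduler_hints': {'different_host': other_server_ids},
                },
            },
        }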
 * partially active stacks and/or optional and mandatory resources; the easiest 
 way to explain this is with an example - we provision 100 nodes with the 
 same role (data nodes of a Hadoop cluster) and only one is down, so we can 
 say that the cluster is partially active and then rebuild the failed nodes.
Some combination of our new autoscaling and stack convergence should
help here.
 To summarize, we’re currently finishing the PoC version of Heat-based 
 provisioning and we’ll merge it into the codebase soon.

 [0] https://etherpad.openstack.org/p/savanna-icehouse-architecture
 [1] 
 https://blueprints.launchpad.net/savanna/+spec/heat-backed-resources-provisioning
 [2] 
 https://blueprints.launchpad.net/savanna/+spec/infra-provisioning-extensions
 [3] https://review.openstack.org/#/c/55978
 [4] 
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/018276.html
 [5] https://etherpad.openstack.org/p/UnifiedAgents


Nice, I've just added some comments to
https://review.openstack.org/#/c/55978/ . Feel free to add me as a
reviewer to any others.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should the server create API return 404 errors?

2013-11-25 Thread Christopher Yeoh
On Tue, Nov 26, 2013 at 9:41 AM, Aaron Rosen aaronoro...@gmail.com wrote:



 On Mon, Nov 25, 2013 at 6:00 PM, Christopher Yeoh cbky...@gmail.com wrote:

 On Tue, Nov 26, 2013 at 7:27 AM, Matt Riedemann 
 mrie...@linux.vnet.ibm.com wrote:

 Aaron Rosen is working a patch [1] to handle a NetworkNotFound exception
 in the server create API.  For the V2 API this will return a 400 error.
  For the V3 API this will return a 404 because of a V3-specific patch [2].
  The API docs list 404 as a valid response code, but is it intuitive for a
 POST request like this?


 Yea I think in this case we've just got this wrong for the V3 API here.
 It's validating a parameter, and although the client (glance/neutron etc)
 may return a 404 to us, we should be returning a 400 (with a decent
 message) to our client.


 I agree, I think 400 makes more sense than 404.  I'll go ahead and make
 this change for the v3 APIs as well.



Thanks for doing this!

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-25 Thread Clint Byrum
Excerpts from Tim Schnell's message of 2013-11-25 14:51:39 -0800:
 Hi Steve,
 
 As one of the UI developers driving the requirements behind these new
 blueprints I wanted to take a moment to assure you and the rest of the
 Openstack community that the primary purpose of pushing these requirements
 out to the community is to help improve the User Experience for Heat for
 everyone. Every major UI feature that I have implemented for Heat has been
 included in Horizon, see the Heat Topology, and these requirements should
 improve the value of Heat, regardless of the UI.
 
 
 Stack/template metadata
 We have a fundamental need to have the ability to reference some
 additional metadata about a template that Heat does not care about. There
 are many possible use cases for this need but the primary point is that we
 need a place in the template where we can iterate on the schema of the
 metadata without going through a lengthy design review. As far as I know,
 we are the only team attempting to actually productize Heat at the moment
 and this means that we are encountering requirements and requests that do
 not affect Heat directly but simply require Heat to allow a little wiggle
 room to flesh out a great user experience.
 

Wiggle room is indeed provided. But reviewers need to understand your
motivations, which is usually what blueprints are used for. If you're
getting pushback, it is likely because your blueprints do not make the
use cases and long term vision obvious.

 There is precedence for an optional metadata section that can contain any
 end-user data in other Openstack projects and it is necessary in order to
 iterate quickly and provide value to Heat.
 

Nobody has said you can't have meta-data on stacks, which is what other
projects use.

 There are many use cases that can be discussed here, but I wanted to
 reiterate an initial discussion point that, by definition,
 stack/template_metadata does not have any hard requirements in terms of
 schema or what does or does not belong in it.
 
 One of the initial use cases is to allow template authors to categorize
 the template as a specific type.
 
 template_metadata:
 short_description: Wordpress
 
 

Interesting. Would you support adding a category keyword to python so
we don't have to put it in setup.cfg and so that the egg format doesn't
need that section? Pypi can just parse the python to categorize the apps
when they're uploaded. We could also have a file on disk for qcow2 images
that we upload to glance that will define the meta-data.

To be more direct, I don't think the templates themselves are where this
meta-data belongs. A template is self-aware by definition, it doesn't
need the global metadata section to tell it that it is WordPress. For
anything else that needs to be globally referenced there are parameters.
Having less defined inside the template means that you get _more_ wiggle
room for your template repository.

I 100% support having a template catalog. IMO it should be glance,
which is our catalog service in OpenStack. Who cares if nova or heat are
consuming images or templates. It is just sharable blobs of data and
meta-data in a highly scalable service. It already has the concept of
global and tenant-scope. It just needs an image type of 'hot' and then
heat can start consuming templates from glance. And the template authors
should maintain some packaging meta-data in glance to communicate to
users that this is Wordpress and Single-Node. If Glance's meta-data
is too limiting, expand it! I'm sure image authors and consumers would
appreciate that.
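To make that concrete, a rough (untested) sketch with today's v1 client -
the endpoint, formats and property names here are placeholders purely for
illustration, since glance obviously has no real notion of a 'hot' type yet:

    from glanceclient import Client

    # Placeholder endpoint and token, made-up property names.
    glance = Client('1', 'http://127.0.0.1:9292', token='...')

    with open('wordpress-single-node.yaml', 'rb') as f:
        glance.images.create(
            name='wordpress-single-node',
            disk_format='raw',
            container_format='bare',
            is_public=True,
            properties={'type': 'hot', 'application': 'Wordpress'},
            data=f)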

 This would let the client of the Heat API group the templates by type
 which would create a better user experience when selecting or managing
 templates. The end-user could select Wordpress and drill down
 further to select templates with different options, single node, 2 web
 nodes, etc...
 

That is all api stuff, not language stuff.

 Once a feature has consistently proven that it adds value to Heat or
 Horizon, then I would suggest that we can discuss the schema for that
 feature and codify it then.
 
 In order to keep the discussion simple, I am only responding to the need
 for stack/template metadata at the moment but I'm sure discussions on the
 management api and template catalog will follow.
 

Your example puts the template catalog in front of this feature, and I
think that exposes this feature as misguided.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] New meeting time

2013-11-25 Thread Kurt Griffiths
OK, I've changed the time. Starting next Monday (2 Dec.) we will be
meeting at 1500 UTC in #openstack-meeting-alt.

See also: https://wiki.openstack.org/wiki/Meetings/Marconi

On 11/25/13, 11:33 AM, Flavio Percoco fla...@redhat.com wrote:

On 25/11/13 17:05 +, Amit Gandhi wrote:
Works for me.

Works for me!

-- 
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Icehouse mid-cycle meetup

2013-11-25 Thread Mike Wilson
Hotel information has been posted. Look forward to seeing you all in
February :-).

-Mike


On Mon, Nov 25, 2013 at 8:14 AM, Russell Bryant rbry...@redhat.com wrote:

 Greetings,

 Other groups have started doing mid-cycle meetups with success.  I've
 received significant interest in having one for Nova.  I'm now excited
 to announce some details.

 We will be holding a mid-cycle meetup for the compute program from
 February 10-12, 2014, in Orem, UT.  Huge thanks to Bluehost for hosting us!

 Details are being posted to the event wiki page [1].  If you plan to
 attend, please register.  Hotel recommendations with booking links will
 be posted soon.

 Please let me know if you have any questions.

 Thanks,

 [1] https://wiki.openstack.org/wiki/Nova/IcehouseCycleMeetup
 --
 Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hypervisor CI requirement and deprecation plan

2013-11-25 Thread Matt Riedemann



On Monday, November 25, 2013 4:37:29 PM, Russell Bryant wrote:

On 11/25/2013 05:19 PM, Matt Riedemann wrote:

I'll play devil's advocate here and ask this question before someone
else does.  I'm assuming that the requirement of a 'full' tempest run
means running this [1].  Is that correct?  It's just confusing sometimes
because there are other things in Tempest that aren't in the 'full' run,
like stress tests.

Assuming that's what 'full' means, it's running API, CLI, third party
(boto), and scenario tests.  Does it make sense to require a nova virt
driver's CI to run API tests for keystone, heat and swift?  Or couldn't
the nova virt driver CI be scoped down to just the compute API tests?
The argument against that is probably that the network/image/volume
tests may create instances using nova to do their API testing also.  The
same would apply for the CLI tests since those are broken down by
service, i.e. why would I need to run keystone and ceilometer CLI tests
for a nova virt driver?

If nothing else, I think we could firm up the wording on the wiki a bit
around the requirements and what that means for scope.

[1] https://github.com/openstack/tempest/blob/master/tox.ini#L33



I think the short answer is, whatever we're running against all Nova
changes in the gate.


Maybe a silly question, but is what is run against the check queue any 
different from the gate queue?




I expect that for some drivers, a more specific configuration is going
to be needed to exclude tests for features not implemented in that
driver.  That's fine.

Soon we also need to start solidifying criteria for what features *must*
be implemented in a driver.  I think we've let some drivers in with far
too many features not supported.  That's a separate issue from the CI
requirement, though.



--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova SSL Apache2 Question

2013-11-25 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Hello Jesse,

I tried turning on SSL for quantum and I am running into a problem. I have a 
compute node with nova running on it and everything else running on a 
controller node. When I change quantum to use its WSGI interface, I am getting 
errors in the quantum-server.log file:


[Wed Nov 27 14:08:56 2013] [debug] ssl_engine_kernel.c(1879): OpenSSL: Read: SSLv3 read client certificate A

[Wed Nov 27 14:08:56 2013] [debug] ssl_engine_kernel.c(1898): OpenSSL: Exit: failed in SSLv3 read client certificate A

[Wed Nov 27 14:08:56 2013] [info] [client 192.168.124.81] SSL library error 1 in handshake (server 192.168.124.81:443)

[Wed Nov 27 14:08:56 2013] [info] SSL Library Error: 336151576 error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca

What catches my eye is the number 443. I have no idea where that is getting set. 
Nova is configured on the compute node to respond to port 8774.

I am also getting an error in the nova/osapi.log file:

[Wed Nov 27 16:50:35 2013] [info] Initial (No.1) HTTPS request received for 
child 3 (server d00-50-56-8e-79-e7.cloudos.org:8774)
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 ERROR 
nova.api.openstack [req-5183001e-8ca2-4f52-9c56-47ced4cf0570 
45c1e6999c0145348d889c5184e4cae5 bf916cad55494d548b4a3a5de78b87a6] Caught 
error: [Errno 1] _ssl.c:504: error:14090086:SSL 
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack Traceback (most recent call last):
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py, line 81, in 
__call__
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack return req.get_response(self.application)
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack   File /usr/lib/python2.7/dist-packages/webob/request.py, 
line 1296, in send
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack application, catch_exc_info=False)
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack   File /usr/lib/python2.7/dist-packages/webob/request.py, 
line 1260, in call_application
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack app_iter = application(self.environ, start_response)
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack   File /usr/lib/python2.7/dist-packages/webob/dec.py, line 
144, in __call__
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack return resp(environ, start_response)
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py, 
line 450, in __call__
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack return self.app(env, start_response)
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack   File /usr/lib/python2.7/dist-packages/webob/dec.py, line 
144, in __call__
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack return resp(environ, start_response)
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack   File /usr/lib/python2.7/dist-packages/webob/dec.py, line 
144, in __call__
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack return resp(environ, start_response)
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack response = self.app(environ, start_response)
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack   File /usr/lib/python2.7/dist-packages/webob/dec.py, line 
144, in __call__
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack return resp(environ, start_response)
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack   File /usr/lib/python2.7/dist-packages/webob/dec.py, line 
130, in __call__
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack resp = self.call_func(req, *args, **self.kwargs)
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack   File /usr/lib/python2.7/dist-packages/webob/dec.py, line 
195, in call_func
[Wed Nov 27 16:50:35 2013] [error] 2013-11-27 16:50:35.617 31236 TRACE 
nova.api.openstack return self.func(req, *args, **kwargs)
[Wed Nov 27 16:50:35 2013] [error] 

Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-25 Thread Fox, Kevin M
I agree that maybe an external file might be better suited to extra metadata. 
I've found it rare that you ever use just one template per stack. Usually it is 
a set of nested templates. This would allow for advanced UI features like an 
icon for the stack.

On the other hand, there is the template's top-level Description field, which 
I'm not sure is used by Heat other than to tell the user (or UI) something 
useful. So, putting some metadata about the template in the template itself does 
not seem unprecedented.

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Monday, November 25, 2013 3:46 PM
To: openstack-dev
Subject: Re: [openstack-dev] [heat][horizon]Heat UI related requirements   
roadmap

Excerpts from Tim Schnell's message of 2013-11-25 14:51:39 -0800:
 Hi Steve,

 As one of the UI developers driving the requirements behind these new
 blueprints I wanted to take a moment to assure you and the rest of the
 Openstack community that the primary purpose of pushing these requirements
 out to the community is to help improve the User Experience for Heat for
 everyone. Every major UI feature that I have implemented for Heat has been
 included in Horizon, see the Heat Topology, and these requirements should
 improve the value of Heat, regardless of the UI.


 Stack/template metadata
 We have a fundamental need to have the ability to reference some
 additional metadata about a template that Heat does not care about. There
 are many possible use cases for this need but the primary point is that we
 need a place in the template where we can iterate on the schema of the
 metadata without going through a lengthy design review. As far as I know,
 we are the only team attempting to actually productize Heat at the moment
 and this means that we are encountering requirements and requests that do
 not affect Heat directly but simply require Heat to allow a little wiggle
 room to flesh out a great user experience.


Wiggle room is indeed provided. But reviewers need to understand your
motivations, which is usually what blueprints are used for. If you're
getting pushback, it is likely because your blueprints do not make the
use cases and long term vision obvious.

 There is precedence for an optional metadata section that can contain any
 end-user data in other Openstack projects and it is necessary in order to
 iterate quickly and provide value to Heat.


Nobody has said you can't have meta-data on stacks, which is what other
projects use.

 There are many use cases that can be discussed here, but I wanted to
 reiterate an initial discussion point that, by definition,
 stack/template_metadata does not have any hard requirements in terms of
 schema or what does or does not belong in it.

 One of the initial use cases is to allow template authors to categorize
 the template as a specific type.

 template_metadata:
 short_description: Wordpress



Interesting. Would you support adding a category keyword to python so
we don't have to put it in setup.cfg and so that the egg format doesn't
need that section? Pypi can just parse the python to categorize the apps
when they're uploaded. We could also have a file on disk for qcow2 images
that we upload to glance that will define the meta-data.

To be more direct, I don't think the templates themselves are where this
meta-data belongs. A template is self-aware by definition, it doesn't
need the global metadata section to tell it that it is WordPress. For
anything else that needs to be globally referenced there are parameters.
Having less defined inside the template means that you get _more_ wiggle
room for your template repository.

I 100% support having a template catalog. IMO it should be glance,
which is our catalog service in OpenStack. Who cares if nova or heat are
consuming images or templates. It is just sharable blobs of data and
meta-data in a highly scalable service. It already has the concept of
global and tenant-scope. It just needs an image type of 'hot' and then
heat can start consuming templates from glance. And the template authors
should maintain some packaging meta-data in glance to communicate to
users that this is Wordpress and Single-Node. If Glance's meta-data
is too limiting, expand it! I'm sure image authors and consumers would
appreciate that.

 This would let the client of the Heat API group the templates by type
 which would create a better user experience when selecting or managing
 templates. The end-user could select Wordpress and drill down
 further to select templates with different options, single node, 2 web
 nodes, etc...


That is all api stuff, not language stuff.

 Once a feature has consistently proven that it adds value to Heat or
 Horizon, then I would suggest that we can discuss the schema for that
 feature and codify it then.

 In order to keep the discussion simple, I am only responding to the need
 for stack/template metadata at the moment but I'm sure discussions 

[openstack-dev] [Nova] PCI next step blue print

2013-11-25 Thread yongli he

Hi, John

You mentioned the summit discussion and the PCI next-step work in this blueprint:
https://blueprints.launchpad.net/nova/+spec/pci-api-support

This bp provides a basic API for what we have already done: 
https://wiki.openstack.org/wiki/Pci-api-support


We also proposed another bp for the PCI next step, including the whitelist API 
we discussed at the summit:

https://blueprints.launchpad.net/nova/+spec/pci-extra-info

So we think the first bp can be treated separately - what do you think?


And we have set up some docs for the use cases and design:
PCI next step design:
https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support


Use case discussion:
https://docs.google.com/document/d/1EMwDg9J8zOxzvTnQJ9HwZdiotaVstFWKIuKrPse6JOs/edit#heading=h.30de7p6sgoxp



Yongli He (Pauli He) @intel.com





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all project] Treating recently seen recheck bugs as critical across the board

2013-11-25 Thread Robert Collins
This has been mentioned in other threads, but I thought I'd call it
out and make it an explicit topic.

We have over 100 recheck bugs open on
http://status.openstack.org/rechecks/ - there is quite a bit of
variation in how frequently they are seen :(. In a way that's good, but
stuff that has been open for months and not been seen is likely noise (in
/rechecks). The rest - the ones we see happening - are noise in the
gate.

The lower we can drive the spurious failure rate, the less repetitive
analysing a failure will be, and the more obvious new ones will be -
it forms a virtuous circle.

However, many of these bugs - a random check of the first 5 listed
found /none/ that had been triaged - are not prioritised for fixing.

So my proposal is that we make it part of the base hygiene for a
project that any recheck bugs being seen (either by elastic-recheck or
manual inspection) be considered critical and prioritised above
feature work.

Thoughts?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-25 Thread Keith Bray


On 11/25/13 5:46 PM, Clint Byrum cl...@fewbar.com wrote:

Excerpts from Tim Schnell's message of 2013-11-25 14:51:39 -0800:
 Hi Steve,
 
 As one of the UI developers driving the requirements behind these new
 blueprints I wanted to take a moment to assure you and the rest of the
 Openstack community that the primary purpose of pushing these
requirements
 out to the community is to help improve the User Experience for Heat for
 everyone. Every major UI feature that I have implemented for Heat has
been
 included in Horizon, see the Heat Topology, and these requirements
should
 improve the value of Heat, regardless of the UI.
 
 
 Stack/template metadata
 We have a fundamental need to have the ability to reference some
 additional metadata about a template that Heat does not care about.
There
 are many possible use cases for this need but the primary point is that
we
 need a place in the template where we can iterate on the schema of the
 metadata without going through a lengthy design review. As far as I
know,
 we are the only team attempting to actually productize Heat at the
moment
 and this means that we are encountering requirements and requests that
do
 not affect Heat directly but simply require Heat to allow a little
wiggle
 room to flesh out a great user experience.
 

Wiggle room is indeed provided. But reviewers need to understand your
motivations, which is usually what blueprints are used for. If you're
getting pushback, it is likely because your blueprints do not make the
use cases and long term vision obvious.

Clint, can you be more specific on what is not clear about the use case?
What I am seeing is that the use case of meta data is not what is being
contested, but that the Blueprint of where meta data should go is being
contested by only a few (but not all) of the core devs.  The Blueprint for
in-template metadata was already approved for Icehouse, but now that work
has been delivered on the implementation of that blueprint, the blueprint
itself is being contested:
   https://blueprints.launchpad.net/heat/+spec/namespace-stack-metadata
I'd like to propose that the blueprint that has been accepted go forth
with the code that exactly implements it, and if there are alternative
proposals and appropriate reasons for the community to come to consensus
on a different approach, that we then iterate and move the data (deprecate
the older feature if necessary, e.g. if that decision comes after
Icehouse; or if a different/better implementation comes before Icehouse,
then no harm done).



 There is precedence for an optional metadata section that can contain
any
 end-user data in other Openstack projects and it is necessary in order
to
 iterate quickly and provide value to Heat.
 

Nobody has said you can't have meta-data on stacks, which is what other
projects use.

 There are many use cases that can be discussed here, but I wanted to
 reiterate an initial discussion point that, by definition,
 stack/template_metadata does not have any hard requirements in terms
of
 schema or what does or does not belong in it.
 
 One of the initial use cases is to allow template authors to categorize
 the template as a specific type.
 
 template_metadata:
 short_description: Wordpress
 
 

Interesting. Would you support adding a category keyword to python so
we don't have to put it in setup.cfg and so that the egg format doesn't
need that section? Pypi can just parse the python to categorize the apps
when they're uploaded. We could also have a file on disk for qcow2 images
that we upload to glance that will define the meta-data.

To be more direct, I don't think the templates themselves are where this
meta-data belongs. A template is self-aware by definition, it doesn't
need the global metadata section to tell it that it is WordPress. For
anything else that needs to be globally referenced there are parameters.
Having less defined inside the template means that you get _more_ wiggle
room for your template repository.

Clint, you are correct that the Template does not need to know what it is.
 It's every other service (and users of those services) that a Template
passes through or to that would care to know what it is. We are suggesting
we put that meta data in the template file and expressly ignore it for
purposes of parsing the template language in the Heat engine, so we agree
it is not a necessary part of the template.  Sure, we could encode the
metadata info in a separate catalog...  but, take the template out of the
catalog and now all that useful associated data is lost or would need to
be recreated by someone or some service.  That does not make the template
portable, and that is a key aspect of what we are trying to achieve (all
user-facing clients, like Horizon, or humans reading the file, can take
advantage). We don't entirely know yet what is most useful in portability
and what isn't, so meta data in-template provides the wiggle room and
innovation space to suss that out.  We already know of 
