Re: [openstack-dev] [gantt] Scheduler sub-group meeting agenda 11/11

2014-11-13 Thread Sylvain Bauza


On 13/11/2014 06:09, Dugger, Donald D wrote:


OK, as promised I created a Wiki page to keep track of our work 
items.  It’s linked to from the Gantt meeting page and is also 
available here:


https://wiki.openstack.org/wiki/Gantt/kilo

The important column in the Tasks table is the Patches column, which 
shows the specific patch sets that we are trying to push.  I want to 
concentrate on the specific patches rather than the BPs, since some of 
the BPs (e.g. Add resource object models) can cover a dozen specific 
patch sets of which only 1 or 2 are specific to the scheduler.


Based upon the table we're in pretty good shape: most of the line 
items have reviewable patches that we can look at right now.  The only 
holes are the first two rows, the object-related tasks, for which we 
only have BPs.  Also, row 3, Detach service from compute node, is 
currently a spec that needs to be approved.




I just made some updates to the table to reflect the right progress. For 
info, Reviewing means that the implementation patches are up, while 
Spec only means a spec is either merged or under review.


-Sylvain


--

Don Dugger

Censeo Toto nos in Kansa esse decisse. - D. Gale

Ph: 303/443-3786

*From:* Murray, Paul (HP Cloud) [mailto:pmur...@hp.com]
*Sent:* Tuesday, November 11, 2014 8:29 AM
*To:* openstack-dev@lists.openstack.org
*Subject:* Re: [openstack-dev] [gantt] Scheduler sub-group meeting 
agenda 11/11


The resource tracker objects BP was grouped under the scheduler work 
items as well:


http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/make-resource-tracker-use-objects.html

-

1) Summit recap

2) Status of BPs:

a. Isolate scheduler DB aggregates - https://review.openstack.org/#/c/89893/

b. Isolate scheduler DB for instance groups - https://review.openstack.org/#/c/131553/

c. Detach service from compute node - https://review.openstack.org/#/c/126895/

d. Model resource objects - https://review.openstack.org/#/c/127609/

e. Model request spec object - https://review.openstack.org/#/c/127610/

f. Change select_destination() to use RequestSpec object - https://review.openstack.org/#/c/127612/

g. Convert virt/driver.py get_available_resources - https://blueprints.launchpad.net/nova/+spec/virt-driver-get-available-resources-object


--

Don Dugger

Censeo Toto nos in Kansa esse decisse. - D. Gale

Ph: 303/443-3786



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Conditionals, was: New function: first_nonnull

2014-11-13 Thread Joshua Harlow
Will be very interesting to see how this plays out.
FYI, yql (a thing yahoo uses a lot of, internally and externally) has this kind 
of functionality built-in.
https://developer.yahoo.com/yql/guide/yql-execute-chapter.html
I am unaware of any python version of a javascript engine that has 
rate/execution limiting and such[2], so I'm not sure how feasible this is 
without switching languages (to java using rhino[3] or one of the Firefox 
Monkey engines or Chrome's engine...).

It would though be neat to expose javascript functions to heat so that people 
could extend heat where they felt they needed to and 'glue' together their 
various javascript functions into a larger orchestration template that heat 
manages (and/or use native heat functionality that exists). That would seem 
like a good mix of functionality for people to have (and enables more advanced 
users); although it does of course introduce a bunch of complexity (the first 
being where does this javascript code get run, where is it stored...).

[2] https://developer.yahoo.com/yql/guide/yql-execute-intro-ratelimits.html
[3] http://en.wikipedia.org/wiki/Rhino_%28JavaScript_engine%29

-Josh

 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org 
 Sent: Wednesday, November 12, 2014 10:00 AM
 Subject: Re: [openstack-dev] [Heat] Conditionals, was: New function: 
first_nonnull
   
Excerpts from Zane Bitter's message of 2014-11-12 08:42:44 -0800:
 On 12/11/14 10:10, Clint Byrum wrote:
  Excerpts from Zane Bitter's message of 2014-11-11 13:06:17 -0800:
  On 11/11/14 13:34, Ryan Brown wrote:
 I am strongly against allowing arbitrary Javascript functions for
 complexity reasons. It's already difficult enough to get meaningful
 errors when you mess up your YAML syntax.
 
  Agreed, and FWIW literally everyone that Clint has pitched the JS idea
  to thought it was crazy ;)
 
 
  So far nobody has stepped up to defend me,
 
 I'll defend you, but I can't defend the idea :)
 
  so I'll accept that maybe
  people do think it is crazy. What I'm really confused by is why we have
  a new weird ugly language like YAQL (sorry, it, like JQ, is hideous),
 
 Agreed, and appealing to its similarity with Perl or PHP (or BASIC!) is 
 probably not the way to win over Python developers :D
 
  and that would somehow be less crazy than a well known mature language
  that has always been meant for embedding such as javascript.
 
 JS is a Turing-complete language; it's an entirely different kettle of 
 fish from a domain-specific language that is inherently safe to interpret 
 from user input. Sure, we can try to lock it down. It's a very tricky 
 job to get right. (Plus it requires a new external dependency of unknown 
 quality... honestly if you're going to embed a Turing-complete language, 
 Python is a much more obvious choice than JS.)
 

There's a key difference though. Python was never designed to be run
from untrusted sources. Javascript was _from the beginning_. There are
at least two independent javascript implementations which both have been
designed from the ground up to run code from websites in the local
interpreter. From the standpoint of Heat, it would be even easier to do
this.

Perhaps I can carve out some of that negative-1000-days of free time I
have and I can make it a resource plugin, with the properties being code
and references to other resources, and the attributes being the return.
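
To sketch what I mean (everything here is hypothetical -- the class name,
the schema entries, and especially sandboxed_eval(), which is the hard
sandbox part; Heat's plugin interface is real but details may differ):

  # Hypothetical sketch only: 'Evaluated' and sandboxed_eval() do not exist;
  # building a safe, rate-limited sandbox is exactly what this thread debates.
  from heat.engine import attributes, properties, resource

  def sandboxed_eval(code, inputs):
      # Placeholder for a sandboxed interpreter call.
      raise NotImplementedError('the sandbox is the hard part')

  class Evaluated(resource.Resource):
      properties_schema = {
          'code': properties.Schema(properties.Schema.STRING, required=True),
          'inputs': properties.Schema(properties.Schema.MAP, default={}),
      }
      attributes_schema = {
          'value': attributes.Schema('Return value of the evaluated code.'),
      }

      def handle_create(self):
          # Properties carry the code plus references to other resources;
          # the attribute exposes the return value to the rest of the stack.
          result = sandboxed_eval(self.properties['code'],
                                  self.properties['inputs'])
          self.data_set('value', result)

      def _resolve_attribute(self, name):
          if name == 'value':
              return self.data().get('value')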

  Anyway, I'd prefer YAQL over trying to get the intrinsic functions in
  HOT just right. Users will want to do things we don't expect. I say, let
  them, or large sections of the users will simply move on to something
  else.
 
 The other side of that argument is that users are doing one of two 
 things with data they have obtained from resources in the template:
 
 1) Passing data to software deployments
 2) Passing data to other resources
 
 In case (1) they can easily transform the data into whatever format they 
 want using their own scripts, running on their own server.
 
 In case (2), if it's not easy for them to just do what they want without 
 having to perform this kind of manipulation, we have failed to design 
 good resources. And if we give people the tools to just paper over the 
 problem, we'll never hear about it and so can never correct it at the 
 source; we'll just launch a thousand hard-to-maintain hacks into the world.
 

I for one would rather serve the users than ourselves, and preventing
them from papering over the problems so they have to whine at us is a
self-serving agenda.

As a primary whiner about Heat for a long time, I respect a lot that
this development team _bends over backwards_ to respond to user
requests. It's amazing that way.

However, I think to grow beyond open-source-savvy, deeply integrated
users like me, one has to let the users solve their own problems. They'll
know that their javascript or YAQL is debt sometimes, and they can
come to Heat's development community with suggestions like If you had
a coalesce 

[openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Murugan, Visnusaran
Hi all,

Convergence-POC distributes stack operations by sending resource actions over 
RPC for any heat-engine to execute. The entire stack lifecycle will be controlled 
by worker/observer notifications. This distributed model has its own advantages 
and disadvantages.

Any stack operation has a timeout, and a single engine will be responsible for 
it. If that engine goes down, the timeout is lost along with it. The traditional 
fix is for other engines to recreate the timeout from scratch. Also, a missed 
resource action notification will be detected only when the stack operation 
timeout fires.

To overcome this, we will need the following capabilities:

1.   Resource timeout (can be used for retry)

2.   Recover from engine failure (loss of stack timeout, resource action 
notification)


Suggestion:

1.   Use a task queue like celery to host timeouts for both stack and 
resource.

2.   Poll the database for engine failures and restart timers / retrigger 
resource retry (IMHO this is the traditional approach and is heavyweight).

3.   Migrate heat to use TaskFlow (too much code change).

I am not suggesting we use TaskFlow. Using celery will require very minimal 
code change (decorate the appropriate functions).
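
To show what I mean by minimal, a sketch of suggestion 1 (the broker URL
and the task body are assumptions, not working Heat code):

  # Minimal sketch: a celery task hosting a stack timeout. The broker URL
  # is an assumption; any broker reachable by all engines would do.
  from celery import Celery

  app = Celery('heat_timeouts', broker='amqp://guest@localhost//')

  @app.task
  def stack_timeout(stack_id):
      # Any engine running a celery worker can execute this, so the timeout
      # survives the death of the engine that scheduled it.
      print('stack %s timed out, trigger recovery here' % stack_id)

  # Posting the timeout is one call: deliver the task after `countdown` seconds.
  stack_timeout.apply_async(args=('a-stack-uuid',), countdown=3600)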


Your thoughts.

-Vishnu
IRC: ckmvishnu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.messaging outcome from the summit

2014-11-13 Thread Flavio Percoco

On 12/11/14 15:22 -0500, Doug Hellmann wrote:

The oslo.messaging session at the summit [1] resulted in some plans to evolve 
how oslo.messaging works, but probably not during this cycle.

First, we talked about what to do about the various drivers like ZeroMQ and the 
new AMQP 1.0 driver. We decided that rather than moving those out of the main 
tree and packaging them separately, we would keep them all in the main 
repository to encourage the driver authors to help out with the core library 
(oslo.messaging is a critical component of OpenStack, and we’ve lost several of 
our core reviewers for the library to other priorities recently).

There is a new set of contributors interested in maintaining the ZeroMQ driver, 
and they are going to work together to review each other’s patches. We will 
re-evaluate keeping ZeroMQ at the end of Kilo, based on how things go this 
cycle.


I'd like to thank the folks that have stepped up for this driver. It's
great to see that there's some interest in cleaning it up and
maintaining it.

That said, if at the end of Kilo the zmq driver is still not in a
usable/maintainable state, I'd like us to be stricter about the plan
forward for it. We have asked for support at the last 3 summits, with
poor results for the previous 2 releases.

I don't mean to sound rude and I do believe the folks that have
stepped up will do a great job. Still, I'd like us to learn from
previous experiences and have a better plan for this driver (and
future cases like this one).



We also talked about the fact that the new version of Kombu includes some of 
the features we have implemented in our own driver, like heartbeats and 
connection management. Kombu does not include the calling patterns 
(cast/call/notifications) that we have in oslo.messaging, but we may be able to 
remove some code from our driver and consolidate the qpid and rabbit driver 
code to let Kombu do more of the work for us.


This sounds great. Please, whoever is going to work on this, feel free to
add me to the reviews.


Python 3 support is coming slowly. There are a couple of patches up for review 
to provide a different sort of executor based on greenio and trollius. Adopting 
that would require some application-level changes to use co-routines, so it may 
not be an optimal solution even though it would get us off of eventlet. (During 
the Python 3 session later in the week we talked about the possibility of 
fixing eventlet’s monkey-patching to allow us to use the new eventlet under 
python 3.)

We also talked about the way the oslo.messaging API uses URLs to get some 
settings and configuration options for others. I thought I remembered this 
being a conscious decision to pass connection-specific parameters in the URL, 
and “global” parameters via configuration settings. It sounds like that split 
may not have been implemented as cleanly as originally intended, though. We 
identified documenting URL parameters as an issue for removing the 
configuration object, as well as backwards-compatibility. I don’t think we 
agreed on any specific changes to the API based on this part of the discussion, 
but please correct me if your recollection is different.


I prefer URL parameters to specify options. As of now, I think we
treat URL parameters and config options as two different things. Is
this something we can change so that URL parameters are translated to
config options?
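
To illustrate the split I mean (the URL and option values here are just
examples, and exactly which query parameters each driver honours is the
undocumented part Doug mentions):

  # Sketch of the current split: connection-specific settings in the URL,
  # "global" behaviour in config options. Values are examples only.
  from oslo.config import cfg
  import oslo.messaging as messaging

  # Broker location and credentials travel in the transport URL...
  transport = messaging.get_transport(
      cfg.CONF, url='rabbit://stackrabbit:secret@broker.example.com:5672/')

  # ...while behaviour such as timeouts still comes from cfg.CONF options,
  # which is the two-different-things situation described above.
  target = messaging.Target(topic='compute', version='3.0')
  client = messaging.RPCClient(transport, target)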

I guess if we get to that point, we'd end up asking ourselves: Why
shouldn't we use just config options in that case?

I think one - historical (?) - answer to that is that we once thought
about not using oslo.config in oslo.messaging.

Looking forward to more feedback on this point; I unfortunately
missed this session because I had to attend another one.
Flavio

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Joshua Harlow
A question:

How is using something like celery in heat vs taskflow in heat (or at least 
the concept [1]) 'too much code change'?

Both seem like changes of a similar level ;-)

What was your metric for determining the code change either would have (out of 
curiosity)?

Perhaps you should look at [2], although I'm unclear on what the desired 
functionality is here.

Do you want the single engine to transfer its work to another engine when it 
'goes down'? If so, then the jobboard model + ZooKeeper inherently does this.

Or maybe you want something else? I'm probably confused because you seem to be 
asking for resource timeouts + recovery from engine failure (which seems like a 
liveness issue and not a resource timeout one); those 2 things seem separable.

[1] http://docs.openstack.org/developer/taskflow/jobs.html

[2] 
http://docs.openstack.org/developer/taskflow/examples.html#jobboard-producer-consumer-simple
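
As a sketch of the jobboard hand-off from [2] (the conf values are
assumptions, and it needs a reachable ZooKeeper):

  # Jobs live in ZooKeeper, so if the engine that claimed a job dies, its
  # ephemeral lock vanishes and another engine can reclaim the job.
  import contextlib

  from taskflow.jobs import backends as job_backends

  jb_conf = {
      'board': 'zookeeper',
      'hosts': ['localhost:2181'],
      'path': '/heat/jobs',
  }

  with contextlib.closing(job_backends.fetch('heat-timeouts', jb_conf)) as board:
      board.connect()
      board.post('stack-timeout-42', book=None, details={'stack_id': '42'})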

On Nov 13, 2014, at 12:29 AM, Murugan, Visnusaran visnusaran.muru...@hp.com 
wrote:

 Hi all,
  
 Convergence-POC distributes stack operations by sending resource actions over 
 RPC for any heat-engine to execute. The entire stack lifecycle will be controlled 
 by worker/observer notifications. This distributed model has its own 
 advantages and disadvantages.
  
 Any stack operation has a timeout, and a single engine will be responsible for 
 it. If that engine goes down, the timeout is lost along with it. The traditional 
 fix is for other engines to recreate the timeout from scratch. Also, a missed 
 resource action notification will be detected only when the stack operation 
 timeout fires.
  
 To overcome this, we will need the following capabilities:
 1.   Resource timeout (can be used for retry)
 2.   Recover from engine failure (loss of stack timeout, resource action 
 notification)
  
  
 Suggestion:
 1.   Use a task queue like celery to host timeouts for both stack and 
 resource.
 2.   Poll the database for engine failures and restart timers / retrigger 
 resource retry (IMHO this is the traditional approach and is heavyweight).
 3.   Migrate heat to use TaskFlow (too much code change).
  
 I am not suggesting we use TaskFlow. Using celery will require very minimal 
 code change (decorate the appropriate functions).
  
  
 Your thoughts.
  
 -Vishnu
 IRC: ckmvishnu
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Martin Geisler
Matthias Runge mru...@redhat.com writes:

 On Wed, Nov 12, 2014 at 08:35:18AM -0500, Monty Taylor wrote:
 Just for the record, I believe that we should choose the tools that
 make sense for making our software, as long as it's not physically
 impossible for them to be packaged. This means we should absolutely
 not use things that require multiple versions of node. The nodejs
 that's in trusty is new enough to work with all of the modern
 javascript tool chain things needed for this, so other than the
 various javascript tools and libraries not being packaged in the
 distros yet, it should be fine.

 Agreed. We're in the position to describe or define what we'd like to
 use or to see in the future. That may require us to create the required
 tools.

 You're not concerned about node.js? Most probably, since you're not
 distributing it. Looking at the changelog, I'm a bit worried[1]:

 [...]

For better or for worse, the JavaScript community is using Node,
Grunt/Gulp, bower, ... as the default infrastructure tools.

Not using them or putting effort into creating alternatives would be
working against that community and I would say it's wasted effort. That
effort could be put to better use in the core OpenStack code.

I haven't cared much about Node itself, it's just a VM that runs my
JavaScript code. If I were to deploy it on a server I would agree that
the security and robustness becomes critical.

I find npm and bower alright -- they do their job just fine. The
semantic versioning craze is strange to me, but you can avoid it by
fully specifying the versions you depend on.

I find Grunt and Gulp to be overrated. My very simple Gruntfile[1] now
has about 170 lines of JSON to configure the simple tasks. For a "copy
this to that" task, the JSON format is fine, but for more complex tasks
with several steps it feels forced. I mean, I need to copy files in
several tasks, so I end up with

  copy: {
  some_target: {
  ...
  },
  other_target: {
  ...
  }
  }

in one part of the Gruntfile and then

  task: {
  some_target: {
  ...
  }
  }

many lines away. There's nothing to connect the two pieces of
configuration than a third task that runs both.

The whole multi-task idea also seems strange to me. It feels like an
idea that felt nice when the system was small and now the entire system
is built around it. As an example, running 'grunt copy' by itself is
useless when the two copy targets are small parts of bigger tasks.


About Gulp... I don't get it and I don't buy it :) The implication that
using streams will make your build fast is at best an
over-simplification. While streaming data is cool, elevating this single
idea to the fundamental building block in a simple task runner seems
contrived.

It forces you into a pattern and you end up writing code looking like:

  gulp.src('./client/templates/*.jade')
.pipe(jade())
.pipe(gulp.dest('./build/templates'))
.pipe(minify())
.pipe(gulp.dest('./build/minified_templates'));

That might look cute for a minimal example like this, but I bet it'll
make things harder than they should be in the future. As in, how can I
inject more files into the stream conditionally? How do I read an
environment variable and inject some extra files? With normal JavaScript
I would know how to do this.

(Here I would apparently use gulp-if together with something called a
lazypipe and I would still need to somehow insert the extra files in the
stream.)

[1]: https://github.com/zerovm/swift-browser/blob/master/Gruntfile.js

-- 
Martin Geisler

http://google.com/+MartinGeisler


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] disambiguating the term discovery

2014-11-13 Thread Dmitry Tantsur

On 11/12/2014 10:47 PM, Victor Lowther wrote:

Hmmm... with this thread in mind, anyone think that changing DISCOVERING
to INTROSPECTING in the new state machine spec is a good idea?
As before I'm uncertain. Discovery is a troublesome term, but too many 
people use and recognize it, while IMO introspecting is much less 
common. So count me as -0 on this.




On Mon, Nov 3, 2014 at 4:29 AM, Ganapathy, Sandhya
sandhya.ganapa...@hp.com wrote:

Hi all,

Following the mail thread on disambiguating the term 'discovery' -

In the lines of what Devananda had stated, Hardware Introspection
also means retrieving and storing hardware details of the node whose
credentials and IP Address are known to the system. (Correct me if I
am wrong).

I am currently in the process of extracting hardware details (cpu,
memory etc..) of n no. of nodes belonging to a Chassis whose
credentials are already known to ironic. Does this process fall in
the category of hardware introspection?

Thanks,
Sandhya.

-Original Message-
From: Devananda van der Veen [mailto:devananda@gmail.com]
Sent: Tuesday, October 21, 2014 5:41 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Ironic] disambiguating the term discovery

Hi all,

I was reminded in the Ironic meeting today that the words "hardware
discovery" are overloaded and used in different ways by different
people. Since this is something we are going to talk about at the
summit (again), I'd like to start the discussion by building
consensus in the language that we're going to use.

So, I'm starting this thread to explain how I use those two words,
and some other words that I use to mean something else which is what
some people mean when they use those words. I'm not saying my words
are the right words -- they're just the words that make sense to my
brain right now. If someone else has better words, and those words
also make sense (or make more sense) then I'm happy to use those
instead.

So, here are rough definitions for the terms I've been using for the
last six months to disambiguate this:

hardware discovery
The process or act of identifying hitherto unknown hardware, which
is addressable by the management system, in order to later make it
available for provisioning and management.

hardware introspection
The process or act of gathering information about the properties or
capabilities of hardware already known by the management system.


Why is this disambiguation important? At the last midcycle, we
agreed that "hardware discovery" is out of scope for Ironic --
finding new, unmanaged nodes and enrolling them with Ironic is best
left to other services or processes, at least for the foreseeable future.

However, introspection is definitely within scope for Ironic. Even
though we couldn't agree on the details during Juno, we are going to
revisit this at the Kilo summit. This is an important feature for
many of our current users, and multiple proof of concept
implementations of this have been done by different parties over the
last year.

It may be entirely possible that no one else in our developer
community is using the term "introspection" in the way that I've
defined it above -- if so, that's fine, I can stop calling that
"introspection", but I don't know a better word for the thing that
is "find-unknown-hardware".

Suggestions welcome,
Devananda


P.S.

For what it's worth, googling for "hardware discovery" yields
several results related to identifying unknown network-connected
devices and adding them to inventory systems, which is the way that
I'm using the term right now, so I don't feel completely off in
continuing to say "discovery" when I mean "find unknown network
devices and add them to Ironic".





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()

2014-11-13 Thread Nikola Đipanov
On 11/13/2014 02:45 AM, Dan Smith wrote:
 I’m not sure if I’m seeing the second SELECT here either but I’m less
 familiar with what I’m looking at. compute_node_update() does the
 one SELECT as we said, then it doesn’t look like
 self._from_db_object() would emit any further SQL specific to that
 row.
 
 I don't think you're missing anything. I don't see anything in that
 object code, or the other db/sqlalchemy/api.py code that looks like a
 second select. Perhaps he was referring to two *queries*, being the
 initial select and the following update?
 

FWIW - I think an example Matt was giving me yesterday was block devices
where we have:

@require_context
def block_device_mapping_update(context, bdm_id, values, legacy=True):
    _scrub_empty_str_values(values, ['volume_size'])
    values = _from_legacy_values(values, legacy, allow_updates=True)
    query = _block_device_mapping_get_query(context).filter_by(id=bdm_id)
    query.update(values)
    return query.first()

which gets called from object save()
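
For anyone following along, a standalone toy (not the Nova code) showing
the two statements this pattern emits -- echo=True prints the UPDATE and
then the separate SELECT:

  # Toy demonstration of query.update() + query.first() producing two
  # statements: an UPDATE followed by a fresh SELECT. Schema is assumed.
  from sqlalchemy import Column, Integer, String, create_engine
  from sqlalchemy.orm import Session, declarative_base

  Base = declarative_base()

  class BDM(Base):
      __tablename__ = 'block_device_mapping'
      id = Column(Integer, primary_key=True)
      device_name = Column(String)

  engine = create_engine('sqlite://', echo=True)  # echo logs the emitted SQL
  Base.metadata.create_all(engine)

  with Session(engine) as session:
      session.add(BDM(id=1, device_name='/dev/vda'))
      session.commit()

      query = session.query(BDM).filter_by(id=1)
      query.update({'device_name': '/dev/vdb'})  # statement 1: UPDATE
      row = query.first()                        # statement 2: SELECT
      print(row.device_name)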

N.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] alpha version numbering discussion from summit

2014-11-13 Thread Thierry Carrez
Doug Hellmann wrote:
 The outcome of the “Should Oslo continue to use alpha versions” session at 
 the summit [1] was unclear, so I would like to continue the discussion here.
 
 As we discussed at the summit, the primary reason for marking Oslo library 
 releases as alphas was to indicate that the library is under development and 
 not “stable”, so it should not be included in a deployment using stable 
 branches. 
 
 I think we were very close to being able to say that Oslo could stop using 
 Alpha versions for new library releases because we would pin the versions of 
 libraries used in the stable branches to MAJOR.MINOR+1 to only allow bug-fix 
 releases to appear in deployments using those branches. However, we will not 
 (and perhaps cannot) pin the versions of client libraries, and some of the 
 clients are now using oslo.utils and potentially other oslo libraries. This 
 would either break the clients (if they used a feature of an oslo library not 
 in the version of the library supported by the server) or the server (if the 
 oslo library is upgraded and a setuptools requirements check notices or some 
 feature has been removed from the oslo library).
 
 We came to this realization just as we were running out of time for the 
 session, so we did not come up with a solution. I wasn’t able to attend the 
 stable branch session, so I am hoping that someone who was there will be able 
 to explain a bit about the version pinning discussion and how that may, or 
 may not, affect Oslo library versioning.

The stable branch discussion happened before the Alpha versioning one.
In that discussion, we considered generally pinning dependencies for
stable branches, to reduce breakage there and make it more likely we
reach 15+ months of support.

That said, since we don't have stable branches for client libraries, we
didn't plan to pin those, so we might still need to bump the client
library dependencies in stable/* requirements as they evolve.
Technically, we wouldn't really be freezing the stable requirements; we
would just bump them on an as-needed basis rather than automatically.

As far as the alpha versioning discussion goes, I think it could work,
as long as a released client library won't depend on an alpha of an oslo
library (which I think is a reasonable assumption). We would just need
to somehow remember to bump the oslo library in stable/* requirements
when the new client library depending on a new version of it is
released. Not sure how we can do that before the client library bump
breaks the branch, though?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Dmitry Tantsur

On 11/12/2014 08:06 PM, Doug Hellmann wrote:

During our “Graduation Schedule” summit session we worked through the list of 
modules remaining in the incubator. Our notes are in the etherpad [1], but as 
part of the “Write it Down” theme for Oslo this cycle I am also posting a 
summary of the outcome here on the mailing list for wider distribution. Let me know 
if you remembered the outcome for any of these modules differently than what I have 
written below.

Doug



Deleted or deprecated modules:

funcutils.py - This was present only for python 2.6 support, but it is no 
longer used in the applications. We are keeping it in the stable/juno branch of 
the incubator, and removing it from master (https://review.openstack.org/130092)

hooks.py - This is not being used anywhere, so we are removing it. 
(https://review.openstack.org/#/c/125781/)

quota.py - A new quota management system is being created 
(https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and should 
replace this, so we will keep it in the incubator for now but deprecate it.

crypto/utils.py - We agreed to mark this as deprecated and encourage the use of 
Barbican or cryptography.py (https://review.openstack.org/134020)

cache/ - Morgan is going to be working on a new oslo.cache library as a 
front-end for dogpile, so this is also deprecated 
(https://review.openstack.org/134021)

apiclient/ - With the SDK project picking up steam, we felt it was safe to 
deprecate this code as well (https://review.openstack.org/134024).

xmlutils.py - This module was used to provide a security fix for some XML 
modules that have since been updated directly. It was removed. 
(https://review.openstack.org/#/c/125021/)



Graduating:

oslo.context:
- Dims is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
- includes:
context.py

oslo.service:
- Sachi is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
- includes:
eventlet_backdoor.py
loopingcall.py
periodic_task.py
By the way, right now I'm looking into updating this code to be able to 
run tasks on a thread pool, not only in one thread (quite a problem for 
Ironic). Does it somehow interfere with the graduation? Any deadlines or 
something?
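
As a toy sketch of what I mean, using only the stdlib (the actual oslo
periodic_task code looks nothing like this):

  # Dispatch each periodic task onto a pool so one slow task cannot
  # starve the others (the single-thread problem mentioned above).
  import threading
  import time
  from concurrent.futures import ThreadPoolExecutor

  pool = ThreadPoolExecutor(max_workers=4)

  def run_periodically(interval, fn, *args):
      # Submit fn to the pool every `interval` seconds, without blocking
      # the scheduling thread on slow tasks.
      def tick():
          pool.submit(fn, *args)
          threading.Timer(interval, tick).start()
      tick()

  def sync_power_states():
      time.sleep(2)  # stand-in for a slow periodic task
      print('synced at %.0f' % time.time())

  run_periodically(5, sync_power_states)  # keeps rescheduling until killed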



request_utils.py
service.py
sslutils.py
systemd.py
threadgroup.py

oslo.utils:
- We need to look into how to preserve the git history as we import these 
modules.
- includes:
fileutils.py
versionutils.py



Remaining untouched:

scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
whether Gantt has enough traction yet so we will hold onto these in the 
incubator for at least another cycle.

report/ - There’s interest in creating an oslo.reports library containing this 
code, but we haven’t had time to coordinate with Solly about doing that.



Other work:

We will continue the work on oslo.concurrency and oslo.log that we started 
during Juno.

[1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Martin Geisler
Jiri Tomasek jtoma...@redhat.com writes:

 Which tools should we use eventually:

 Based on the contributions by Maxime, Martin and the others, I think
 the list of tools should end up as follows:

 Tooling:
 npm
 bower
 gulp

While I find the design of Gulp strange, I'm sure it will do the job.
Someone said that the Angular team is moving to it, so that is a +1 in
its favor.

 Jasmine
 Karma/Protractor(?)/eslint

I've used Protractor for my end-to-end tests and after getting to know
it, it works fine. It's my impression that it used to be annoying to
set up Selenium and actually write this kind of test -- with
Protractor you get up and running very quickly.

I don't have anything to compare it with, but it's the standard
for Angular development and that alone should be a strong hint.

-- 
Martin Geisler

http://google.com/+MartinGeisler


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.messaging outcome from the summit

2014-11-13 Thread Joshua Harlow
On Nov 13, 2014, at 12:38 AM, Flavio Percoco fla...@redhat.com wrote:

 On 12/11/14 15:22 -0500, Doug Hellmann wrote:
 The oslo.messaging session at the summit [1] resulted in some plans to 
 evolve how oslo.messaging works, but probably not during this cycle.
 
 First, we talked about what to do about the various drivers like ZeroMQ and 
 the new AMQP 1.0 driver. We decided that rather than moving those out of the 
 main tree and packaging them separately, we would keep them all in the main 
 repository to encourage the driver authors to help out with the core library 
 (oslo.messaging is a critical component of OpenStack, and we’ve lost several 
 of our core reviewers for the library to other priorities recently).
 
 There is a new set of contributors interested in maintaining the ZeroMQ 
 driver, and they are going to work together to review each other’s patches. 
 We will re-evaluate keeping ZeroMQ at the end of Kilo, based on how things 
 go this cycle.
 
 I'd like to thank the folks that have stepped up for this driver. It's
 great to see that there's some interest in cleaning it up and
 maintaining it.
 
 That said, if at the end of Kilo the zmq driver is still not in a
 usable/maintainable state, I'd like us to be stricter about the plan
 forward for it. We have asked for support at the last 3 summits, with
 poor results for the previous 2 releases.
 
 I don't mean to sound rude and I do believe the folks that have
 stepped up will do a great job. Still, I'd like us to learn from
 previous experiences and have a better plan for this driver (and
 future cases like this one).
 
 
 We also talked about the fact that the new version of Kombu includes some of 
 the features we have implemented in our own driver, like heartbeats and 
 connection management. Kombu does not include the calling patterns 
 (cast/call/notifications) that we have in oslo.messaging, but we may be able 
 to remove some code from our driver and consolidate the qpid and rabbit 
 driver code to let Kombu do more of the work for us.
 
 This sounds great. Please, whoever is going to work on this, feel free to
 add me to the reviews.
 
 Python 3 support is coming slowly. There are a couple of patches up for 
 review to provide a different sort of executor based on greenio and 
 trollius. Adopting that would require some application-level changes to use 
 co-routines, so it may not be an optimal solution even though it would get 
 us off of eventlet. (During the Python 3 session later in the week we talked 
 about the possibility of fixing eventlet’s monkey-patching to allow us to 
 use the new eventlet under python 3.)
 
 We also talked about the way the oslo.messaging API uses URLs to get some 
 settings and configuration options for others. I thought I remembered this 
 being a conscious decision to pass connection-specific parameters in the 
 URL, and “global” parameters via configuration settings. It sounds like that 
 split may not have been implemented as cleanly as originally intended, 
 though. We identified documenting URL parameters as an issue for removing 
 the configuration object, as well as backwards-compatibility. I don’t think 
 we agreed on any specific changes to the API based on this part of the 
 discussion, but please correct me if your recollection is different.
 
 I prefer URL parameters to specify options. As of now, I think we
 treat URL parameters and config options as two different things. Is
 this something we can change so that URL parameters are translated to
 config options?

I'd rather go completely with config and have something like 
https://review.openstack.org/#/c/130047/ which allows users that don't have 
a CLI accessible (aka from other libraries) to actually use oslo.messaging (for 
ex, taskflow). I believe URL parameters could work; it's just that config 
already provides typing (ints, bools, lists) and descriptions, and URLs have 
none of this (they also don't have a nested structure, aka grouping, which I 
believe some of oslo.messaging is using?).

 
 I guess if we get to that point, we'd end up asking ourselves: Why
 shouldn't we use just config options in that case?
 
 I think one - historical (?) - answer to that is that we once thought
 about not using oslo.config in oslo.messaging.

So would https://review.openstack.org/#/c/130047/ make that better?

A user that doesn't have access to oslo.config options would then just have to 
provide a dictionary of equivalent options. As described in that review (and 
pretty obvious), not everyone in the python world uses oslo.config options, 
and therefore we need/must have a compatibility layer for users to use 
if they so choose.

This could then look like the following when used: 
http://paste.ubuntu.com/8982657/

In all honesty I just want one way that works, whether that's URLs or config 
(IMHO the only way config will actually work is if we have an interface in 
oslo.config like the one described in 130047 for people that want to use 
oslo.messaging that are not 

Re: [openstack-dev] [sahara] no IRC meeting Nov 6 and Nov 13

2014-11-13 Thread Sergey Lukjanov
Reminder

On Tuesday, November 4, 2014, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hey Sahara folks,

 just a friendly reminder that there will be no IRC meetings for Sahara on
 both Nov 6 and Nov 13, because of the summit and a lot of folks who'll be
 travelling / taking vacations after it.

 We'll pick up the normal meeting schedule for Sahara on Nov 20.

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] APIImpact flag for specs

2014-11-13 Thread Lucas Alvares Gomes
On Thu, Nov 13, 2014 at 4:45 AM, Angus Salkeld asalk...@mirantis.com wrote:
 On Sat, Nov 1, 2014 at 6:45 AM, Everett Toews everett.to...@rackspace.com
 wrote:

 Hi All,

 Chris Yeoh started the use of an APIImpact flag in commit messages for
 specs in Nova. It adds a requirement for an APIImpact flag in the commit
 message for a proposed spec if it proposes changes to the REST API. This
 will make it much easier for people such as the API Working Group who want
 to review API changes across OpenStack to find and review proposed API
 changes.

 For example, specifications with the APIImpact flag can be found with the
 following query:


 https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:apiimpact,n,z

 Chris also proposed a similar change to many other projects and I did the
 rest. Here’s the complete list if you’d like to review them.

 Barbican: https://review.openstack.org/131617
 Ceilometer: https://review.openstack.org/131618
 Cinder: https://review.openstack.org/131620
 Designate: https://review.openstack.org/131621
 Glance: https://review.openstack.org/131622
 Heat: https://review.openstack.org/132338
 Ironic: https://review.openstack.org/132340
 Keystone: https://review.openstack.org/132303
 Neutron: https://review.openstack.org/131623
 Nova: https://review.openstack.org/#/c/129757
 Sahara: https://review.openstack.org/132341
 Swift: https://review.openstack.org/132342
 Trove: https://review.openstack.org/132346
 Zaqar: https://review.openstack.org/132348

 There are even more projects in stackforge that could use a similar
 change. If you know of a project in stackforge that would benefit from using
 an APIImpact flag in its specs, please propose the change and let us know
 here.


 I seem to have missed this, I'll place my review comment here too.

 I like the general idea of getting more consistent/better API. But, is
 reviewing every spec across all projects just going to introduce a new
 non-scalable bottleneck into our workflow (given the increasing move away from
 this approach: moving functional tests to projects, getting projects to do
 more of their own docs, etc.)? Wouldn't a better approach be to have an API
 liaison in each project that can keep track of new guidelines and catch
 potential problems?

I thought that was what we decided at the Summit. So +1, that's a great idea.


 I see a new section has been added here:
 https://wiki.openstack.org/wiki/CrossProjectLiaisons

 Isn't that enough?

Seems enough, at least to start with.

Lucas


 Regards
 Angus


 Thanks,
 Everett


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Murugan, Visnusaran
Hi,

The intention is not to transfer the workload of a failed engine onto an active 
one. The convergence implementation that we are working on will be able to recover 
from a failure, provided a timeout notification hits a heat-engine. All I want is a 
safe holding area for my timeout tasks. A timeout can be a stack timeout or a 
resource timeout.

By code change :) I meant that posting to a job queue will be a matter of decorating 
the timeout method and firing it for delayed execution. I felt that we need not use 
taskflow just for posting a delayed execution (a timer in our case).

Correct me if I'm wrong.

-Vishnu

From: Joshua Harlow [mailto:harlo...@outlook.com]
Sent: Thursday, November 13, 2014 2:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

A question:

How is using something like celery in heat vs taskflow in heat (or at least 
the concept [1]) 'too much code change'?

Both seem like changes of a similar level ;-)

What was your metric for determining the code change either would have (out of 
curiosity)?

Perhaps you should look at [2], although I'm unclear on what the desired 
functionality is here.

Do you want the single engine to transfer its work to another engine when it 
'goes down'? If so, then the jobboard model + ZooKeeper inherently does this.

Or maybe you want something else? I'm probably confused because you seem to be 
asking for resource timeouts + recovery from engine failure (which seems like a 
liveness issue and not a resource timeout one); those 2 things seem separable.

[1] http://docs.openstack.org/developer/taskflow/jobs.html

[2] 
http://docs.openstack.org/developer/taskflow/examples.html#jobboard-producer-consumer-simple

On Nov 13, 2014, at 12:29 AM, Murugan, Visnusaran 
visnusaran.muru...@hp.com wrote:


Hi all,

Convergence-POC distributes stack operations by sending resource actions over 
RPC for any heat-engine to execute. The entire stack lifecycle will be controlled 
by worker/observer notifications. This distributed model has its own advantages 
and disadvantages.

Any stack operation has a timeout, and a single engine will be responsible for 
it. If that engine goes down, the timeout is lost along with it. The traditional 
fix is for other engines to recreate the timeout from scratch. Also, a missed 
resource action notification will be detected only when the stack operation 
timeout fires.

To overcome this, we will need the following capabilities:
1.   Resource timeout (can be used for retry)
2.   Recover from engine failure (loss of stack timeout, resource action 
notification)


Suggestion:
1.   Use a task queue like celery to host timeouts for both stack and 
resource.
2.   Poll the database for engine failures and restart timers / retrigger 
resource retry (IMHO this is the traditional approach and is heavyweight).
3.   Migrate heat to use TaskFlow (too much code change).

I am not suggesting we use TaskFlow. Using celery will require very minimal 
code change (decorate the appropriate functions).


Your thoughts.

-Vishnu
IRC: ckmvishnu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zaqar] Preparing the ground for the upcoming 6 months

2014-11-13 Thread Flavio Percoco

Greetings,

We had an amazing summit last week where we discussed tons of things
for the long- and short-term future of the project. In order to reach those
big goals we set for the project, we need to tackle the short-term
goals first.

Please, if you have specs that need to be written, do it asap so that we
can review them and approve the ones that we need now. If you'd like
to provide feedback on the already proposed specs, you can do so by
going to this[0] link and reviewing the specs that are there.

[0] 
https://review.openstack.org/#/q/status:open+project:openstack/zaqar-specs,n,z

Thanks for joining our sessions last week,
Looking forward to the upcoming and already promising 6 months,
Flavio

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Policy file not reloaded after changes

2014-11-13 Thread Ajaya Agrawal
Hi All,

The policy file is not reloaded in glance after a change is made to it. You
need to restart glance to load the new policy file. I think all other
components reload the policy file after a change is made to it. Is it a bug
or intended behavior?
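
For comparison, my understanding is that the reload elsewhere is
mtime-based, roughly like this toy sketch (simplified; not the actual
common policy Enforcer or Glance code):

  # Toy mtime-based reload, mimicking in spirit what services that do
  # pick up policy changes rely on. Simplified and assumed behaviour.
  import json
  import os

  class TinyEnforcer(object):
      def __init__(self, policy_file):
          self.policy_file = policy_file
          self._mtime = None
          self._rules = {}

      def _load_rules(self):
          mtime = os.path.getmtime(self.policy_file)
          if mtime != self._mtime:  # file changed on disk: re-read it
              with open(self.policy_file) as f:
                  self._rules = json.load(f)
              self._mtime = mtime

      def enforce(self, rule, default=False):
          self._load_rules()  # reload check happens on every enforcement
          return self._rules.get(rule, default)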

Cheers,
Ajaya
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] agent_db module is_active VS is_agent_down

2014-11-13 Thread Gariganti, Sudhakar Babu
Hello Neutron folks,

I see that we have an agent property 'is_active' which internally uses the 
method is_agent_down() defined in AgentDbMixin to let us know if the agent is 
UP/DOWN, from the server's point of view.
But I don't see any of the service plugins or schedulers leveraging this 
property; they are all using is_agent_down() directly. Except for one 
occurrence in the dhcp_rpc_agent_api module, is_agent_down() is used wherever 
it applies.

Should we get rid of is_active altogether, or modify the existing calls to 
is_agent_down() to use is_active?
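
To make the redundancy concrete, a toy mirror of the pattern (not the
actual Neutron code; the 75s down-time default is an assumption):

  # is_active is just the negation of is_agent_down(), which is why the
  # two spellings are interchangeable today.
  import datetime

  AGENT_DOWN_TIME = datetime.timedelta(seconds=75)  # assumed default

  def is_agent_down(heartbeat):
      return datetime.datetime.utcnow() - heartbeat > AGENT_DOWN_TIME

  class Agent(object):
      def __init__(self, heartbeat_timestamp):
          self.heartbeat_timestamp = heartbeat_timestamp

      @property
      def is_active(self):
          return not is_agent_down(self.heartbeat_timestamp)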

Thanks,
Sudhakar.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()

2014-11-13 Thread Matthew Booth
On 13/11/14 08:52, Nikola Đipanov wrote:
 On 11/13/2014 02:45 AM, Dan Smith wrote:
 I’m not sure if I’m seeing the second SELECT here either but I’m less
 familiar with what I’m looking at. compute_node_update() does the
 one SELECT as we said, then it doesn’t look like
 self._from_db_object() would emit any further SQL specific to that
 row.

 I don't think you're missing anything. I don't see anything in that
 object code, or the other db/sqlalchemy/api.py code that looks like a
 second select. Perhaps he was referring to two *queries*, being the
 initial select and the following update?

 
 FWIW - I think an example Matt was giving me yesterday was block devices
 where we have:
 
 @require_context
 def block_device_mapping_update(context, bdm_id, values, legacy=True):
     _scrub_empty_str_values(values, ['volume_size'])
     values = _from_legacy_values(values, legacy, allow_updates=True)
     query = _block_device_mapping_get_query(context).filter_by(id=bdm_id)
     query.update(values)
     return query.first()
 
 which gets called from object save()

Yes, this is one example; another is Aggregate. I already had a big list
in the post and didn't want a second one.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()

2014-11-13 Thread Matthew Booth
On 12/11/14 23:23, Mike Bayer wrote:
 
 On Nov 12, 2014, at 10:56 AM, Matthew Booth mbo...@redhat.com wrote:

 For brevity, I have conflated what happens in object.save() with what
 happens in db.api. Where the code lives isn't relevant here: I'm only
 looking at what happens.

 Specifically, the following objects refresh themselves on save:

 Aggregate
 BlockDeviceMapping
 ComputeNode
 
 Excluding irrelevant complexity, the general model for objects which
 refresh on update is:

 object = select row from object table
 object.update()
 object.save()
 return select row from object table again

 Some objects skip out the second select and return the freshly saved
 object. That is, a save involves an update + either 1 or 2 selects.
 
 If I may inquire as to the irrelevant complexity, I’m trying to pinpoint 
 where you see this happening.

The irrelevant complexity is mostly munging values before they are
inserted into the db. While this needs to be there, I don't think it's
important to the post.

Matt

-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Matthias Runge
On 12/11/14 18:23, Jiri Tomasek wrote:

 I see the relation between Node.js and js libs/tools and an Angular app
 defining its dependencies using NPM and Bower as quite similar to Ruby,
 Rubygems and a Rails application defining its dependencies in Gemfile.lock.
 Rubygems are being packaged in distros, so why shouldn't node packages be?

Some of them are already packaged by distros, and we even have
guidelines to do that:
https://fedoraproject.org/wiki/Packaging:Node.js

But then you'll be using yum/dnf/whatever instead of npm to install it.

Matthias


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] openstack-stable-maint list has been made read-only

2014-11-13 Thread Alan Pevec
2014-11-11 11:01 GMT+01:00 Alan Pevec ape...@gmail.com:
...
 All stable maintenance related discussion should happen on
 openstack-dev with [stable] tag in the subject.


openstack-stable-maint list is now configured to discard posts from
non-members and reject all posts from members with the following
message:

 openstack-stable-maint list has been made read-only, explicit
Reply-To: header is set to
  openstack-dev@lists.openstack.org
 in list options.
 All stable branch maintenance related discussion should happen on
 openstack-dev list with [stable] tag in the subject.

Currently the only address allowed to post is jenk...@openstack.org
for periodic job failures.
Distros are encouraged to register the email address from which they
will post their 3rd-party CI results on stable branches.

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()

2014-11-13 Thread Matthew Booth
On 12/11/14 19:39, Mike Bayer wrote:
 
 On Nov 12, 2014, at 12:45 PM, Dan Smith d...@danplanet.com wrote:
 
 I personally favour having consistent behaviour across the board.
 How about updating them all to auto-refresh by default for
 consistency, but adding an additional option to save() to disable
 it for particular calls?
 
 I think these should be two patches: one to make them all
 auto-refresh, and another to make it conditional. That serves the
 purpose of (a) bisecting a regression to one or the other, and (b)
 we can bikeshed on the interface and appropriateness of the
 don't-refresh flag :)
 
 I also suggest a tactical fix to any object which fetches itself
 twice on update (e.g. Aggregate).
 
 I don't see that being anything other than an obvious win, unless
 there is some obscure reason for it. But yeah, seems like a good
 thing to do.
 
 lets keep in mind my everyone-likes-it-so-far proposal for reader()
 and writer(): https://review.openstack.org/#/c/125181/   (this is
 where it’s going to go as nobody has -1’ed it, so in absence of any
 “no way!” votes I have to assume this is what we’re going with).

FWIW, it got my +1, too. Looks great.

 in this system, the span of session use is implicit within the
 context and/or decorator, and when writer() is specified, a commit()
 can be implicit as well.  IMHO there should be no “.save()” at all,
 at least as far as database writing is concerned. SQLAlchemy
 doesn’t need boilerplate like that - just let the ORM work normally:
 
 @sql.writer
 def some_other_api_method(context):
     someobject = context.session.query(SomeObject)….one()
     someobject.change_some_state(stuff)
     # done!
 
 if you want an explicit refresh, then just do so:
 
 @sql.writer
 def some_other_api_method(context):
     someobject = context.session.query(SomeObject)….one()
     someobject.change_some_state(stuff)

     context.session.flush()
     context.session.refresh(someobject)
     # do something with someobject

Unfortunately this model doesn't apply to Nova objects, which are
persisted remotely. Unless I've missed something, SQLA doesn't run on
Nova Compute at all. Instead, when Nova Compute calls object.save(), this
results in an RPC call to Nova Conductor, which persists the object in
the DB using SQLA. Compute wouldn't be able to use common DB
transactions without some hairy lifecycle management in Conductor, so
Compute APIs need to be explicitly aware of this.

However, it absolutely makes sense for a single Conductor api call to
use a single transaction.

 however, seeing as this is all one API method the only reason you’d
 want to refresh() is that you think something has happened between
 that flush() and the refresh() that would actually show up, I can’t
 imagine what that would be looking for, unless maybe some large
 amount of operations took up a lot of time between the flush() and
 the refresh().

Given the above constraints, the problem I'm actually trying to solve is
when another process modifies an object underneath us between multiple,
remote transactions. This is one of the motivations for compare-and-swap
over row locking on read. Another is that the length of some API calls
makes holding a row lock for that long undesirable.
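
To illustrate compare-and-swap with a self-contained toy (assumed schema,
not Nova's models): the UPDATE matches only if the version we read earlier
is unchanged, so a concurrent writer shows up as rowcount 0 instead of
requiring a held row lock.

  # Compare-and-swap sketch with an assumed toy schema.
  from sqlalchemy import Column, Integer, String, create_engine
  from sqlalchemy.orm import Session, declarative_base

  Base = declarative_base()

  class Node(Base):
      __tablename__ = 'compute_nodes'
      id = Column(Integer, primary_key=True)
      state = Column(String)
      version = Column(Integer, nullable=False, default=0)

  engine = create_engine('sqlite://')
  Base.metadata.create_all(engine)

  with Session(engine) as session:
      session.add(Node(id=1, state='up', version=0))
      session.commit()

      read_version = 0  # version observed when the object was first read
      matched = session.query(Node).\
          filter_by(id=1, version=read_version).\
          update({'state': 'down', 'version': read_version + 1})
      session.commit()
      if matched == 0:
          # Someone else updated the row between our read and our write.
          raise RuntimeError('concurrent update detected, retry save()')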

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()

2014-11-13 Thread Matthew Booth
On 12/11/14 19:39, Mike Bayer wrote:
 lets keep in mind my everyone-likes-it-so-far proposal for reader()
 and writer(): https://review.openstack.org/#/c/125181/   (this is
 where it’s going to go as nobody has -1’ed it, so in absence of any
 “no way!” votes I have to assume this is what we’re going with).

Dan,

Note that this model, as I understand it, would conflict with storing
context in NovaObject.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



[openstack-dev] [Trove][Cinder][all] Gently reminder of our commit guidelines

2014-11-13 Thread Flavio Percoco

Greetings,

Every once in a while we need to revisit our guidelines and, even more
importantly, advocate for them and make sure the community, especially
core members, is on the same page with regard to those guidelines.

I've seen poor commit messages lately, which don't clearly explain what
the problem is and what the fix is doing. I don't want to point fingers
at specific reviews, but I'd like cores from the projects mentioned in
the subject to pay more attention to these details, since I've spotted
some already-merged reviews that don't follow the guidelines below.

Here's our commit message guideline: 
https://wiki.openstack.org/wiki/GitCommitMessages
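
For reference, the shape we're after is roughly this (an illustrative
example, not taken from a real review):

    Fix volume lookup when the backend name is empty

    An empty backend name made the lookup raise KeyError, which
    aborted the create flow. Fall back to the default backend and
    log a warning instead, matching the documented behaviour.

    Closes-Bug: #1234567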

Thanks for all the hard work,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Ironic] disambiguating the term discovery

2014-11-13 Thread Ganapathy, Sandhya
Hi All,

Based on the discussions, I have filed a blueprint that initiates discovery of 
node hardware details given its credentials at chassis level. I am in the 
process of creating a spec for it. Do share your thoughts regarding this - 

https://blueprints.launchpad.net/ironic/+spec/chassis-level-node-discovery

Thanks,
Sandhya.

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com] 
Sent: Thursday, November 13, 2014 2:20 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] disambiguating the term discovery

On 11/12/2014 10:47 PM, Victor Lowther wrote:
 Hmmm... with this thread in mind, anyone think that changing 
 DISCOVERING to INTROSPECTING in the new state machine spec is a good idea?
As before I'm uncertain. Discovery is a troublesome term, but too many people 
use and recognize it, while IMO introspecting is much less common. So count me 
as -0 on this.


 On Mon, Nov 3, 2014 at 4:29 AM, Ganapathy, Sandhya
 sandhya.ganapa...@hp.com wrote:

 Hi all,

 Following the mail thread on disambiguating the term 'discovery' -

 In the lines of what Devananda had stated, Hardware Introspection
 also means retrieving and storing hardware details of the node whose
 credentials and IP Address are known to the system. (Correct me if I
 am wrong).

 I am currently in the process of extracting hardware details (cpu,
 memory etc..) of n no. of nodes belonging to a Chassis whose
 credentials are already known to ironic. Does this process fall in
 the category of hardware introspection?

 Thanks,
 Sandhya.

 -Original Message-
 From: Devananda van der Veen [mailto:devananda@gmail.com]
 Sent: Tuesday, October 21, 2014 5:41 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Ironic] disambiguating the term discovery

 Hi all,

 I was reminded in the Ironic meeting today that the words hardware
 discovery are overloaded and used in different ways by different
 people. Since this is something we are going to talk about at the
 summit (again), I'd like to start the discussion by building
 consensus in the language that we're going to use.

 So, I'm starting this thread to explain how I use those two words,
 and some other words that I use to mean something else which is what
 some people mean when they use those words. I'm not saying my words
 are the right words -- they're just the words that make sense to my
 brain right now. If someone else has better words, and those words
 also make sense (or make more sense) then I'm happy to use those
 instead.

 So, here are rough definitions for the terms I've been using for the
 last six months to disambiguate this:

 hardware discovery
 The process or act of identifying hitherto unknown hardware, which
 is addressable by the management system, in order to later make it
 available for provisioning and management.

 hardware introspection
 The process or act of gathering information about the properties or
 capabilities of hardware already known by the management system.


 Why is this disambiguation important? At the last midcycle, we
 agreed that hardware discovery is out of scope for Ironic --
 finding new, unmanaged nodes and enrolling them with Ironic is best
 left to other services or processes, at least for the forseeable future.

 However, introspection is definitely within scope for Ironic. Even
 though we couldn't agree on the details during Juno, we are going to
 revisit this at the Kilo summit. This is an important feature for
 many of our current users, and multiple proof of concept
 implementations of this have been done by different parties over the
 last year.

 It may be entirely possible that no one else in our developer
 community is using the term introspection in the way that I've
 defined it above -- if so, that's fine, I can stop calling that
 introspection, but I don't know a better word for the thing that
 is find-unknown-hardware.

 Suggestions welcome,
 Devananda


 P.S.

 For what it's worth, googling for hardware discovery yields
 several results related to identifying unknown network-connected
 devices and adding them to inventory systems, which is the way that
 I'm using the term right now, so I don't feel completely off in
 continuing to say discovery when I mean find unknown network
 devices and add them to Ironic.


[openstack-dev] [stable] Organizational changes to support stable branches

2014-11-13 Thread Thierry Carrez
TL;DR:
Every project should designate a Stable branch liaison.

Hi everyone,

Last week at the summit we discussed evolving the governance around
stable branches, in order to maintain them more efficiently (and
hopefully for a longer time) in the future.

The current situation is the following: there is a single
stable-maint-core review team that reviews all backports for all
projects, making sure the stable rules are followed. This does not scale
that well, so we started adding project-specific people to the single
group, but they (rightfully) only care about one project. Things had to
change for Kilo. Here is what we came up with:

1. We propose that integrated projects with stable branches designate a
formal Stable Branch Liaison (by default, that would be the PTL, but I
strongly encourage someone specifically interested in stable branches to
step up). The Stable Branch Liaison is responsible for making sure
backports are proposed for critical issues in their project, and make
sure proposed backports are reviewed. They are also the contact point
for stable branch release managers around point release times.

2. We propose to set up project-specific review groups
($PROJECT-stable-core) which would be in charge of reviewing backports
for a given project, following the stable rules. Originally that group
should be the Stable Branch Liaison + stable-maint-core. The group is
managed by stable-maint-core, so that we make sure any addition is well
aware of the Stable Branch rules before they are added. The Stable
Branch Liaison should suggest names for addition to the group as needed.

3. The current stable-maint-core group would be reduced to stable branch
release managers and other active cross-project stable branch rules
custodians. We'll remove project-specific people and PTLs that were
added in the past. The new group would be responsible for granting
exceptions for all questionable backports raised by $PROJECT-stable-core
groups, providing backport review help everywhere, maintaining the stable
branch rules (and making sure they are respected), and educating proposed
$PROJECT-stable-core members on the rules.

4. Each stable branch (stable/icehouse, stable/juno...) that we
concurrently support should have a champion. Stable Branch Champions are
tasked with championing support for a specific stable branch, making sure
the branch stays in good shape and remains usable at all times. They
monitor periodic jobs failures and enlist the help of others in order to
fix the branches in case of breakage. They should also raise flags if
for some reason they are blocked and don't receive enough support, in
which case early abandon of the branch will be considered. Adam
Gandelman volunteered to be the stable/juno champion. Ihar Hrachyshka
(was) volunteered to be the stable/icehouse champion.

5. To set expectations right, and to let the meaning of stable gradually
evolve toward not changing over time, we propose to introduce
support phases for stable branches. During the first 6 months of life of
a stable branch (Phase I), any significant bug may be backported. During
the next 6 months of life of a stable branch (Phase II), only critical
issues and security fixes may be backported. After that and until end of
life (Phase III), only security fixes may be backported. That way, at
any given time, there is only one stable branch in Phase I support.

6. In order to raise awareness, all stable branch discussions will now
happen on the -dev list (with prefix [stable]). The
openstack-stable-maint list is now only used for periodic jobs reports,
and is otherwise read-only.

Let us know if you have any comment, otherwise we'll proceed to set
those new policies up.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Heat] Conditionals, was: New function: first_nonnull

2014-11-13 Thread Angus Salkeld
On Thu, Nov 13, 2014 at 4:00 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Zane Bitter's message of 2014-11-12 08:42:44 -0800:
  On 12/11/14 10:10, Clint Byrum wrote:
   Excerpts from Zane Bitter's message of 2014-11-11 13:06:17 -0800:
   On 11/11/14 13:34, Ryan Brown wrote:
   I am strongly against allowing arbitrary Javascript functions for
   complexity reasons. It's already difficult enough to get meaningful
   errors when you mess up your YAML syntax.
  
   Agreed, and FWIW literally everyone that Clint has pitched the JS idea
   to thought it was crazy ;)
  
  
   So far nobody has stepped up to defend me,
 
  I'll defend you, but I can't defend the idea :)
 
   so I'll accept that maybe
   people do think it is crazy. What I'm really confused by is why we have
   a new weird ugly language like YAQL (sorry, it, like JQ, is hideous),
 
  Agreed, and appealing to its similarity with Perl or PHP (or BASIC!) is
  probably not the way to win over Python developers :D
 
   and that would somehow be less crazy than a well known mature language
   that has always been meant for embedding such as javascript.
 
  JS is a Turing-complete language, it's an entirely different kettle of
  fish to a domain-specific language that is inherently safe to interpret
  from user input. Sure, we can try to lock it down. It's a very tricky
  job to get right. (Plus it requires a new external dependency of unknown
  quality... honestly if you're going to embed a Turing-complete language,
  Python is a much more obvious choice than JS.)
 

 There's a key difference though. Python was never designed to be run
 from untrusted sources. Javascript was _from the beginning_. There are
 at least two independent javascript implementations which both have been
 designed from the ground up to run code from websites in the local
 interpreter. From the standpoint of Heat, it would be even easier to do
 this.

 Perhaps I can carve out some of that negative-1000-days of free time I
 have and I can make it a resource plugin, with the properties being code
 and references to other resources, and the attributes being the return.

   Anyway, I'd prefer YAQL over trying to get the intrinsic functions in
   HOT just right. Users will want to do things we don't expect. I say,
 let
   them, or large sections of the users will simply move on to something
   else.
 
  The other side of that argument is that users are doing one of two
  things with data they have obtained from resources in the template:
 
  1) Passing data to software deployments
  2) Passing data to other resources
 
  In case (1) they can easily transform the data into whatever format they
  want using their own scripts, running on their own server.
 
  In case (2), if it's not easy for them to just do what they want without
  having to perform this kind of manipulation, we have failed to design
  good resources. And if we give people the tools to just paper over the
  problem, we'll never hear about it so we can correct it at the source,
  just launch a thousand hard-to-maintain hacks into the world.
 


case (3) is trying to write a half-useful template resource. With what we
have, this is very difficult. I think for non-trivial templates people
very quickly run into the limitations of HOT.



 I for one would rather serve the users than ourselves, and preventing
 them from papering over the problems so they have to whine at us is a
 self-serving agenda.

 As a primary whiner about Heat for a long time, I respect a lot that
 this development team _bends over backwards_ to respond to user
 requests. It's amazing that way.

 However, I think to grow beyond open source savvy, deeply integrated
 users like me, one has to let the users solve their own problems. They'll
 know that their javascript or YAQL is debt sometimes, and they can
 come to Heat's development community with suggestions like If you had
 a coalesce function I wouldn't need to write it in javascript. But if
 you don't give them _something_, they'll just move on.


Agree, I think we need to get this done. We can't just keep ignoring users
when
they are begging for the same feature, because supposedly they are doing it
wrong.

-Angus



 Anyway, probably looking further down the road than I need to, but
 please keep an open mind for this idea, as users tend to use tools that
 solve their problem _and_ get out of their way in all other cases.
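
For what it's worth, the semantics being asked for here are tiny; a
Python sketch of the intended behaviour (illustrative only, not a Heat
implementation):

    def first_nonnull(*args):
        # Return the first argument that is not None, else None.
        for arg in args:
            if arg is not None:
                return arg
        return None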



Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Angus Salkeld
On Thu, Nov 13, 2014 at 6:29 PM, Murugan, Visnusaran 
visnusaran.muru...@hp.com wrote:

  Hi all,



 Convergence-POC distributes stack operations by sending resource actions
 over RPC for any heat-engine to execute. Entire stack lifecycle will be
 controlled by worker/observer notifications. This distributed model has its
 own advantages and disadvantages.



 Any stack operation has a timeout and a single engine will be responsible
 for it. If that engine goes down, timeout is lost along with it. So a
 traditional way is for other engines to recreate timeout from scratch. Also
 a missed resource action notification will be detected only when stack
 operation timeout happens.



 To overcome this, we will need the following capability:

 1.   Resource timeout (can be used for retry)

We will shortly have a worker job; can't we have a job that just sleeps,
started in parallel with the job that is doing the work?
It gets to the end of the sleep and runs a check.

  2.   Recover from engine failure (loss of stack timeout, resource
 action notification)




My suggestion above could catch failures as long as it was run in a
different process.

-Angus




 Suggestion:

 1.   Use task queue like celery to host timeouts for both stack and
 resource.

 2.   Poll database for engine failures and restart timers/ retrigger
 resource retry (IMHO: This would be a traditional and weighs heavy)

 3.   Migrate heat to use TaskFlow. (Too many code change)



 I am not suggesting we use Task Flow. Using celery will have very minimum
 code change. (decorate appropriate functions)





 Your thoughts.



 -Vishnu

 IRC: ckmvishnu
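
For reference, the celery suggestion might look roughly like this (a
sketch assuming a configured celery app; the broker URL and names are
illustrative):

    from celery import Celery

    app = Celery('heat_timeouts', broker='redis://localhost:6379/0')

    @app.task
    def stack_timeout(stack_id):
        # Hypothetical check: mark the stack FAILED if it is still
        # IN_PROGRESS when the timeout fires.
        pass

    # When a stack operation starts, schedule its timeout:
    # stack_timeout.apply_async(args=[stack_id], countdown=3600)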



Re: [openstack-dev] [Ironic] disambiguating the term discovery

2014-11-13 Thread Lucas Alvares Gomes
Hi

On Thu, Nov 13, 2014 at 11:27 AM, Ganapathy, Sandhya
sandhya.ganapa...@hp.com wrote:
 Hi All,

 Based on the discussions, I have filed a blueprint that initiates discovery 
 of node hardware details given its credentials at chassis level. I am in the 
 process of creating a spec for it. Do share your thoughts regarding this -

 https://blueprints.launchpad.net/ironic/+spec/chassis-level-node-discovery

Thanks Sandhya for the spec. But I'd prefer if people DO NOT share their
thoughts in this thread; it's off-topic. What we are trying to sort
out here is whether we should use the term discovery for the approach
of finding out the physical characteristics of an already registered
Node in Ironic, or whether we should call it something else like
introspection or interrogation and leave the discovery term only
for the approach of discovering nodes that are not registered in
Ironic yet.

Implementation details like those your blueprint suggests (whether to do
it at the chassis level, the node level, or both) should go in another
thread.


 Thanks,
 Sandhya.

 -Original Message-
 From: Dmitry Tantsur [mailto:dtant...@redhat.com]
 Sent: Thursday, November 13, 2014 2:20 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Ironic] disambiguating the term discovery

 On 11/12/2014 10:47 PM, Victor Lowther wrote:
 Hmmm... with this thread in mind, anyone think that changing
 DISCOVERING to INTROSPECTING in the new state machine spec is a good idea?
 As before I'm uncertain. Discovery is a troublesome term, but too many people 
 use and recognize it, while IMO introspecting is much less common. So count 
 me as -0 on this.



Re: [openstack-dev] [all] HA cross project session summary and next steps

2014-11-13 Thread Angus Salkeld
On Tue, Nov 11, 2014 at 12:13 PM, Angus Salkeld asalk...@mirantis.com
wrote:

 Hi all

 The HA session was really well attended and I'd like to give some feedback
 from the session.

 Firstly there is some really good content here:
 https://etherpad.openstack.org/p/kilo-crossproject-ha-integration

 1. We SHOULD provide better health checks for OCF resources (
 http://linux-ha.org/wiki/OCF_Resource_Agents).
 These should be fast and reliable. We should probably bike shed on some
 convention like project-manage healthcheck
 and then roll this out for each project.

 2. We should really move
 https://github.com/madkiss/openstack-resource-agents to stackforge or
 openstack if the author is agreeable to it (it's referred to in our
 official docs).


I have chatted to the author of this repo and he is happy for it to live
under stackforge or openstack, or for each OCF resource to go into its
respective project.
Does anyone have any particular preference? I suspect stackforge will be
the path of least resistance.

-Angus


 3. All services SHOULD support Active/Active configurations
 (better scaling and it's always tested)

 4. We should be testing HA (there are a number of ideas on the etherpad
 about this)

 5. Many services do not recover in the case of failure mid-task.
 This seems like a big problem to me (some leave the DB in a mess).
 Someone linked to an interesting article
 (crash-only-software: http://lwn.net/Articles/191059/) that suggests
 that if we do this correctly we should not need the concept of clean
 shutdown.
 (https://github.com/openstack/oslo-incubator/blob/master/openstack/common/service.py#L459-L471)
  I'd be interested in how people think this needs to be approached
 (just raise bugs for each?).

 Regards
 Angus



Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Radomir Dopieralski
On 11/11/14 08:02, Richard Jones wrote:

[...]

 There were some discussions around tooling. We're using xstatic to
 manage 3rd party components, but there's a lot missing from that
 environment. I hesitate to add supporting xstatic components on to the
 already large pile of work we have to do, so would recommend we switch
 to managing those components with bower instead. For reference the list
 of 3rd party components I used in angboard* (which is really only a
 teensy fraction of the total application we'd end up with, so this
 components list is probably reduced):

[...]

 Just looking at PyPI, it looks like only a few of those are in xstatic,
 and those are out of date.

There is a very good reason why we only have a few external JavaScript
libraries, and why they are in those versions.

You see, we are not developing Horizon for our own enjoyment, or to
install it on our own webserver and be done with it. What we write has
to be then packaged for different Linux distributions by the packagers.
Those packagers have very little wiggle room with respect to how they
can package it all, and what they can include.

In particular, libraries should get packaged separately, so that they
can upgrade them and apply security patches and so on. Before we used
xstatic, they had to go through the sources of Horizon file by file,
and replace all of our bundled files with symlinks to what is provided
in their distribution. Obviously that was laborious and introduced bugs
when the versions of libraries didn't match.

So now we have the xstatic system. That means, that the libraries are
explicitly listed, with their minimum and maximum version numbers, and
it's easy to make a dummy xstatic package that just points at some
other location of the static files. This simplifies the work of the
packagers.

But the real advantage of using the xstatic packages is that in order to
add them to Horizon, you need to add them to the global-requirements
list, which is being watched and approved by the packagers themselves.
That means, that when you try to introduce a new library, or a version
of an old library, that is for some reason problematic for any of the
distributions (due to licensing issues, due to them needing to remain at
an older version, etc.), they get to veto it and you have a chance of
resolving the problem early, not dropping it at the last moment on the
packagers.

Going back to the versions of the xstatic packages that we use, they are
so old for a reason. Those are the newest versions that are available
with reasonable effort in the distributions for which we make Horizon.

If you want to replace this system with anything else, please keep in
contact with the packagers to make sure that the resulting process makes
sense and is acceptable for them.

-- 
Radomir Dopieralski




Re: [openstack-dev] [all] python-troveclient keystone v3 support breaking the world

2014-11-13 Thread Ihar Hrachyshka

On 12/11/14 15:17, Sean Dague wrote:
 
 1) just delete the trove exercise so we can move forward - 
 https://review.openstack.org/#/c/133930 - that will need to be 
 backported as well.

The patch is merged. Do we still need to backport it, bearing in mind
that the client revert [1] was merged? I guess not, but better to check.

Also, since trove client is back in shape, should we revert your
devstack patch?

[1]: https://review.openstack.org/#/c/133958/

/Ihar



[openstack-dev] [Ironic] Changing our weekly meeting format

2014-11-13 Thread Lucas Alvares Gomes
This was discussed in the Contributor Meetup on Friday at the Summit
but I think it's important to share it on the mailing list too so we can
get more opinions/suggestions/comments about it.

In the Ironic weekly meeting we dedicate a good time of the meeting to
do some announcements, reporting bug status, CI status, oslo status,
specific drivers status, etc... It's all good information, but I
believe that the mail list would be a better place to report it and
then we can free some time from our meeting to actually discuss
things.

Are you guys in favor of it?

If so I'd like to propose a new format based on the discussions we had
in Paris. For the people doing the status report on the meeting, they
would start adding the status to an etherpad and then we would have a
responsible person to get this information and send it to the mail
list once a week.

For the meeting itself we have a wiki page with an agenda[1] which
everyone can edit to put the topic they want to discuss in the meeting
there, I think that's fine and works. The only change about it would
be that we may want freeze the agenda 2 days before the meeting so
people can take a look at the topics that will be discussed and
prepare for it; With that we can move forward quicker with the
discussions because people will be familiar with the topics already.

Let me know what you guys think.

[1] https://wiki.openstack.org/wiki/Meetings/Ironic

Lucas



Re: [openstack-dev] [TripleO] [Ironic] [Cinder] Baremetal volumes -- how to model direct attached storage

2014-11-13 Thread Duncan Thomas
The problem with considering it a cinder volume rather than a nova
ephemeral volume is that it is just as leaky a set of semantics -
cinder volumes can be detached, attached elsewhere, snapshotted,
backed up, etc - a directly connected bare metal drive will be able to
do none of these things.

That said, the upcoming cinder-agent code might be of use - it is
designed to provide discovery and an API around local storage - but
mapping bare metal drives as cinder volumes is really no better than
mapping them as nova ephemeral drives - in both cases they don't match
the semantics. I'd rather not bend the cinder semantics out of shape
to clean up the nova ones.



On 13 November 2014 00:30, Clint Byrum cl...@fewbar.com wrote:
 At each summit since we created preserve ephemeral mode in Nova, I have
 conversations where at least one person's brain breaks for a
 second. There isn't always alcohol involved before; there almost
 certainly is always a drink needed after. The very term is vexing, and I
 think we have done ourselves a disservice to have it, even if it was the
 best option at the time.

 To be clear, in TripleO, we need a way to keep the data on a local
 direct attached storage device while deploying a new image to the box.
 If we were on VMs, we'd attach volumes, and just deploy new VMs and move
 the volume over. If we had a SAN, we'd just move the LUN's. But at some
 point when you deploy a cloud you're holding data that is expensive to
 replicate all at once, and so you'd rather just keep using the same
 server instead of trying to move the data.

 Since we don't have baremetal Cinder, we had to come up with a way to
 do this, so we used Nova rebuild, and slipped it a special command that
 said don't overwrite the partition you'd normally make the 'ephemeral'
 partition. This works fine, but it is confusing and limiting. We'd like
 something better.

 I had an interesting discussion with Devananda in which he suggested an
 alternative approach. If we were to bring up cinder-volume on our deploy
 ramdisks, and configure it in such a way that it claimed ownership of
 the section of disk we'd like to preserve, then we could allocate that
 storage as a volume. From there, we could boot from volume, or attach
 the volume to the instance (which would really just tell us how to find
 the volume). When we want to write a new image, we can just delete the old
 instance and create a new one, scheduled to wherever that volume already
 is. This would require the nova scheduler to have a filter available
 where we could select a host by the volumes it has, so we can make sure to
 send the instance request back to the box that still has all of the data.

 Alternatively we can keep on using rebuild, but let the volume model the
 preservation rather than our special case.

 Thoughts? Suggestions? I feel like this might take some time, but it is
 necessary to consider it now so we can drive any work we need to get it
 done soon.
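
 For concreteness, the select a host by the volumes it has idea could
 be sketched as a scheduler filter (assuming nova's BaseHostFilter
 interface; the hint name and logic are illustrative):

     from nova.scheduler import filters

     class LocalVolumeFilter(filters.BaseHostFilter):
         def host_passes(self, host_state, filter_properties):
             # Only pass the host that already holds the local volume,
             # if such a hint was provided with the boot request.
             hints = filter_properties.get('scheduler_hints') or {}
             wanted = hints.get('local_volume_host')
             return wanted is None or host_state.host == wanted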




-- 
Duncan Thomas



[openstack-dev] [neutron] L2 gateway as a service

2014-11-13 Thread Kamat, Maruti Haridas
Hi Friends,

 As discussed during the summit, I have uploaded the spec for review at 
https://review.openstack.org/#/c/134179/

Thanks,
Maruti





[openstack-dev] [Neutron] VMware networking support

2014-11-13 Thread Gary Kotton
Hi,
A few months back we started to work on an umbrella spec for VMware networking 
support (https://review.openstack.org/#/c/105369). There are a number of 
different proposals for a number of different use cases. In addition to 
providing one another with an update on our progress, we need to discuss the 
following challenges:

  *   At the summit there was talk about splitting out vendor code from the 
neutron code base. The aforementioned specs are not being approved until we 
have decided what we as a community want/need. We need to understand how we can 
continue our efforts and not be blocked or hindered by this debate.
  *   CI updates - in order to provide a new plugin we are required to provide 
CI (yes, this is written in stone and in some cases marble)
  *   Additional support may be required in the following:
 *   Nova - for example Neutron may be exposing extensions or functionality 
that requires Nova integrations
 *   Devstack - In order to get CI up and running we need devstack support

As a step forward, I would like to suggest that we meet in the #openstack-vmware 
channel on Tuesday at 15:00 UTC. Is this OK with everyone?
Thanks
Gary


[openstack-dev] [Doc] Bug Triage Day - Nov. 20

2014-11-13 Thread Anne Gentle
Hi all,
To follow in the footsteps of the inimitable Infra team, we want to start
monthly bug triaging days. Thursdays seem to be good days for our Aussie
counterparts and follow the milestone pattern. So I'd like to propose our
first for Nov 20.

The idea is not to fix bugs, but to triage them so that many people can
pick them up easily, even as a first contribution. In the comments, let
people know which file needs to be updated and even suggest wording.

To triage a doc bug in openstack-manuals[1] or openstack-api-site [2],
follow the docs in our HowTo page [3].

Here are some definitions for Status and Importance so you can triage
incoming doc bugs.

Status:

New - Recently logged by a non-triaging person
Incomplete - Needs additional information before it can be triaged
Opinion - (not sure what to do with this one)
Invalid - Not an issue for docs
Won't Fix - Doc fixes won't fix the issue
Confirmed - Acknowledged that it's a doc bug
Triaged - Comments in the bug indicate its scope and amount of work to be
done
In Progress - Someone is working on it
Fix Committed - A fix is in the repository; Gerrit sets this automatically.
Don't set this manually.
Fix Released - A fix is published to the site.

Importance:

Critical - data will be lost if this bug stays in; or it's so bad that
we're better off fixing it than dealing with all the incoming questions
about it. Also items on the website itself that prevent access are Critical
doc bugs.
High - Definitely need docs about this or a fix to current docs; docs are
incomplete without this. Work on these first if possible.
Medium - Need docs about this within a six-month release timeframe.
Low - Docs are fine without this but could be enhanced by fixing this bug.
Wishlist - Would like this doc task done some day. Would prefer to use
this for tasks instead of bugs - mark a bug as low rather than putting
it on the wishlist. When something is wrong with the doc, mark it as Low
rather than Wishlist.
Undecided - Recently logged by a non-triaging person or requires more
research before deciding its importance.

Let's set a goal of 30 triaged bugs. (Up to 60 would be amazing.)

I'll mark the day on the OpenStack calendar once we agree Thursday is good.
Thanks,
Anne

1. http://bugs.launchpad.net/openstack-manuals
2. http://bugs.launchpad.net/openstack-api-site
3.
https://wiki.openstack.org/wiki/Documentation/HowTo#Doc_Bug_Triaging_Guidelines


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Radomir Dopieralski
On 13/11/14 08:23, Matthias Runge wrote:

[...]

 Since we don't require node.js on the server (yet), but only for
 the development process: did anyone look at node's competitors? Like
 CommonJS, Rhino, or SpiderMonkey?

When we were struggling with adding jslint to our CI, we did try a
number of different alternatives to node.js, like Rhino, SpiderMonkey,
V8, phantomjs, etc.

The conclusion was that even tools that advertised themselves as working
on Rhino dropped their support for it several years ago, and just didn't
update the documentation. Node seems to be the only thing that works
without having to modify the code of those tools.

Of course things might have changed since, or we may have someone with
better JavaScript hacking skills who would manage to make it work. But
last year we failed.

-- 
Radomir Dopieralski




Re: [openstack-dev] [oslo] alpha version numbering discussion from summit

2014-11-13 Thread Doug Hellmann

On Nov 13, 2014, at 3:52 AM, Thierry Carrez thie...@openstack.org wrote:

 Doug Hellmann wrote:
 The outcome of the “Should Oslo continue to use alpha versions” session at 
 the summit [1] was unclear, so I would like to continue the discussion here.
 
 As we discussed at the summit, the primary reason for marking Oslo library 
 releases as alphas was to indicate that the library is under development and 
 not “stable”, so it should not be included in a deployment using stable 
 branches. 
 
 I think we were very close to being able to say that Oslo could stop using 
 Alpha versions for new library releases because we would pin the versions of 
 libraries used in the stable branches to MAJOR.MINOR+1 to only allow bug-fix 
 releases to appear in deployments using those branches. However, we will not 
 (and perhaps cannot) pin the versions of client libraries, and some of the 
 clients are now using oslo.utils and potentially other oslo libraries. This 
 would either break the clients (if they used a feature of an oslo library 
 not in the version of the library supported by the server) or the server (if 
 the oslo library is upgraded and a setuptools requirements check notices or 
 some feature has been removed from the oslo library).
 
 We came to this realization just as we were running out of time for the 
 session, so we did not come up with a solution. I wasn’t able to attend the 
 stable branch session, so I am hoping that someone who was there will be 
 able to explain a bit about the version pinning discussion and how that may, 
 or may not, affect Oslo library versioning.
 
 The stable branch discussion happened before the Alpha versioning one.
 In that discussion, we considered generally pinning dependencies for
 stable branches, to reduce breakage there and make it more likely we
 reach 15months+ of support.
 
 That said, since we don't have stable branches for client libraries, we
 didn't plan to pin those, so we might still need to bump the client
 libraries dependencies in stable/* requirements as they evolve.
 Technically, we wouldn't really be freezing the stable requirements, we
 would just bump them on a as needed basis rather than automatically.
 
 As far as the alpha versioning discussion goes, I think it could work,
 as long as a released client library won't depend on an alpha of an oslo
 library (which I think is a reasonable assumption). We would just need
 to somehow remember to bump the oslo library in stable/* requirements
 when the new client library depending on a new version of it is
 released. Not sure how we can do that before the client library bump
 breaks the branch, though?

That’s basically what we’re doing now. If we allow Oslo lib versions with new 
features to make it into the stable branches, we introduce some risk of 
backwards-incompatible changes slipping into the libraries. It’s unlikely we’d 
have an API change like that, since they are easy to spot, but we may have 
untested behaviors that someone depends on (the recent case of syslog 
addressing moving from /dev/log to the UDP port is a good example of that). I’m 
personally OK with that risk, as long as it's understood that there's not a huge 
team of people ready to drop whatever they’re doing and fix issues when they 
arise.
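
For concreteness, the MAJOR.MINOR+1 cap amounts to stable branch
requirements entries like this (version numbers are illustrative):

    # stable/* requirements: allow bug-fix releases only
    oslo.utils>=1.1.0,<1.2.0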

I do remember a comment at some point, and I’m not sure it was in this session, 
about using the per-project client libraries as “internal only” libraries when 
the new SDK matures enough that we can declare that the official external 
client library. That might solve the problem, since we could pin the version of 
the client libraries used, but it seems like a solution for the future rather 
than for this cycle.

Another solution is to require the client libraries to include support for 
multiple versions of any libraries they use (Oslo or third-party), which allows 
us to pin those libraries in stable branches and still have more recent 
versions available in clients that aren’t running in the stable environments. 
That may mean the clients don’t include some features when running in older 
environments, but as long as no bug fixes are involved I don’t see that as a 
major technical issue (leaving aside the hassle of having to write 
version-aware code like that).

Doug




Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Doug Hellmann

On Nov 13, 2014, at 3:52 AM, Dmitry Tantsur dtant...@redhat.com wrote:

 On 11/12/2014 08:06 PM, Doug Hellmann wrote:
 During our “Graduation Schedule” summit session we worked through the list 
 of modules remaining in the incubator. Our notes are in the etherpad 
 [1], but as part of the “Write it Down” theme for Oslo this cycle I am also 
 posting a summary of the outcome here on the mailing list for wider 
 distribution. Let me know if you remembered the outcome for any of these 
 modules differently than what I have written below.
 
 Doug
 
 
 
 Deleted or deprecated modules:
 
 funcutils.py - This was present only for python 2.6 support, but it is no 
 longer used in the applications. We are keeping it in the stable/juno branch 
 of the incubator, and removing it from master 
 (https://review.openstack.org/130092)
 
 hooks.py - This is not being used anywhere, so we are removing it. 
 (https://review.openstack.org/#/c/125781/)
 
 quota.py - A new quota management system is being created 
 (https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and should 
 replace this, so we will keep it in the incubator for now but deprecate it.
 
 crypto/utils.py - We agreed to mark this as deprecated and encourage the use 
 of Barbican or cryptography.py (https://review.openstack.org/134020)
 
 cache/ - Morgan is going to be working on a new oslo.cache library as a 
 front-end for dogpile, so this is also deprecated 
 (https://review.openstack.org/134021)
 
 apiclient/ - With the SDK project picking up steam, we felt it was safe to 
 deprecate this code as well (https://review.openstack.org/134024).
 
 xmlutils.py - This module was used to provide a security fix for some XML 
 modules that have since been updated directly. It was removed. 
 (https://review.openstack.org/#/c/125021/)
 
 
 
 Graduating:
 
 oslo.context:
 - Dims is driving this
 - https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
 - includes:
  context.py
 
 oslo.service:
 - Sachi is driving this
 - https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
 - includes:
  eventlet_backdoor.py
  loopingcall.py
  periodic_task.py
 By the way, right now I'm looking into updating this code to be able to run 
 tasks on a thread pool, not only in one thread (quite a problem for Ironic). 
 Does it somehow interfere with the graduation? Any deadlines or something?

Feature development on code declared ready for graduation is basically frozen 
until the new library is created. You should plan on doing that work in the new 
oslo.service repository, which should be showing up soon. And the feature you 
describe sounds like something for which we would want a spec written, so please 
consider filing one when you have some of the details worked out.
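
For reference, the thread-pool variant Dmitry describes is conceptually
small; a sketch using concurrent.futures, which is only an assumption
about the eventual design:

    from concurrent.futures import ThreadPoolExecutor

    def run_periodic_tasks(tasks, workers=4):
        # Run each task on a pool instead of serially in one thread.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for task in tasks:
                pool.submit(task)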

 
  request_utils.py
  service.py
  sslutils.py
  systemd.py
  threadgroup.py
 
 oslo.utils:
 - We need to look into how to preserve the git history as we import these 
 modules.
 - includes:
  fileutils.py
  versionutils.py
 
 
 
 Remaining untouched:
 
 scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
 whether Gantt has enough traction yet so we will hold onto these in the 
 incubator for at least another cycle.
 
 report/ - There’s interest in creating an oslo.reports library containing 
 this code, but we haven’t had time to coordinate with Solly about doing that.
 
 
 
 Other work:
 
 We will continue the work on oslo.concurrency and oslo.log that we started 
 during Juno.
 
 [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Radomir Dopieralski
On 13/11/14 01:32, Richard Jones wrote:
[...]

 We're currently using xstatic and that works with Linux packaging
 because it was designed to cope with being a global installation. The
 current Horizon codebase has a django-xstatic plugin which further makes
 dealing with xstatic components nicer - for example it handles path
 management and static file compilation (JS minification and
 concatenation, for example). That's really nice, but poses some problems:
 
 - we would need to xstatic-ify (and deb/rpm-ify) all those components

Yes. They will need to be deb/rpm/arch/slack/whatever-ified anyways,
because that's how the Linux distributions that are going to ship them work.

 - we could run into global version conflict issues if we run more than
 one service on a system - is this likely to be an issue in practise though?

Yes, this is an issue in practice, and that's why the packagers have a
say in what libraries and in what versions you are adding to the
global-requirements. We have to use versions that are the least problematic.

 - as far as I'm aware, the xstatic JS minification is not angular-aware,
 and will break angular code that has not been written to be
 dumb-minifier-aware (the angular minifier ngMin is written in node and
 knows how to do things more correctly); adding dumb-minifier-awareness
 to angular code makes it ugly and more error-prone :/

You can use any minifier with the django-compress plugin that Horizon
uses (django-xstatic has nothing to do with it). You just define the
command (or a filter written in Python) to use for every mime type.
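
For example, with django-compressor style settings that is roughly
(values are illustrative; check the plugin's docs for the exact names):

    COMPRESS_PRECOMPILERS = (
        # run an external command for a given mime type
        ('text/x-scss', 'sassc {infile} {outfile}'),
    )
    COMPRESS_JS_FILTERS = [
        'compressor.filters.jsmin.JSMinFilter',
    ]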

But I assume that ngMin is written in the Node.js language (which is
superficially similar to JavaScript) and therefore if we used it, you
would have to convince your fellow system administrators to install
node.js on their production servers. Violence may result.

[...]
-- 
Radomir Dopieralski



[openstack-dev] [nova][neutron] Migration from Nova-net to Neutron

2014-11-13 Thread Hassaan Pasha
Dear all,

I have some questions related to the migration of instances from nova-net
to neutron.

Assuming we have deployed Openstack with neutron disabled and we are
running some instances on it.

Now suppose we want to migrate the instances to neutron: what mechanism
would we require, considering we don't have neutron running?

Is there a way to have nova-net and neutron running simultaneously?
Can we enable neutron services during run time?

How would we handle the migration process? If we need to deploy a separate
stack with neutron enabled, I don't understand how nova would manage to
migrate the instances.

I would really appreciate your help to better understand how the migration
process would be managed.

Regards
Hassaan Pasha


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Murugan, Visnusaran
A parallel worker was what I initially thought of as well. But what do we do 
if the engine hosting that worker goes down?

-Vishnu

From: Angus Salkeld [mailto:asalk...@mirantis.com]
Sent: Thursday, November 13, 2014 5:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

On Thu, Nov 13, 2014 at 6:29 PM, Murugan, Visnusaran 
visnusaran.muru...@hp.com wrote:
Hi all,

Convergence-POC distributes stack operations by sending resource actions over 
RPC for any heat-engine to execute. Entire stack lifecycle will be controlled 
by worker/observer notifications. This distributed model has its own advantages 
and disadvantages.

Any stack operation has a timeout and a single engine will be responsible for 
it. If that engine goes down, timeout is lost along with it. So a traditional 
way is for other engines to recreate timeout from scratch. Also a missed 
resource action notification will be detected only when stack operation timeout 
happens.

To overcome this, we will need the following capability:

1.   Resource timeout (can be used for retry)
We will shortly have a worker job; can't we have a job that just sleeps, 
started in parallel with the job that is doing the work?
It gets to the end of the sleep and runs a check.

2.   Recover from engine failure (loss of stack timeout, resource action 
notification)


My suggestion above could catch failures as long as it was run in a different 
process.
-Angus


Suggestion:

1.   Use task queue like celery to host timeouts for both stack and 
resource.

2.   Poll database for engine failures and restart timers/ retrigger 
resource retry (IMHO: This would be a traditional and weighs heavy)

3.   Migrate heat to use TaskFlow. (Too many code change)

I am not suggesting we use Task Flow. Using celery will have very minimum code 
change. (decorate appropriate functions)


Your thoughts.

-Vishnu
IRC: ckmvishnu



Re: [openstack-dev] [stable][neutron] Fwd: Re: [Openstack-stable-maint] Neutron backports for security group performance

2014-11-13 Thread James Page

On 12/11/14 17:43, Kevin Benton wrote:
 This is awesome. I seem to have misplaced my 540-node cluster. ;-)
 
 Is it possible for you to also patch in
 https://review.openstack.org/#/c/132372/ ? In my rally testing of 
 port retrieval, this one probably made the most significant
 improvement.

Unfortunately not - our lab time on the infrastructure ended last week
and I had to (reluctantly) give everything back to HP.

That said, looking through all of the patches I applied to neutron, I
had that one in place as well - apologies for missing that
information in my first email!

Regards

James

-- 
James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org



Re: [openstack-dev] [Ironic] disambiguating the term discovery

2014-11-13 Thread Dmitry Tantsur

On 11/13/2014 12:27 PM, Ganapathy, Sandhya wrote:

Hi All,

Based on the discussions, I have filed a blueprint that initiates discovery of 
node hardware details given its credentials at chassis level. I am in the 
process of creating a spec for it. Do share your thoughts regarding this -

https://blueprints.launchpad.net/ironic/+spec/chassis-level-node-discovery
Hi and thank you for the suggestion. As already said, this thread is not 
the best place to discuss it, so please file a (short version of) spec, 
so that we can comment on it.


Thanks,
Sandhya.

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, November 13, 2014 2:20 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] disambiguating the term discovery

On 11/12/2014 10:47 PM, Victor Lowther wrote:

Hmmm... with this thread in mind, anyone think that changing
DISCOVERING to INTROSPECTING in the new state machine spec is a good idea?

As before I'm uncertain. Discovery is a troublesome term, but too many people 
use and recognize it, while IMO introspecting is much less common. So count me 
as -0 on this.



On Mon, Nov 3, 2014 at 4:29 AM, Ganapathy, Sandhya
sandhya.ganapa...@hp.com mailto:sandhya.ganapa...@hp.com wrote:

 Hi all,

 Following the mail thread on disambiguating the term 'discovery' -

 In the lines of what Devananda had stated, Hardware Introspection
 also means retrieving and storing hardware details of the node whose
 credentials and IP Address are known to the system. (Correct me if I
 am wrong).

 I am currently in the process of extracting hardware details (cpu,
 memory etc..) of n no. of nodes belonging to a Chassis whose
 credentials are already known to ironic. Does this process fall in
 the category of hardware introspection?

 Thanks,
 Sandhya.

 -Original Message-
 From: Devananda van der Veen [mailto:devananda@gmail.com
 mailto:devananda@gmail.com]
 Sent: Tuesday, October 21, 2014 5:41 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Ironic] disambiguating the term discovery

 Hi all,

 I was reminded in the Ironic meeting today that the words hardware
 discovery are overloaded and used in different ways by different
 people. Since this is something we are going to talk about at the
 summit (again), I'd like to start the discussion by building
 consensus in the language that we're going to use.

 So, I'm starting this thread to explain how I use those two words,
 and some other words that I use to mean something else which is what
 some people mean when they use those words. I'm not saying my words
 are the right words -- they're just the words that make sense to my
 brain right now. If someone else has better words, and those words
 also make sense (or make more sense) then I'm happy to use those
 instead.

 So, here are rough definitions for the terms I've been using for the
 last six months to disambiguate this:

 hardware discovery
 The process or act of identifying hitherto unknown hardware, which
 is addressable by the management system, in order to later make it
 available for provisioning and management.

 hardware introspection
 The process or act of gathering information about the properties or
 capabilities of hardware already known by the management system.


 Why is this disambiguation important? At the last midcycle, we
 agreed that hardware discovery is out of scope for Ironic --
 finding new, unmanaged nodes and enrolling them with Ironic is best
 left to other services or processes, at least for the foreseeable future.

 However, introspection is definitely within scope for Ironic. Even
 though we couldn't agree on the details during Juno, we are going to
 revisit this at the Kilo summit. This is an important feature for
 many of our current users, and multiple proof of concept
 implementations of this have been done by different parties over the
 last year.

 It may be entirely possible that no one else in our developer
 community is using the term introspection in the way that I've
 defined it above -- if so, that's fine, I can stop calling that
 introspection, but I don't know a better word for the thing that
 is find-unknown-hardware.

 Suggestions welcome,
 Devananda


 P.S.

 For what it's worth, googling for hardware discovery yields
 several results related to identifying unknown network-connected
 devices and adding them to inventory systems, which is the way that
 I'm using the term right now, so I don't feel completely off in
 continuing to say discovery when I mean find unknown network
 devices and add them to Ironic.


Re: [openstack-dev] [oslo] oslo.messaging outcome from the summit

2014-11-13 Thread Doug Hellmann

On Nov 13, 2014, at 3:38 AM, Flavio Percoco fla...@redhat.com wrote:

 On 12/11/14 15:22 -0500, Doug Hellmann wrote:
 The oslo.messaging session at the summit [1] resulted in some plans to 
 evolve how oslo.messaging works, but probably not during this cycle.
 
 First, we talked about what to do about the various drivers like ZeroMQ and 
 the new AMQP 1.0 driver. We decided that rather than moving those out of the 
 main tree and packaging them separately, we would keep them all in the main 
 repository to encourage the driver authors to help out with the core library 
 (oslo.messaging is a critical component of OpenStack, and we’ve lost several 
 of our core reviewers for the library to other priorities recently).
 
 There is a new set of contributors interested in maintaining the ZeroMQ 
 driver, and they are going to work together to review each other’s patches. 
 We will re-evaluate keeping ZeroMQ at the end of Kilo, based on how things 
 go this cycle.
 
 I'd like to thank the folks that have stepped up for this driver. It's
 great to see that there's some interest in cleaning it up and
 maintaining it.
 
 That said, if at the end of Kilo the zmq driver is still not in a
 usable/maintainable mode, I'd like us to be more strict with the plans
 forward for it. We asked for support in the last 3 summits with bad
 results for the previous 2 releases.
 
 I don't mean to sound rude and I do believe the folks that have
 stepped up will do a great job. Still, I'd like us to learn from
 previous experiences and have a better plan for this driver (and
 future cases like this one).

Absolutely. It seems that each time we ask for help, a new set of contributors 
step up. This is, I think, the third cycle where that has been the case? Three 
being widely recognized as a magic number (check your retry loops), either “the 
third time’s the charm” or “three strikes and you’re out” may apply in this 
case as well. :-) Of course, their success depends a great deal on *us* to 
review the changes, as well.

 
 
 We also talked about the fact that the new version of Kombu includes some of 
 the features we have implemented in our own driver, like heartbeats and 
 connection management. Kombu does not include the calling patterns 
 (cast/call/notifications) that we have in oslo.messaging, but we may be able 
 to remove some code from our driver and consolidate the qpid and rabbit 
 driver code to let Kombu do more of the work for us.
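 For example, a newer Kombu already handles the reconnect/retry dance our 
 rabbit driver hand-rolls today; a rough sketch (Kombu 3.x API, details 
 illustrative):

    import kombu

    # broker heartbeats and publish confirms come straight from Kombu now
    conn = kombu.Connection('amqp://guest:guest@localhost//',
                            heartbeat=10,
                            transport_options={'confirm_publish': True})

    producer = conn.Producer()
    # ensure() transparently re-establishes the connection and retries,
    # which is roughly the reconnect logic our driver implements by hand
    safe_publish = conn.ensure(producer, producer.publish, max_retries=3)
    safe_publish({'msg': 'hello'}, routing_key='test')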
 
 This sounds great. Please, whoever is going to work on this, feel free to
 add me to the reviews.
 
 Python 3 support is coming slowly. There are a couple of patches up for 
 review to provide a different sort of executor based on greenio and 
 trollius. Adopting that would require some application-level changes to use 
 co-routines, so it may not be an optimal solution even though it would get 
 us off of eventlet. (During the Python 3 session later in the week we talked 
 about the possibility of fixing eventlet’s monkey-patching to allow us to 
 use the new eventlet under python 3.)
 
 We also talked about the way the oslo.messaging API uses URLs to get some 
 settings and configuration options for others. I thought I remembered this 
 being a conscious decision to pass connection-specific parameters in the 
 URL, and “global” parameters via configuration settings. It sounds like that 
 split may not have been implemented as cleanly as originally intended, 
 though. We identified documenting URL parameters as an issue for removing 
 the configuration object, as well as backwards-compatibility. I don’t think 
 we agreed on any specific changes to the API based on this part of the 
 discussion, but please correct me if your recollection is different.
 
 I prefer URL parameters to specify options. As of now, I think we
 treat URL parameters and config options as two different things. Is
 this something we can change and translate URL parameters to config
 options?
 
 I guess if we get to that point, we'd end up asking ourselves: Why
 shouldn't we use just config options in that case?
 
 I think one - historical (?) - answer to that is that we once thought
 about not using oslo.config in oslo.messaging.

That’s true, and another reason was to handle cases like an arbitrary number of 
connections (ceilometer needed more than one, though I don’t remember if it was 
truly arbitrary). Normally the messaging settings in the config file are in one 
group, and the application doesn’t know the name of the group. If we use the 
config file to pull in all of the settings the application would need to know 
the group name(s) because the options themselves are (a) not part of the API 
and (b) not necessarily registered on the global config object (the library 
might use the config filter to hide the option settings). I think we decided 
that URLs were less ugly than exposing the config group names, though they do 
introduce some other complexity. 
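
To make the split concrete, here is roughly how the two sources of settings 
combine today (a sketch against the current oslo.messaging API; values are 
illustrative):

    from oslo.config import cfg
    from oslo import messaging

    # "global" behaviour comes from the config object; connection-specific
    # details (credentials, hosts, virtual host) ride in the URL, so an
    # application can hold several differently-targeted transports at once
    transport = messaging.get_transport(
        cfg.CONF, url='rabbit://heat:secret@broker-1:5672/openstack')

    target = messaging.Target(topic='engine', version='1.0')
    client = messaging.RPCClient(transport, target)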

As Josh points out in his message in this thread, the thing 

Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Doug Hellmann

On Nov 13, 2014, at 8:31 AM, Dmitry Tantsur dtant...@redhat.com wrote:

 On 11/13/2014 01:54 PM, Doug Hellmann wrote:
 
 On Nov 13, 2014, at 3:52 AM, Dmitry Tantsur dtant...@redhat.com wrote:
 
 On 11/12/2014 08:06 PM, Doug Hellmann wrote:
 During our “Graduation Schedule” summit session we worked through the list 
 of modules remaining the in the incubator. Our notes are in the etherpad 
 [1], but as part of the Write it Down” theme for Oslo this cycle I am 
 also posting a summary of the outcome here on the mailing list for wider 
 distribution. Let me know if you remembered the outcome for any of these 
 modules differently than what I have written below.
 
 Doug
 
 
 
 Deleted or deprecated modules:
 
 funcutils.py - This was present only for python 2.6 support, but it is no 
 longer used in the applications. We are keeping it in the stable/juno 
 branch of the incubator, and removing it from master 
 (https://review.openstack.org/130092)
 
 hooks.py - This is not being used anywhere, so we are removing it. 
 (https://review.openstack.org/#/c/125781/)
 
 quota.py - A new quota management system is being created 
 (https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and 
 should replace this, so we will keep it in the incubator for now but 
 deprecate it.
 
 crypto/utils.py - We agreed to mark this as deprecated and encourage the 
 use of Barbican or cryptography.py (https://review.openstack.org/134020)
 
 cache/ - Morgan is going to be working on a new oslo.cache library as a 
 front-end for dogpile, so this is also deprecated 
 (https://review.openstack.org/134021)
 
 apiclient/ - With the SDK project picking up steam, we felt it was safe to 
 deprecate this code as well (https://review.openstack.org/134024).
 
 xmlutils.py - This module was used to provide a security fix for some XML 
 modules that have since been updated directly. It was removed. 
 (https://review.openstack.org/#/c/125021/)
 
 
 
 Graduating:
 
 oslo.context:
 - Dims is driving this
 - 
 https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
 - includes:
context.py
 
 oslo.service:
 - Sachi is driving this
 - 
 https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
 - includes:
eventlet_backdoor.py
loopingcall.py
periodic_task.py
 By the way, right now I'm looking into updating this code to be able to run 
 tasks on a thread pool, not only in one thread (quite a problem for 
 Ironic). Does it somehow interfere with the graduation? Any deadlines or 
 something?
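 Roughly what I have in mind - a sketch only, with illustrative names, 
 assuming the incubator's PeriodicTasks internals:

    from concurrent import futures  # the 'futures' backport on Python 2

    from ironic.openstack.common import periodic_task


    class PooledPeriodicTasks(periodic_task.PeriodicTasks):
        """Illustrative only: fan tasks out to a pool instead of running
        them serially in the single timer thread."""

        def __init__(self, pool_size=8):
            super(PooledPeriodicTasks, self).__init__()
            self._pool = futures.ThreadPoolExecutor(max_workers=pool_size)

        def run_periodic_tasks(self, context, raise_on_error=False):
            # assumes self._periodic_tasks is a list of (name, task) pairs,
            # as in the current incubator code; a slow task no longer
            # delays the others
            for name, task in self._periodic_tasks:
                self._pool.submit(task, self, context)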
 
 Feature development on code declared ready for graduation is basically 
 frozen until the new library is created. You should plan on doing that work 
 in the new oslo.service repository, which should be showing up soon. And the 
 feature you describe sounds like something for which we would want a spec 
 written, so please consider filing one when you have some of the details 
 worked out.
 Sure, right now I'm experimenting in Ironic tree to figure out how it really 
 works. There's a single oslo-specs repo for the whole oslo, right?

Yes, that’s right openstack/oslo-specs. Having a branch somewhere as a 
reference would be great for the spec reviewers, so that seems like a good way 
to start.

Doug

 
 
 
request_utils.py
service.py
sslutils.py
systemd.py
threadgroup.py
 
 oslo.utils:
 - We need to look into how to preserve the git history as we import these 
 modules.
 - includes:
fileutils.py
versionutils.py
 
 
 
 Remaining untouched:
 
 scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
 whether Gantt has enough traction yet so we will hold onto these in the 
 incubator for at least another cycle.
 
 report/ - There’s interest in creating an oslo.reports library containing 
 this code, but we haven’t had time to coordinate with Solly about doing 
 that.
 
 
 
 Other work:
 
 We will continue the work on oslo.concurrency and oslo.log that we started 
 during Juno.
 
 [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Zane Bitter

On 13/11/14 06:52, Angus Salkeld wrote:

On Thu, Nov 13, 2014 at 6:29 PM, Murugan, Visnusaran
visnusaran.muru...@hp.com wrote:

Hi all,

Convergence-POC distributes stack operations by sending resource
actions over RPC for any heat-engine to execute. The entire stack
lifecycle will be controlled by worker/observer notifications. This
distributed model has its own advantages and disadvantages.

Any stack operation has a timeout, and a single engine will be
responsible for it. If that engine goes down, the timeout is lost along
with it, so the traditional way is for other engines to recreate the
timeout from scratch. Also, a missed resource action notification
will be detected only when the stack operation timeout happens.

To overcome this, we will need the following capabilities:

1. Resource timeout (can be used for retry)

We will shortly have a worker job; can't we have a job that just sleeps
that gets started in parallel with the job that is doing the work?
It gets to the end of the sleep and runs a check.


What if that worker dies too? There's no guarantee that it'd even be a
different worker. In fact, there's not even a guarantee that we'd have
multiple workers.


BTW Steve Hardy's suggestion, which I have more or less come around to,
is that the engines themselves should be the workers in convergence, to
save operators deploying two types of processes. (The observers will
still be a separate process though, in phase 2.)


2. Recover from engine failure (loss of stack timeout, resource
action notification)


My suggestion above could catch failures as long as it was run in a
different process.

-Angus


Suggestion:

1. Use a task queue like celery to host timeouts for both stack and
resource.

2. Poll the database for engine failures and restart timers/
retrigger resource retry (IMHO: this would be traditional and
weighs heavy)

3. Migrate heat to use TaskFlow. (Too many code changes.)

I am not suggesting we use TaskFlow. Using celery will require very
minimal code change (decorate appropriate functions).

Your thoughts.

-Vishnu

IRC: ckmvishnu






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Changing our weekly meeting format

2014-11-13 Thread Dmitry Tantsur

On 11/13/2014 01:15 PM, Lucas Alvares Gomes wrote:

This was discussed in the Contributor Meetup on Friday at the Summit
but I think it's important to share on the mail list too so we can get
more opinions/suggestions/comments about it.

In the Ironic weekly meeting we dedicate a good time of the meeting to
do some announcements, reporting bug status, CI status, oslo status,
specific drivers status, etc... It's all good information, but I
believe that the mail list would be a better place to report it and
then we can free some time from our meeting to actually discuss
things.

Are you guys in favor of it?

If so I'd like to propose a new format based on the discussions we had
in Paris. For the people doing the status report on the meeting, they
would start adding the status to an etherpad and then we would have a
responsible person to get this information and send it to the mail
list once a week.

For the meeting itself we have a wiki page with an agenda[1] which
everyone can edit to put the topic they want to discuss in the meeting
there, I think that's fine and works. The only change about it would
be that we may want to freeze the agenda 2 days before the meeting so
people can take a look at the topics that will be discussed and
prepare for it; With that we can move forward quicker with the
discussions because people will be familiar with the topics already.

Let me know what you guys think.
I'm not really fond of it (like any added process complexity), but it looks 
inevitable, so +1.




[1] https://wiki.openstack.org/wiki/Meetings/Ironic

Lucas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Migration from Nova-net to Neutron

2014-11-13 Thread Oleg Bondarev
Hi,

please see answers inline.

On Thu, Nov 13, 2014 at 2:59 PM, Hassaan Pasha pasha.hass...@gmail.com
wrote:

 Dear all,

 I have some questions related to the migration of instances from nova-net
 to neutron.

 Assuming we have deployed OpenStack with neutron disabled and we are
 running some instances on it.

 Now suppose we want to migrate the instances to neutron - what mechanism
 would we require, considering we don't have neutron running?


 Is there a way to have nova-net and neutron running simultaneously?
 Can we enable neutron services during run time?


AFAIK nothing prevents neutron from running along with nova-net. It only
matters how nova is configured (whether it uses neutron or nova-net as
primary network API)


 How are we handling the migration process? If we need to deploy a separate
 stack with neutron enabled, I don't understand how nova would manage to
 migrate the instances.


With the current plan (see
https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron) you
will not have to have a separate stack.



 I would really appreciate your help to better understand how the migration
 process would be managed.


Neutron migration is under design now, you'll be able to find more details
regarding the process once the specification is published.


 Regards
 Hassaan Pasha

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] reminder about meeting time change

2014-11-13 Thread Doug Hellmann
Before the summit the Oslo team agreed to change our meeting time. Next week 
will be the first time we meet under the new schedule.

17 Nov 2014 (and every Monday for the rest of Kilo, unless otherwise announced)
16:00 UTC
IRC: #openstack-meeting-alt
Agenda: https://wiki.openstack.org/wiki/Meetings/Oslo

Cores, liaisons, and other interested parties please try to make it to the 
meetings. We tend to use the meetings to ensure that we’re aware of critical 
issues and set review priorities for the coming week.

See you there!
Doug



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Zane Bitter

On 13/11/14 03:29, Murugan, Visnusaran wrote:

Hi all,

Convergence-POC distributes stack operations by sending resource actions
over RPC for any heat-engine to execute. Entire stack lifecycle will be
controlled by worker/observer notifications. This distributed model has
its own advantages and disadvantages.

Any stack operation has a timeout and a single engine will be
responsible for it. If that engine goes down, timeout is lost along with
it. So a traditional way is for other engines to recreate timeout from
scratch. Also a missed resource action notification will be detected
only when stack operation timeout happens.

To overcome this, we will need the following capability:

1. Resource timeout (can be used for retry)


I don't believe this is strictly needed for phase 1 (essentially we 
don't have it now, so nothing gets worse).


For phase 2, yes, we'll want it. One thing we haven't discussed much is 
that if we used Zaqar for this then the observer could claim a message 
but not acknowledge it until it had processed it, so we could have 
guaranteed delivery.
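
Something like this, assuming python-zaqarclient's claim API (the processing 
hook is hypothetical):

    from zaqarclient.queues import client

    zaqar = client.Client('http://zaqar.example.org:8888', version=1)
    queue = zaqar.queue('convergence-events')

    # claimed messages are invisible to other observers for `ttl` seconds;
    # if we crash before deleting them, the claim expires and another
    # observer picks them up -- i.e. at-least-once delivery
    for message in queue.claim(ttl=300, grace=60):
        handle(message.body)   # hypothetical processing hook
        message.delete()       # acknowledge only after successful processing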



2. Recover from engine failure (loss of stack timeout, resource action
notification)

Suggestion:

1. Use a task queue like celery to host timeouts for both stack and resource.


I believe Celery is more or less a non-starter as an OpenStack 
dependency because it uses Kombu directly to talk to the queue, vs. 
oslo.messaging which is an abstraction layer over Kombu, Qpid, ZeroMQ 
and maybe others in the future. i.e. requiring Celery means that some 
users would be forced to install Rabbit for the first time.


One option would be to fork Celery and replace Kombu with oslo.messaging 
as its abstraction layer. Good luck getting that maintained though, 
since Celery _invented_ Kombu to be its abstraction layer.



2. Poll the database for engine failures and restart timers/retrigger
resource retry (IMHO: this would be traditional and weighs heavy)

3. Migrate heat to use TaskFlow. (Too many code changes.)


If it's just handling timed triggers (maybe this is closer to #2) and 
not migrating the whole code base, then I don't see why it would be a 
big change (or even a change at all - it's basically new functionality). 
I'm not sure if TaskFlow has something like this already. If not we 
could also look at what Mistral is doing with timed tasks and see if we 
could spin some of it out into an Oslo library.


cheers,
Zane.


I am not suggesting we use TaskFlow. Using celery will require very
minimal code change (decorate appropriate functions).

Your thoughts.

-Vishnu

IRC: ckmvishnu



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Murugan, Visnusaran
Zane,

We do follow shardy's suggestion of having the worker/observer as eventlets in 
heat-engine. No new process. The timer will be executed under an engine's 
worker.

Question:
1. heat-engine processing a resource action fails (process killed)
2. heat-engine processing the timeout for a stack fails (process killed)

In the above-mentioned cases, I thought celery tasks would come to our rescue.

The convergence-poc implementation can recover from an error and retry if 
there is a notification available.
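
Roughly the kind of decoration I mean - a sketch only; the broker URL and the 
stack helpers are made up, not existing heat code:

    import time

    from celery import Celery

    app = Celery('heat', broker='amqp://guest@localhost//')


    @app.task(bind=True, max_retries=3)
    def stack_timeout(self, stack_id, deadline):
        stack = load_stack(stack_id)           # hypothetical helper
        if stack.in_progress() and time.time() >= deadline:
            stack.mark_failed('Timed out')     # hypothetical helper


    def arm_timeout(stack_id, timeout_secs):
        # the timer lives in the broker, not the engine process, so it
        # survives the death of the engine that scheduled it
        stack_timeout.apply_async(
            args=(stack_id, time.time() + timeout_secs),
            countdown=timeout_secs)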


-Vishnu

-Original Message-
From: Zane Bitter [mailto:zbit...@redhat.com] 
Sent: Thursday, November 13, 2014 7:05 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

On 13/11/14 06:52, Angus Salkeld wrote:
 On Thu, Nov 13, 2014 at 6:29 PM, Murugan, Visnusaran 
 visnusaran.muru...@hp.com wrote:

 Hi all,

 Convergence-POC distributes stack operations by sending resource
 actions over RPC for any heat-engine to execute. The entire stack
 lifecycle will be controlled by worker/observer notifications. This
 distributed model has its own advantages and disadvantages.

 Any stack operation has a timeout, and a single engine will be
 responsible for it. If that engine goes down, the timeout is lost along
 with it, so the traditional way is for other engines to recreate the
 timeout from scratch. Also, a missed resource action notification
 will be detected only when the stack operation timeout happens.

 To overcome this, we will need the following capabilities:

 1. Resource timeout (can be used for retry)

 We will shortly have a worker job; can't we have a job that just
 sleeps that gets started in parallel with the job that is doing the work?
 It gets to the end of the sleep and runs a check.

What if that worker dies too? There's no guarantee that it'd even be a 
different worker. In fact, there's not even a guarantee that we'd have multiple 
workers.

BTW Steve Hardy's suggestion, which I have more or less come around to, is that 
the engines themselves should be the workers in convergence, to save operators 
deploying two types of processes. (The observers will still be a separate 
process though, in phase 2.)

 2. Recover from engine failure (loss of stack timeout, resource
 action notification)

 My suggestion above could catch failures as long as it was run in a
 different process.

 -Angus

 Suggestion:

 1. Use a task queue like celery to host timeouts for both stack and
 resource.

 2. Poll the database for engine failures and restart timers/
 retrigger resource retry (IMHO: this would be traditional and
 weighs heavy)

 3. Migrate heat to use TaskFlow. (Too many code changes.)

 I am not suggesting we use TaskFlow. Using celery will require very
 minimal code change (decorate appropriate functions).

 Your thoughts.

 -Vishnu

 IRC: ckmvishnu






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()

2014-11-13 Thread Dan Smith
 On 12/11/14 19:39, Mike Bayer wrote:
 let's keep in mind my everyone-likes-it-so-far proposal for reader()
 and writer(): https://review.openstack.org/#/c/125181/   (this is
 where it’s going to go as nobody has -1’ed it, so in absence of any
 “no way!” votes I have to assume this is what we’re going with).
 
 Dan,
 
 Note that this model, as I understand it, would conflict with storing
 context in NovaObject.

Why do you think that? As you pointed out, the above model is purely
SQLA code, which is run by an object, long after the context has been
resolved, the call has been remoted, etc.
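
For reference, the usage proposed in that review looks roughly like this (the 
module path, decorator names and the models are approximations of the spec, 
not merged code):

    from oslo.db.sqlalchemy import enginefacade


    @enginefacade.reader
    def get_instance(context, instance_id):
        # the decorator provisions context.session for the scope of the call
        return context.session.query(models.Instance).get(instance_id)


    @enginefacade.writer
    def set_task_state(context, instance_id, state):
        context.session.query(models.Instance).filter_by(
            id=instance_id).update({'task_state': state})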

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Jastrzebski, Michal
Guys, I don't think we want to get into this cluster-management mud. You say 
let's make an observer... and what if the observer dies? Do we do 
observer-to-observer? And then there is split brain. I'm an observer, I've 
lost the connection to a worker. Should I restart the worker?
Maybe I'm the one who lost connection to the rest of the world? Should I 
resume the task and risk duplicate workload?

And then there is another problem. If a timeout is caused by the workers 
running out of resources, and we restart the whole workload after the timeout, 
we will stretch these resources even further, and in turn we'll get more 
timeouts (...) - a great way to kill the whole setup.

So we get to horizontal scalability - or a total lack of it. Any stack that is 
too complicated for a single engine to process will be impossible to process 
at all. We should find a way to distribute workloads in an active-active, 
stateless (as much as possible) manner.

Regards,
Michał inc0 Jastrzębski   

 -Original Message-
 From: Murugan, Visnusaran [mailto:visnusaran.muru...@hp.com]
 Sent: Thursday, November 13, 2014 2:59 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Heat] Using Job Queues for timeout ops
 
 Zane,
 
 We do follow shardy's suggestion of having worker/observer as eventlet in
 heat-engine. No new process. The timer will be executed under an engine's
 worker.
 
 Question:
 1. heat-engine processing resource-action failed (process killed) 2. heat-
 engine processing timeout for a stack fails (process killed)
 
 In the above mentioned cases, I thought celery tasks would come to our
 rescue.
 
 Convergence-poc implementation can recover from error and retry if there is
 a notification available.
 
 
 -Vishnu
 
 -Original Message-
 From: Zane Bitter [mailto:zbit...@redhat.com]
 Sent: Thursday, November 13, 2014 7:05 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Heat] Using Job Queues for timeout ops
 
 On 13/11/14 06:52, Angus Salkeld wrote:
   On Thu, Nov 13, 2014 at 6:29 PM, Murugan, Visnusaran
   visnusaran.muru...@hp.com wrote:
 
   Hi all,
  
   Convergence-POC distributes stack operations by sending resource
   actions over RPC for any heat-engine to execute. The entire stack
   lifecycle will be controlled by worker/observer notifications. This
   distributed model has its own advantages and disadvantages.
  
   Any stack operation has a timeout, and a single engine will be
   responsible for it. If that engine goes down, the timeout is lost along
   with it, so the traditional way is for other engines to recreate the
   timeout from scratch. Also, a missed resource action notification
   will be detected only when the stack operation timeout happens.
  
   To overcome this, we will need the following capabilities:
  
   1. Resource timeout (can be used for retry)
  
   We will shortly have a worker job; can't we have a job that just
   sleeps that gets started in parallel with the job that is doing the work?
   It gets to the end of the sleep and runs a check.
  
  What if that worker dies too? There's no guarantee that it'd even be a
  different worker. In fact, there's not even a guarantee that we'd have
  multiple workers.
  
  BTW Steve Hardy's suggestion, which I have more or less come around to, is
  that the engines themselves should be the workers in convergence, to save
  operators deploying two types of processes. (The observers will still be a
  separate process though, in phase 2.)
  
   2. Recover from engine failure (loss of stack timeout, resource
   action notification)
  
   My suggestion above could catch failures as long as it was run in a
   different process.
  
   -Angus
  
   Suggestion:
  
   1. Use a task queue like celery to host timeouts for both stack and
   resource.
  
   2. Poll the database for engine failures and restart timers/
   retrigger resource retry (IMHO: this would be traditional and
   weighs heavy)
  
   3. Migrate heat to use TaskFlow. (Too many code changes.)
  
   I am not suggesting we use TaskFlow. Using celery will require very
   minimal code change (decorate appropriate functions).
  
   Your thoughts.
  
   -Vishnu
  
   IRC: ckmvishnu
 
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()

2014-11-13 Thread Matthew Booth

On 13/11/14 14:26, Dan Smith wrote:
 On 12/11/14 19:39, Mike Bayer wrote:
 lets keep in mind my everyone-likes-it-so-far proposal for
 reader() and writer(): https://review.openstack.org/#/c/125181/
 (this is where it’s going to go as nobody has -1’ed it, so in
 absence of any “no way!” votes I have to assume this is what
 we’re going with).
 
 Dan,
 
 Note that this model, as I understand it, would conflict with
 storing context in NovaObject.
 
 Why do you think that? As you pointed out, the above model is
 purely SQLA code, which is run by an object, long after the context
 has been resolved, the call has been remoted, etc.

Can we guarantee that the lifetime of a context object in conductor is
a single rpc call, and that the object cannot be referenced from any
other thread? Seems safer just to pass it around.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()

2014-11-13 Thread Dan Smith
 Can we guarantee that the lifetime of a context object in conductor is
 a single rpc call, and that the object cannot be referenced from any
 other thread?

Yes, without a doubt.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Current development focus

2014-11-13 Thread Mike Scherbakov
Fuelers,
among the bugs we have to work on, please go over the 5.1.1 milestone first.
The plan was to declare a code freeze today, but we still have a
number of bugs open there. We need your help here.

Also, at
https://review.openstack.org/#/c/130717/
we have a release notes draft. It also needs attention.

Thanks,

On Wed, Nov 12, 2014 at 12:17 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Folks,
 as we all hurry to land features before the Feature Freeze
 deadline, we destabilize master. Right after FF, we must focus on
 stability and bug squashing.
 Now we are approaching Soft Code Freeze [1], which is planned for Nov
 13th. Master is still not very stable, and we are getting intermittent
 build failures.

 Let's focus on bug squashing, and first of all on critical bugs which
 are known to be causing BVT test failures. Please postpone, if possible,
 other action items, such as research for new features, until we are
 known to be in good shape with release candidates.

 [1] https://wiki.openstack.org/wiki/Fuel/6.0_Release_Schedule

 Thanks,
 --
 Mike Scherbakov
 #mihgen




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Zane Bitter

On 13/11/14 09:31, Jastrzebski, Michal wrote:

Guys, I don't think we want to get into this cluster management mud. You say 
let's
make observer...and what if observer dies? Do we do observer to observer? And 
then
there is split brain. I'm observer, I've lost connection to worker. Should I 
restart a worker?
Maybe I'm one who lost connection to the rest of the world? Should I resume 
task and risk
duplicate workload?


I think you're misinterpreting what we mean by observer. See 
https://wiki.openstack.org/wiki/Heat/ConvergenceDesign


- ZB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VMware networking support

2014-11-13 Thread Romil Gupta
Fine for me :)

On Thu, Nov 13, 2014 at 6:08 PM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
  A few months back we started to work on an umbrella spec for VMware
  networking support (https://review.openstack.org/#/c/105369). There are a
 number of different proposals for a number of different use cases. In
 addition to providing one another with an update of our progress we need to
 discuss the following challenges:

- At the summit there was talk about splitting out vendor code from
the neutron code base. The aforementioned specs are not being approved
until we have decided what we as a community want/need. We need to
understand how we can continue our efforts and not be blocked or hindered
by this debate.
- CI updates – in order to provide a new plugin we are required to
provide CI (yes, this is written in stone and in some cases marble)
- Additional support may be required in the following:
   - Nova – for example Neutron may be exposing extensions or
   functionality that requires Nova integrations
    - Devstack – In order to get CI up and running we need devstack
    support

 As a step forwards I would like to suggest that we meeting at
 #openstack-vmware channel on Tuesday at 15:00 UTC. Is this ok with everyone?
 Thanks
 Gary

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
*Regards,*

*Romil *
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Martin Geisler
Radomir Dopieralski openst...@sheep.art.pl writes:

 On 11/11/14 08:02, Richard Jones wrote:

 [...]

 There were some discussions around tooling. We're using xstatic to
 manage 3rd party components, but there's a lot missing from that
 environment. I hesitate to add supporting xstatic components on to
 the already large pile of work we have to do, so would recommend we
 switch to managing those components with bower instead. For reference
 the list of 3rd party components I used in angboard* (which is really
 only a teensy fraction of the total application we'd end up with, so
 this components list is probably reduced):

 [...]

 Just looking at PyPI, it looks like only a few of those are in xstatic,
 and those are out of date.

 There is a very good reason why we only have a few external JavaScript
 libraries, and why they are in those versions.

 You see, we are not developing Horizon for our own enjoyment, or to
 install it at our own webserver and be done with it. What we write has
 to be then packaged for different Linux distributions by the
 packagers. [...]

Maybe a silly question, but why insist on this? Why would you insist on
installing a JavaScript based application using your package manager?

I'm a huge fan of package managers and typically refuse to install
anything globally if it doesn't come as a package.

However, the whole JavaScript ecosystem seems to be centered around the
idea of doing local installations. That means that you no longer need
the package manager to install the software -- you only need a package
manager to install the base system (NodeJs and npm for JavaScript).

Notice that Python has been moving rapidly in the same direction for
years: you only need Python and pip to bootstrap yourself. After getting
used to virtualenv, I've mostly stopped installing Python modules
globally and that is how the JavaScript world expects you to work too.
(Come to think of it, the same applies to some extent to Haskell and
Emacs where there also exist nice package managers that'll pull in and
manage dependencies for you.)

So maybe the Horizon package should be an installer package like the
ones that download fonts or Adobe?

That package would pull in the right version of node and then run the
npm and bower commands to download the rest, plus (importantly and much
appreciated) put the files in a sensible location and give them good
permissions.

-- 
Martin Geisler

http://google.com/+MartinGeisler


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-11-13 05:54:03 -0800:
 On 13/11/14 03:29, Murugan, Visnusaran wrote:
  Hi all,
 
  Convergence-POC distributes stack operations by sending resource actions
  over RPC for any heat-engine to execute. Entire stack lifecycle will be
  controlled by worker/observer notifications. This distributed model has
  its own advantages and disadvantages.
 
  Any stack operation has a timeout and a single engine will be
  responsible for it. If that engine goes down, timeout is lost along with
  it. So a traditional way is for other engines to recreate timeout from
  scratch. Also a missed resource action notification will be detected
  only when stack operation timeout happens.
 
  To overcome this, we will need the following capability:
 
  1. Resource timeout (can be used for retry)
 
 I don't believe this is strictly needed for phase 1 (essentially we 
 don't have it now, so nothing gets worse).
 

We do have a stack timeout, and it stands to reason that we won't have a
single box with a timeout greenthread after this, so a strategy is
needed.

 For phase 2, yes, we'll want it. One thing we haven't discussed much is 
 that if we used Zaqar for this then the observer could claim a message 
 but not acknowledge it until it had processed it, so we could have 
 guaranteed delivery.


Frankly, if oslo.messaging doesn't support reliable delivery then we
need to add it. Zaqar should have nothing to do with this and is, IMO, a
poor choice at this stage, though I like the idea of using it in the
future so that we can make Heat more of an outside-the-cloud app.

  2. Recover from engine failure (loss of stack timeout, resource action
  notification)
 
  Suggestion:
 
  1. Use a task queue like celery to host timeouts for both stack and resource.
 
 I believe Celery is more or less a non-starter as an OpenStack 
 dependency because it uses Kombu directly to talk to the queue, vs. 
 oslo.messaging which is an abstraction layer over Kombu, Qpid, ZeroMQ 
 and maybe others in the future. i.e. requiring Celery means that some 
 users would be forced to install Rabbit for the first time.

 One option would be to fork Celery and replace Kombu with oslo.messaging 
 as its abstraction layer. Good luck getting that maintained though, 
 since Celery _invented_ Kombu to be its abstraction layer.
 

A slight side point here: Kombu supports Qpid and ZeroMQ. Oslo.messaging
is more about having a unified API than a set of magic backends. It
actually boggles my mind why we didn't just use kombu (cue 20 reactions
with people saying it wasn't EXACTLY right), but I think we're committed
to oslo.messaging now. Anyway, celery would need no such refactor, as
kombu would be able to access the same bus as everything else just fine.

  2. Poll the database for engine failures and restart timers/retrigger
  resource retry (IMHO: this would be traditional and weighs heavy)
 
  3. Migrate heat to use TaskFlow. (Too many code changes.)
 
 If it's just handling timed triggers (maybe this is closer to #2) and 
 not migrating the whole code base, then I don't see why it would be a 
 big change (or even a change at all - it's basically new functionality). 
 I'm not sure if TaskFlow has something like this already. If not we 
 could also look at what Mistral is doing with timed tasks and see if we 
 could spin some of it out into an Oslo library.
 

I feel like it boils down to something running periodically checking for
scheduled tasks that are due to run but have not run yet. I wonder if we
can actually look at Ironic for how they do this, because Ironic polls
power state of machines constantly, and uses a hash ring to make sure
only one conductor is polling any one machine at a time. If we broke
stacks up into a hash ring like that for the purpose of singleton tasks
like timeout checking, that might work out nicely.
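
A toy sketch of that idea (not Ironic's actual implementation, just the 
consistent-hash part): every engine computes the same ring, so each stack's 
timeout is watched by exactly one live engine, and membership changes 
redistribute the work:

    import bisect
    import hashlib


    class HashRing(object):
        def __init__(self, engines, replicas=64):
            # each engine gets `replicas` points on the ring so load
            # stays even as membership changes
            self._nodes = sorted(
                (self._hash('%s-%d' % (engine, r)), engine)
                for engine in engines for r in range(replicas))
            self._keys = [key for key, _ in self._nodes]

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode('utf8')).hexdigest(), 16)

        def owner(self, stack_id):
            index = bisect.bisect(self._keys, self._hash(stack_id))
            return self._nodes[index % len(self._nodes)][1]


    ring = HashRing(['engine-1', 'engine-2', 'engine-3'])
    # each engine only arms timeout checks for the stacks it owns, e.g.:
    # if ring.owner(stack.id) == my_engine_id: watch(stack)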

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] python-troveclient keystone v3 support breaking the world

2014-11-13 Thread Sean Dague
On 11/13/2014 07:14 AM, Ihar Hrachyshka wrote:
 On 12/11/14 15:17, Sean Dague wrote:
 
 1) just delete the trove exercise so we can move forward - 
 https://review.openstack.org/#/c/133930 - that will need to be 
 backported as well.
 
 The patch is merged. Do we still need to backport it, bearing in mind
 that the client revert [1] was merged? I guess not, but better to check.
 
 Also, since trove client is back in shape, should we revert your
 devstack patch?
 
 [1]: https://review.openstack.org/#/c/133958/

Honestly, devstack exercises are deprecated. I'd rather just keep it
out. They tend to be things that projects write once, then rot, and I
end up deleting or disabling them 6 months later.

Service testing should be in tempest, where we're in an environment
that's a bit more controlled, and has lots better debug information when
things go wrong.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Jastrzebski, Michal
By observer I mean the process which will actually notify about stack 
timeouts. Maybe it was a poor choice of words. Anyway, something will need to 
check which stacks have timed out, and that's a new single point of failure.

 -Original Message-
 From: Zane Bitter [mailto:zbit...@redhat.com]
 Sent: Thursday, November 13, 2014 3:49 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Heat] Using Job Queues for timeout ops
 
 On 13/11/14 09:31, Jastrzebski, Michal wrote:
  Guys, I don't think we want to get into this cluster management mud.
  You say let's make observer...and what if observer dies? Do we do
  observer to observer? And then there is split brain. I'm observer, I've lost
 connection to worker. Should I restart a worker?
  Maybe I'm one who lost connection to the rest of the world? Should I
  resume task and risk duplicate workload?
 
 I think you're misinterpreting what we mean by observer. See
 https://wiki.openstack.org/wiki/Heat/ConvergenceDesign
 
 - ZB
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Thomas Goirand
On 11/13/2014 12:13 PM, Richard Jones wrote:
 the npm stuff is all tool chain; tools
 that I believe should be packaged as such by packagers.

npm is already in Debian:
https://packages.debian.org/sid/npm

However, just like we can't use CPAN, pear install, pip install and
such when building or installing packages, we won't be able to use npm.
This means every single dependency that isn't in Debian will need to be
packaged.

 Horizon is an incredibly complex application. Just so we're all on the
 same page, the components installed by bower for angboard are:
 
 angular
   Because writing an application the size of Horizon without it would be
 madness :)
 angular-route
   Provides structure to the application through URL routing.
 angular-cookies
   Provides management of browser cookies in a way that integrates well
 with angular.
 angular-sanitize
   Allows direct embedding of HTML into angular templates, with sanitization.
 json3
   Compatibility for older browsers so JSON works.
 es5-shim
   Compatibility for older browsers so Javascript (ECMAScript 5) works.
 angular-smart-table
   Table management (population, sorting, filtering, pagination, etc)
 angular-local-storage
Browser local storage with cookie fallback, integrated with angular
 mechanisms.
 angular-bootstrap
Extensions to angular that leverage bootstrap (modal popups, tabbed
 displays, ...)
 font-awesome
Additional glyphs to use in the user interface (warning symbol, info
 symbol, ...)
 boot
Bootstrap for CSS styling (this is the dependency that brings in
 jquery and requirejs)
 underscore
Javascript utility library providing a ton of features Javascript
 lacks but Python programmers expect.
 ng-websocket
Angular-friendly interface to using websockets
 angular-translate
Support for localization in angular using message catalogs generated
 by gettext/transifex.
 angular-mocks
Mocking support for unit testing angular code
 angular-scenario
More support for angular unit tests
 
 Additionally, angboard vendors term.js because it was very poorly
 packaged in the bower ecosystem. +1 for xstatic there I guess :)
 
 So those are the components we needed to create the prototype in a few
 weeks. Not using them would have added months (or possibly years) to the
 development time. Creating an application of the scale of Horizon
 without leveraging all that existing work would be like developing
 OpenStack while barring all use of Python 3rd-party packages.

I have no problem with adding dependencies. That's how things work, for
sure; I just want to make sure it doesn't become hell, with so many
components inter-depending on hundreds of others, which would become
unmanageable. If we define clear boundaries, then fine! The above seems
reasonable anyway.

Though, did you list the dependencies of the above?

Also, if the Horizon project starts using something like NPM (which
again, is already available in Debian, so it has my preference), will we
at least be able to control what version gets in, just like with pip?
Because that's a huge concern for me, and this has been very well and
carefully addressed during the Juno cycle. I would very much appreciate
if the same kind of care was taken again during the Kilo cycle, whatever
path we take. How do I use npm by the way? Any pointer?

Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Rodrigo Duarte
Hi Doug,

I'm going to write the spec regarding the policy graduation; it will be
placed in the keystone-specs repository. I was wondering if someone has
examples of such specs so we can cover all the necessary points.

On Thu, Nov 13, 2014 at 10:34 AM, Doug Hellmann d...@doughellmann.com
wrote:


 On Nov 13, 2014, at 8:31 AM, Dmitry Tantsur dtant...@redhat.com wrote:

  On 11/13/2014 01:54 PM, Doug Hellmann wrote:
 
  On Nov 13, 2014, at 3:52 AM, Dmitry Tantsur dtant...@redhat.com
 wrote:
 
  On 11/12/2014 08:06 PM, Doug Hellmann wrote:
  During our “Graduation Schedule” summit session we worked through the
 list of modules remaining the in the incubator. Our notes are in the
 etherpad [1], but as part of the Write it Down” theme for Oslo this cycle
 I am also posting a summary of the outcome here on the mailing list for
 wider distribution. Let me know if you remembered the outcome for any of
 these modules differently than what I have written below.
 
  Doug
 
 
 
  Deleted or deprecated modules:
 
  funcutils.py - This was present only for python 2.6 support, but it
 is no longer used in the applications. We are keeping it in the stable/juno
 branch of the incubator, and removing it from master (
 https://review.openstack.org/130092)
 
  hooks.py - This is not being used anywhere, so we are removing it. (
 https://review.openstack.org/#/c/125781/)
 
  quota.py - A new quota management system is being created (
 https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and
 should replace this, so we will keep it in the incubator for now but
 deprecate it.
 
  crypto/utils.py - We agreed to mark this as deprecated and encourage
 the use of Barbican or cryptography.py (
 https://review.openstack.org/134020)
 
  cache/ - Morgan is going to be working on a new oslo.cache library as
 a front-end for dogpile, so this is also deprecated (
 https://review.openstack.org/134021)
 
  apiclient/ - With the SDK project picking up steam, we felt it was
 safe to deprecate this code as well (https://review.openstack.org/134024).
 
  xmlutils.py - This module was used to provide a security fix for some
 XML modules that have since been updated directly. It was removed. (
 https://review.openstack.org/#/c/125021/)
 
 
 
  Graduating:
 
  oslo.context:
  - Dims is driving this
  -
 https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
  - includes:
 context.py
 
  oslo.service:
  - Sachi is driving this
  -
 https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
  - includes:
 eventlet_backdoor.py
 loopingcall.py
 periodic_task.py
  By the way, right now I'm looking into updating this code to be able to
 run tasks on a thread pool, not only in one thread (quite a problem for
 Ironic). Does it somehow interfere with the graduation? Any deadlines or
 something?
 
  Feature development on code declared ready for graduation is basically
 frozen until the new library is created. You should plan on doing that work
 in the new oslo.service repository, which should be showing up soon. And
 the feature you describe sounds like something for which we would want a
 spec written, so please consider filing one when you have some of the
 details worked out.
  Sure, right now I'm experimenting in Ironic tree to figure out how it
 really works. There's a single oslo-specs repo for the whole oslo, right?

 Yes, that’s right openstack/oslo-specs. Having a branch somewhere as a
 reference would be great for the spec reviewers, so that seems like a good
 way to start.

 Doug

 
 
 
 request_utils.py
 service.py
 sslutils.py
 systemd.py
 threadgroup.py
 
  oslo.utils:
  - We need to look into how to preserve the git history as we import
 these modules.
  - includes:
 fileutils.py
 versionutils.py
 
 
 
  Remaining untouched:
 
  scheduler/ - Gantt probably makes this code obsolete, but it isn’t
 clear whether Gantt has enough traction yet so we will hold onto these in
 the incubator for at least another cycle.
 
  report/ - There’s interest in creating an oslo.reports library
 containing this code, but we haven’t had time to coordinate with Solly
 about doing that.
 
 
 
  Other work:
 
  We will continue the work on oslo.concurrency and oslo.log that we
 started during Juno.
 
  [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] TC election by the numbers

2014-11-13 Thread Thierry Carrez
Zane Bitter wrote:
 On 01/11/14 16:31, Eoghan Glynn wrote:
   1. *make a minor concession to proportionality* - while keeping the
  focus on consensus, e.g. by adopting the proportional Condorcet
  variant.
 
 It would be interesting to see the analysis again, but in the past this
 proved to not make much difference.

For the record, I just ran the ballots in CIVS proportional mode and
obtained the same set of winners:

http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_88cae988dff29be6

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Thomas Goirand
On 11/13/2014 08:05 PM, Radomir Dopieralski wrote:
 On 11/11/14 08:02, Richard Jones wrote:
 
 [...]
 
 There were some discussions around tooling. We're using xstatic to
 manage 3rd party components, but there's a lot missing from that
 environment. I hesitate to add supporting xstatic components on to the
 already large pile of work we have to do, so would recommend we switch
 to managing those components with bower instead. For reference the list
 of 3rd party components I used in angboard* (which is really only a
 teensy fraction of the total application we'd end up with, so this
 components list is probably reduced):
 
 [...]
 
 Just looking at PyPI, it looks like only a few of those are in xstatic,
 and those are out of date.
 
 There is a very good reason why we only have a few external JavaScript
 libraries, and why they are in those versions.
 
 You see, we are not developing Horizon for our own enjoyment, or to
 install it at our own webserver and be done with it. What we write has
 to be then packaged for different Linux distributions by the packagers.
 Those packagers have very little wiggle room with respect to how they
 can package it all, and what they can include.
 
 In particular, libraries should get packaged separately, so that they
 can upgrade them and apply security patches and so on. Before we used
 xstatic, they have to go through the sources of Horizon file by file,
 and replace all of our bundled files with symlinks to what is provided
 in their distribution. Obviously that was laborious and introduced bugs
 when the versions of libraries didn't match.
 
So now we have the xstatic system. That means that the libraries are
 explicitly listed, with their minimum and maximum version numbers, and
 it's easy to make a dummy xstatic package that just points at some
 other location of the static files. This simplifies the work of the
 packagers.
 
 But the real advantage of using the xstatic packages is that in order to
 add them to Horizon, you need to add them to the global-requirements
 list, which is being watched and approved by the packagers themselves.
That means that when you try to introduce a new library, or a version
 of an old library, that is for some reason problematic for any of the
 distributions (due to licensing issues, due to them needing to remain at
 an older version, etc.), they get to veto it and you have a chance of
 resolving the problem early, not dropping it at the last moment on the
 packagers.
 
 Going back to the versions of the xstatic packages that we use, they are
 so old for a reason. Those are the newest versions that are available
 with reasonable effort in the distributions for which we make Horizon.
 
 If you want to replace this system with anything else, please keep in
 contact with the packagers to make sure that the resulting process makes
 sense and is acceptable for them.

Thanks a lot for all you wrote above. I 100% agree with it, and you
wrote it better than I would have. Also, I'd like to thank you for the
work we did together during the Juno cycle. Interactions and
communication on IRC were great. I just hope this continues for Kilo, along
the lines of what you wrote above.

Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2014-11-13 00:45:07 -0800:
 A question;
 
 How is using something like celery in heat vs taskflow in heat (or at least 
concept [1]) 'too many code change'?
 
 Both seem like change of similar levels ;-)
 

I've tried a few times to dive into refactoring some things to use
TaskFlow at a shallow level, and have always gotten confused and
frustrated.

The amount of lines that are changed probably is the same. But the
massive shift in thinking is not an easy one to make. It may be worth some
thinking on providing a shorter bridge to TaskFlow adoption, because I'm
a huge fan of the idea and would _start_ something with it in a heartbeat,
but refactoring things to use it feels really weird to me.

 What was your metric for determining how much code change either would need (out 
 of curiosity)?
 
 Perhaps u should look at [2], although I'm unclear on what the desired 
 functionality is here.
 
 Do u want the single engine to transfer its work to another engine when it 
 'goes down'? If so, then the jobboard model + zookeeper inherently does this.
 
 Or maybe u want something else? I'm probably confused because u seem to be 
 asking for resource timeouts + recover from engine failure (which seems like 
 a liveness issue and not a resource timeout one), those 2 things seem 
 separable.
 

I agree with you on this. It is definitely a liveness problem. The
resource timeout isn't something I've seen discussed before. We do have
a stack timeout, and we need to keep on honoring that, but we can do
that with a job that sleeps for the stack timeout if we have a liveness
guarantee that will resurrect the job (with the sleep shortened by the
time since stack-update-time) somewhere else if the original engine
can't complete the job.
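
For illustration, a minimal sketch of that resurrect-with-shortened-sleep
idea (names are assumptions for illustration, not actual Heat code):

    import time

    def remaining_timeout(stack_updated_at, stack_timeout):
        # how long a resurrected timeout job should still sleep
        elapsed = time.time() - stack_updated_at
        return max(0, stack_timeout - elapsed)

    # a job resurrected on another engine sleeps only for what is left:
    # time.sleep(remaining_timeout(stack.updated_at, stack.timeout))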

 [1] http://docs.openstack.org/developer/taskflow/jobs.html
 
 [2] 
 http://docs.openstack.org/developer/taskflow/examples.html#jobboard-producer-consumer-simple
 
 On Nov 13, 2014, at 12:29 AM, Murugan, Visnusaran visnusaran.muru...@hp.com 
 wrote:
 
  Hi all,
   
  Convergence-POC distributes stack operations by sending resource actions 
  over RPC for any heat-engine to execute. Entire stack lifecycle will be 
  controlled by worker/observer notifications. This distributed model has its 
  own advantages and disadvantages.
   
  Any stack operation has a timeout and a single engine will be responsible 
  for it. If that engine goes down, the timeout is lost along with it. So a 
  traditional way is for other engines to recreate the timeout from scratch. Also 
  a missed resource action notification will be detected only when the stack 
  operation timeout happens.
   
  To overcome this, we will need the following capability:
  1.   Resource timeout (can be used for retry)
  2.   Recover from engine failure (loss of stack timeout, resource 
  action notification)
   
   
  Suggestion:
  1.       Use a task queue like celery to host timeouts for both stack and 
  resource.
  2.       Poll the database for engine failures and restart timers/retrigger 
  resource retry (IMHO: this would be the traditional approach and weighs heavy)
  3.   Migrate heat to use TaskFlow. (Too many code change)
   
  I am not suggesting we use Task Flow. Using celery would involve very minimal 
  code change. (decorate appropriate functions)
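
  For concreteness, a minimal sketch of that decorate-appropriate-functions
  idea with celery (the broker URL, task name and countdown pattern are
  assumptions for illustration, not Heat code):

      from celery import Celery

      app = Celery('heat_timeouts', broker='amqp://guest@localhost//')

      @app.task
      def stack_timeout(stack_id):
          # fail the stack operation if it is still running (illustrative)
          print('stack %s timed out' % stack_id)

      # scheduled when the stack operation starts:
      # stack_timeout.apply_async((stack.id,), countdown=stack.timeout)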
   
   
  Your thoughts.
   
  -Vishnu
  IRC: ckmvishnu
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Thomas Goirand
On 11/13/2014 08:32 AM, Richard Jones wrote:
 I note that the Debian JS guidelines* only recommend that libraries
 *should* be minified (though I'm unsure why they even recommend that).

I'm not sure why. Though what *must* be done is that source packages
should at no point include a minified version. Minification should be
done either at build time, or at runtime. There are already some issues
within the current XStatic packages that I had to deal with (eg: remove
these minified versions so it could be uploaded to Debian).

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Morgan Fainberg


 On Nov 12, 2014, at 14:22, Doug Hellmann d...@doughellmann.com wrote:
 
 
 On Nov 12, 2014, at 4:40 PM, Adam Young ayo...@redhat.com wrote:
 
 On 11/12/2014 02:06 PM, Doug Hellmann wrote:
 During our “Graduation Schedule” summit session we worked through the list 
 of modules remaining in the incubator. Our notes are in the etherpad 
 [1], but as part of the “Write it Down” theme for Oslo this cycle I am also 
 posting a summary of the outcome here on the mailing list for wider 
 distribution. Let me know if you remembered the outcome for any of these 
 modules differently than what I have written below.
 
 Doug
 
 
 
 Deleted or deprecated modules:
 
 funcutils.py - This was present only for python 2.6 support, but it is no 
 longer used in the applications. We are keeping it in the stable/juno 
 branch of the incubator, and removing it from master 
 (https://review.openstack.org/130092)
 
 hooks.py - This is not being used anywhere, so we are removing it. 
 (https://review.openstack.org/#/c/125781/)
 
 quota.py - A new quota management system is being created 
 (https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and 
 should replace this, so we will keep it in the incubator for now but 
 deprecate it.
 
 crypto/utils.py - We agreed to mark this as deprecated and encourage the 
 use of Barbican or cryptography.py (https://review.openstack.org/134020)
 
 cache/ - Morgan is going to be working on a new oslo.cache library as a 
 front-end for dogpile, so this is also deprecated 
 (https://review.openstack.org/134021)
 
 apiclient/ - With the SDK project picking up steam, we felt it was safe to 
 deprecate this code as well (https://review.openstack.org/134024).
 
 xmlutils.py - This module was used to provide a security fix for some XML 
 modules that have since been updated directly. It was removed. 
 (https://review.openstack.org/#/c/125021/)
 
 
 
 Graduating:
 
 oslo.context:
 - Dims is driving this
 - 
 https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
 - includes:
context.py
 
 oslo.service:
 - Sachi is driving this
 - 
 https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
 - includes:
eventlet_backdoor.py
loopingcall.py
periodic_task.py
request_utils.py
service.py
sslutils.py
systemd.py
threadgroup.py
 
 oslo.utils:
 - We need to look into how to preserve the git history as we import these 
 modules.
 - includes:
fileutils.py
versionutils.py
 You missed oslo.policy.  Graduating, and moving under the AAA program.
 
 I sure did. I thought we’d held a separate session on policy and I was going 
 to write it up separately, but now I’m not finding a link to a separate 
 etherpad. I must have been mixing that discussion up with one of the other 
 sessions.
 
 The Keystone team did agree to adopt the policy module and create a library 
 from it. I have Morgan and Adam down as volunteering to drive that process. 
 Since we’re changing owners, I’m not sure where we want to put the 
 spec/blueprint to track the work. Maybe under the keystone program, since 
 you’re doing the work?
 
Yeah, putting it in keystone specs makes the most sense, I think, of the locations 
we have today. 

--Morgan 

 
 
 
 
 Remaining untouched:
 
 scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
 whether Gantt has enough traction yet so we will hold onto these in the 
 incubator for at least another cycle.
 
 report/ - There’s interest in creating an oslo.reports library containing 
 this code, but we haven’t had time to coordinate with Solly about doing 
 that.
 
 
 
 Other work:
 
 We will continue the work on oslo.concurrency and oslo.log that we started 
 during Juno.
 
 [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Undead DB objects: ProviderFirewallRule and InstanceGroupPolicy?

2014-11-13 Thread Matthew Booth
There are 3 db apis relating to ProviderFirewallRule:
provider_fw_rule_create, provider_fw_rule_get_all, and
provider_fw_rule_destroy. Of these, only provider_fw_rule_get_all seems
to be used. i.e. It seems they can be queried, but not created.

InstanceGroupPolicy doesn't seem to be used anywhere at all.
_validate_instance_group_policy() in compute manager seems to be doing
something else.

Are these undead relics in need of a final stake through the heart, or
is something else going on here?

Thanks,

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Is this fix introducing another different bug to dhcp-agent?

2014-11-13 Thread Miguel Ángel Ajo
I believe this fix to IPv6 dhcp spawn breaks isolated metadata when we have a 
subnet combination like this on a network:

1) IPv6 subnet, with DHCP enabled
2) IPv4 subnet, with isolated metadata enabled.


https://review.openstack.org/#/c/123671/1/neutron/agent/dhcp_agent.py  

I haven’t been able to test yet, but wanted to share it before I forget.




Miguel Ángel
ajo @ freenode.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] mixing vif drivers e1000 and virtio

2014-11-13 Thread Srini Sundararajan
Hi,
When I create an instance with more than 1 vif, how can I pick and
choose/configure which driver (e1000/virtio) I can assign?
Many thanks
Sri
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VMware networking support

2014-11-13 Thread Armando M.
I chimed in on another thread, but I am restating my point just in case.

On 13 November 2014 04:38, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 A few months back we started to work on an umbrella spec for VMware
 networking support (https://review.openstack.org/#/c/105369). There are a
 number of different proposals for a number of different use cases. In
 addition to providing one another with an update of our progress we need to
 discuss the following challenges:

- At the summit there was talk about splitting out vendor code from
the neutron code base. The aforementioned specs are not being approved
until we have decided what we as a community want/need. We need to
understand how we can continue our efforts and not be blocked or hindered
by this debate.

 The proposal of allowing vendor plugins to be in full control of their own
destiny will be submitted as any other blueprint and will be discussed as
any other community effort. In my opinion, there is no need to be blocked
waiting to see whether the proposal goes anywhere. Spec, code and CI being
submitted will have minimal impact irrespective of any decision reached.

So my suggestion is to keep your code current with trunk, and do your 3rd
Party CI infrastructure homework, so that when we are ready to pull the
trigger there will be no further delay.


- CI updates – in order to provide a new plugin we are required to
provide CI (yes, this is written in stone and in some cases marble)
- Additional support may be required in the following:
   - Nova – for example Neutron may be exposing extensions or
   functionality that requires Nova integrations
   - Devstack – In order to get CI up and running we need devstack
   support

  As a step forward I would like to suggest that we meet in the
 #openstack-vmware channel on Tuesday at 15:00 UTC. Is this ok with everyone?
 Thanks
 Gary

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Ryan Brown
On 11/13/2014 09:58 AM, Clint Byrum wrote:
 Excerpts from Zane Bitter's message of 2014-11-13 05:54:03 -0800:
 On 13/11/14 03:29, Murugan, Visnusaran wrote:

 [snip]

 3.Migrate heat to use TaskFlow. (Too many code change)

 If it's just handling timed triggers (maybe this is closer to #2) and 
 not migrating the whole code base, then I don't see why it would be a 
 big change (or even a change at all - it's basically new functionality). 
 I'm not sure if TaskFlow has something like this already. If not we 
 could also look at what Mistral is doing with timed tasks and see if we 
 could spin some of it out into an Oslo library.

 
 I feel like it boils down to something running periodically checking for
 scheduled tasks that are due to run but have not run yet. I wonder if we
 can actually look at Ironic for how they do this, because Ironic polls
 power state of machines constantly, and uses a hash ring to make sure
 only one conductor is polling any one machine at a time. If we broke
 stacks up into a hash ring like that for the purpose of singleton tasks
 like timeout checking, that might work out nicely.

+1

Using a hash ring is a great way to shard tasks. I think the most
sensible way to add this would be to make timeout polling a
responsibility of the Observer instead of the engine.
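
For illustration, a minimal consistent-hash-ring sketch (hypothetical
names, not the Ironic implementation) showing how stacks could be
sharded so exactly one engine owns the timeout checks for any stack:

    import bisect
    import hashlib

    def _hash(key):
        return int(hashlib.md5(key.encode('utf-8')).hexdigest(), 16)

    class HashRing(object):
        def __init__(self, engines, replicas=16):
            # several virtual points per engine smooth the distribution
            self._ring = sorted((_hash('%s-%d' % (e, i)), e)
                                for e in engines for i in range(replicas))
            self._keys = [h for h, _ in self._ring]

        def owner(self, stack_id):
            idx = bisect.bisect(self._keys, _hash(stack_id)) % len(self._ring)
            return self._ring[idx][1]

    ring = HashRing(['engine-1', 'engine-2', 'engine-3'])
    # each engine polls only the stacks it owns:
    # if ring.owner(stack.id) == my_engine_id: check_timeout(stack)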

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Is this fix introducing another different bug to dhcp-agent?

2014-11-13 Thread Robert Li (baoli)
Nice catch. Since it’s already merged, a new bug may be in order.

—Robert

On 11/13/14, 10:25 AM, Miguel Ángel Ajo majop...@redhat.com wrote:

I believe this fix to IPv6 dhcp spawn breaks isolated metadata when we have a 
subnet combination like this on a network:

1) IPv6 subnet, with DHCP enabled
2) IPv4 subnet, with isolated metadata enabled.


https://review.openstack.org/#/c/123671/1/neutron/agent/dhcp_agent.py

I haven’t been able to test yet, but wanted to share it before I forget.




Miguel Ángel
ajo @ freenode.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Steve Martinelli
Looking at http://specs.openstack.org/openstack/oslo-specs/ and 
http://specs.openstack.org/openstack/keystone-specs/ should give you all the 
info you need. The specs are hosted at: 
https://github.com/openstack/keystone-specs there's a template spec too.

Thanks,

_
Steve Martinelli
OpenStack Development - Keystone Core Member
Phone: (905) 413-2851
E-Mail: steve...@ca.ibm.com



From:   Rodrigo Duarte rodrigodso...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date:   11/13/2014 10:13 AM
Subject:Re: [openstack-dev] [oslo] kilo graduation plans



Hi Doug,

I'm going to write the spec regarding the policy graduation; it will be 
placed in the keystone-specs repository. I was wondering if someone has 
examples of such specs so we can cover all the necessary points.

On Thu, Nov 13, 2014 at 10:34 AM, Doug Hellmann d...@doughellmann.com 
wrote:

On Nov 13, 2014, at 8:31 AM, Dmitry Tantsur dtant...@redhat.com wrote:

 On 11/13/2014 01:54 PM, Doug Hellmann wrote:

 On Nov 13, 2014, at 3:52 AM, Dmitry Tantsur dtant...@redhat.com 
wrote:

 On 11/12/2014 08:06 PM, Doug Hellmann wrote:
  During our “Graduation Schedule” summit session we worked through the 
list of modules remaining in the incubator. Our notes are in the 
etherpad [1], but as part of the “Write it Down” theme for Oslo this cycle 
I am also posting a summary of the outcome here on the mailing list for 
wider distribution. Let me know if you remembered the outcome for any of 
these modules differently than what I have written below.

 Doug



 Deleted or deprecated modules:

 funcutils.py - This was present only for python 2.6 support, but it 
is no longer used in the applications. We are keeping it in the 
stable/juno branch of the incubator, and removing it from master (
https://review.openstack.org/130092)

 hooks.py - This is not being used anywhere, so we are removing it. (
https://review.openstack.org/#/c/125781/)

 quota.py - A new quota management system is being created (
https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and 
should replace this, so we will keep it in the incubator for now but 
deprecate it.

 crypto/utils.py - We agreed to mark this as deprecated and encourage 
the use of Barbican or cryptography.py (
https://review.openstack.org/134020)

 cache/ - Morgan is going to be working on a new oslo.cache library as 
a front-end for dogpile, so this is also deprecated (
https://review.openstack.org/134021)

 apiclient/ - With the SDK project picking up steam, we felt it was 
safe to deprecate this code as well (https://review.openstack.org/134024).

 xmlutils.py - This module was used to provide a security fix for some 
XML modules that have since been updated directly. It was removed. (
https://review.openstack.org/#/c/125021/)



 Graduating:

 oslo.context:
 - Dims is driving this
 - 
https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context

 - includes:
context.py

 oslo.service:
 - Sachi is driving this
 - 
https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service

 - includes:
eventlet_backdoor.py
loopingcall.py
periodic_task.py
 By the way, right now I'm looking into updating this code to be able to 
run tasks on a thread pool, not only in one thread (quite a problem for 
Ironic). Does it somehow interfere with the graduation? Any deadlines or 
something?

 Feature development on code declared ready for graduation is basically 
frozen until the new library is created. You should plan on doing that 
work in the new oslo.service repository, which should be showing up soon. 
And the you describe feature sounds like something for which we would want 
a spec written, so please consider filing one when you have some of the 
details worked out.
 Sure, right now I'm experimenting in Ironic tree to figure out how it 
really works. There's a single oslo-specs repo for the whole oslo, right?

Yes, that’s right openstack/oslo-specs. Having a branch somewhere as a 
reference would be great for the spec reviewers, so that seems like a good 
way to start.

Doug




request_utils.py
service.py
sslutils.py
systemd.py
threadgroup.py

 oslo.utils:
 - We need to look into how to preserve the git history as we import 
these modules.
 - includes:
fileutils.py
versionutils.py



 Remaining untouched:

 scheduler/ - Gantt probably makes this code obsolete, but it isn’t 
clear whether Gantt has enough traction yet so we will hold onto these in 
the incubator for at least another cycle.

 report/ - There’s interest in creating an oslo.reports library 
containing this code, but we haven’t had time to coordinate with Solly 
about doing that.



 Other work:

 We will continue the work on oslo.concurrency and oslo.log that we 
started during Juno.

 [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
 

Re: [openstack-dev] mixing vif drivers e1000 and virtio

2014-11-13 Thread Daniel P. Berrange
On Thu, Nov 13, 2014 at 07:37:33AM -0800, Srini Sundararajan wrote:
 Hi,
  When I create an instance with more than 1 vif, how can I pick and
  choose/configure which driver (e1000/virtio) I can assign?

The vif driver is customizable at a per-image level using the
hw_vif_model metadata parameter in glance. There is no facility
for changing this per NIC.
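
For illustration, a hedged sketch (not the actual Nova code) of how the
per-image setting conceptually flows into the model used for every NIC:

    # hypothetical helper, for illustration only
    def pick_vif_model(image_meta, default='virtio'):
        # hw_vif_model comes from the glance image properties, e.g.:
        #   glance image-update <image-id> --property hw_vif_model=e1000
        props = image_meta.get('properties', {})
        return props.get('hw_vif_model', default)

    # every vif on the instance gets the same model:
    # model = pick_vif_model(image_meta)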

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Changing our weekly meeting format

2014-11-13 Thread Peeyush Gupta
+1

I agree with Lucas. Sounds like a good idea. I guess if we could spare
more time for discussing new features and requirements rather than
asking for status, that would be helpful for everyone.

On 11/13/2014 05:45 PM, Lucas Alvares Gomes wrote:
 This was discussed in the Contributor Meetup on Friday at the Summit
 but I think it's important to share on the mail list too so we can get
 more opinions/suggestions/comments about it.

 In the Ironic weekly meeting we dedicate a good part of the meeting to
 do some announcements, reporting bug status, CI status, oslo status,
 specific drivers status, etc... It's all good information, but I
 believe that the mail list would be a better place to report it and
 then we can free some time from our meeting to actually discuss
 things.

 Are you guys in favor of it?

 If so I'd like to propose a new format based on the discussions we had
 in Paris. For the people doing the status report on the meeting, they
 would start adding the status to an etherpad and then we would have a
 responsible person to get this information and send it to the mail
 list once a week.

 For the meeting itself we have a wiki page with an agenda[1] which
 everyone can edit to put the topic they want to discuss in the meeting
 there, I think that's fine and works. The only change about it would
 be that we may want to freeze the agenda 2 days before the meeting so
 people can take a look at the topics that will be discussed and
 prepare for it; With that we can move forward quicker with the
 discussions because people will be familiar with the topics already.

 Let me know what you guys think.

 [1] https://wiki.openstack.org/wiki/Meetings/Ironic

 Lucas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Peeyush Gupta
gpeey...@linux.vnet.ibm.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Rodrigo Duarte
Thanks Steve.

On Thu, Nov 13, 2014 at 12:50 PM, Steve Martinelli steve...@ca.ibm.com
wrote:

  Looking at http://specs.openstack.org/openstack/oslo-specs/ and
  http://specs.openstack.org/openstack/keystone-specs/ should give you all the
  info you need. The specs are hosted at:
 https://github.com/openstack/keystone-specs there's a template spec too.

 Thanks,

 _
 Steve Martinelli
 OpenStack Development - Keystone Core Member
 Phone: (905) 413-2851
 E-Mail: steve...@ca.ibm.com



 From:Rodrigo Duarte rodrigodso...@gmail.com
 To:OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date:11/13/2014 10:13 AM
 Subject:Re: [openstack-dev] [oslo] kilo graduation plans
 --



 Hi Doug,

  I'm going to write the spec regarding the policy graduation; it will be
  placed in the keystone-specs repository. I was wondering if someone has
  examples of such specs so we can cover all the necessary points.

  On Thu, Nov 13, 2014 at 10:34 AM, Doug Hellmann d...@doughellmann.com wrote:

  On Nov 13, 2014, at 8:31 AM, Dmitry Tantsur dtant...@redhat.com wrote:

  On 11/13/2014 01:54 PM, Doug Hellmann wrote:
 
   On Nov 13, 2014, at 3:52 AM, Dmitry Tantsur dtant...@redhat.com wrote:
 
  On 11/12/2014 08:06 PM, Doug Hellmann wrote:
  During our “Graduation Schedule” summit session we worked through the
  list of modules remaining in the incubator. Our notes are in the
  etherpad [1], but as part of the “Write it Down” theme for Oslo this cycle
 I am also posting a summary of the outcome here on the mailing list for
 wider distribution. Let me know if you remembered the outcome for any of
 these modules differently than what I have written below.
 
  Doug
 
 
 
  Deleted or deprecated modules:
 
  funcutils.py - This was present only for python 2.6 support, but it
 is no longer used in the applications. We are keeping it in the stable/juno
  branch of the incubator, and removing it from master
  (https://review.openstack.org/130092)
 
   hooks.py - This is not being used anywhere, so we are removing it.
  (https://review.openstack.org/#/c/125781/)
 
   quota.py - A new quota management system is being created
  (https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and
 should replace this, so we will keep it in the incubator for now but
 deprecate it.
 
  crypto/utils.py - We agreed to mark this as deprecated and encourage
 the use of Barbican or cryptography.py (
  https://review.openstack.org/134020)
 
  cache/ - Morgan is going to be working on a new oslo.cache library as
 a front-end for dogpile, so this is also deprecated (
  https://review.openstack.org/134021)
 
  apiclient/ - With the SDK project picking up steam, we felt it was
  safe to deprecate this code as well (https://review.openstack.org/134024).
 
  xmlutils.py - This module was used to provide a security fix for some
 XML modules that have since been updated directly. It was removed. (
  https://review.openstack.org/#/c/125021/)
 
 
 
  Graduating:
 
  oslo.context:
  - Dims is driving this
  -
  https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
  - includes:
 context.py
 
  oslo.service:
  - Sachi is driving this
  -
  https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
  - includes:
 eventlet_backdoor.py
 loopingcall.py
 periodic_task.py
   By the way, right now I'm looking into updating this code to be able to
 run tasks on a thread pool, not only in one thread (quite a problem for
 Ironic). Does it somehow interfere with the graduation? Any deadlines or
 something?
 
  Feature development on code declared ready for graduation is basically
 frozen until the new library is created. You should plan on doing that work
 in the new oslo.service repository, which should be showing up soon. And
  the feature you describe sounds like something for which we would want a
 spec written, so please consider filing one when you have some of the
 details worked out.
  Sure, right now I'm experimenting in Ironic tree to figure out how it
 really works. There's a single oslo-specs repo for the whole oslo, right?

 Yes, that’s right openstack/oslo-specs. Having a branch somewhere as a
 reference would be great for the spec reviewers, so that seems like a good
 way to start.

 Doug

 
 
 
 request_utils.py
 service.py
 sslutils.py
 

[openstack-dev] [neutron] - the setup of a DHCP sub-group

2014-11-13 Thread Don Kehn
If this shows up twice sorry for the repeat:

Armando, Carl:
During the Summit, Armando and I had a very quick conversation concern a
blue print that I submitted,
https://blueprints.launchpad.net/neutron/+spec/dhcp-cpnr-integration and
Armando had mention the possibility of getting together a sub-group tasked
with DHCP Neutron concerns. I have talk with Infoblox folks (see
https://blueprints.launchpad.net/neutron/+spec/neutron-ipam), and everyone
seems to be in agreement that there is synergy especially concerning the
development of a relay and potentially looking into how DHCP is handled. In
addition during the Fridays meetup session on DHCP that I gave there seems
to be some general interest by some of the operators as well.

So what would be the formality in going forth to start a sub-group and
getting this underway?

DeKehn
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Changing our weekly meeting format

2014-11-13 Thread Ghe Rivero
I agree that a lot of time is lost on the announcements and status
reports, but mostly because IRC is a low-bandwidth communication channel
(like waiting several minutes for a 3-line announcement to be written).

I propose that any announcement and project status must be written in
advance to an etherpad, and during the IRC meeting just have a slot for
people to discuss anything that needs further explanation, only mentioning
the topic but not the content.

Ghe Rivero
On Nov 13, 2014 5:08 PM, Peeyush Gupta gpeey...@linux.vnet.ibm.com
wrote:

 +1

 I agree with Lucas. Sounds like a good idea. I guess if we could spare
 more time for discussing new features and requirements rather than
 asking for status, that would be helpful for everyone.

 On 11/13/2014 05:45 PM, Lucas Alvares Gomes wrote:
  This was discussed in the Contributor Meetup on Friday at the Summit
  but I think it's important to share on the mail list too so we can get
  more opinions/suggestions/comments about it.
 
  In the Ironic weekly meeting we dedicate a good part of the meeting to
  do some announcements, reporting bug status, CI status, oslo status,
  specific drivers status, etc... It's all good information, but I
  believe that the mail list would be a better place to report it and
  then we can free some time from our meeting to actually discuss
  things.
 
  Are you guys in favor of it?
 
  If so I'd like to propose a new format based on the discussions we had
  in Paris. For the people doing the status report on the meeting, they
  would start adding the status to an etherpad and then we would have a
  responsible person to get this information and send it to the mail
  list once a week.
 
  For the meeting itself we have a wiki page with an agenda[1] which
  everyone can edit to put the topic they want to discuss in the meeting
  there, I think that's fine and works. The only change about it would
  be that we may want to freeze the agenda 2 days before the meeting so
  people can take a look at the topics that will be discussed and
  prepare for it; With that we can move forward quicker with the
  discussions because people will be familiar with the topics already.
 
  Let me know what you guys think.
 
  [1] https://wiki.openstack.org/wiki/Meetings/Ironic
 
  Lucas
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 --
 Peeyush Gupta
 gpeey...@linux.vnet.ibm.com



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Jiri Tomasek

On 11/13/2014 04:04 PM, Thomas Goirand wrote:

On 11/13/2014 12:13 PM, Richard Jones wrote:

the npm stuff is all tool chain; tools
that I believe should be packaged as such by packagers.

npm is already in Debian:
https://packages.debian.org/sid/npm

However, just like we can't use CPAN, pear install, pip install and
such when building or installing packages, we won't be able to use NPM.
This means every single dependency that isn't in Debian will need to be
packaged.


Horizon is an incredibly complex application. Just so we're all on the
same page, the components installed by bower for angboard are:

angular
   Because writing an application the size of Horizon without it would be
madness :)
angular-route
   Provides structure to the application through URL routing.
angular-cookies
   Provides management of browser cookies in a way that integrates well
with angular.
angular-sanitize
   Allows direct embedding of HTML into angular templates, with sanitization.
json3
   Compatibility for older browsers so JSON works.
es5-shim
   Compatibility for older browsers so Javascript (ECMAScript 5) works.
angular-smart-table
   Table management (population, sorting, filtering, pagination, etc)
angular-local-storage
Browser local storage with cookie fallback, integrated with angular
mechanisms.
angular-bootstrap
Extensions to angular that leverage bootstrap (modal popups, tabbed
displays, ...)
font-awesome
Additional glyphs to use in the user interface (warning symbol, info
symbol, ...)
boot
Bootstrap for CSS styling (this is the dependency that brings in
jquery and requirejs)
underscore
Javascript utility library providing a ton of features Javascript
lacks but Python programmers expect.
ng-websocket
Angular-friendly interface to using websockets
angular-translate
Support for localization in angular using message catalogs generated
by gettext/transifex.
angular-mocks
Mocking support for unit testing angular code
angular-scenario
More support for angular unit tests

Additionally, angboard vendors term.js because it was very poorly
packaged in the bower ecosystem. +1 for xstatic there I guess :)

So those are the components we needed to create the prototype in a few
weeks. Not using them would have added months (or possibly years) to the
development time. Creating an application of the scale of Horizon
without leveraging all that existing work would be like developing
OpenStack while barring all use of Python 3rd-party packages.

I have no problem with adding dependencies. That's how things work, for
sure, I just want to make sure it doesn't become hell, with so many
components inter-depending on 100s of them, which would become
unmanageable. If we define clear boundaries, then fine! The above seems
reasonable anyway.

Though did you list the dependencies of the above?

Also, if the Horizon project starts using something like NPM (which
again, is already available in Debian, so it has my preference), will we
at least be able to control what version gets in, just like with pip?
Because that's a huge concern for me, and this has been very well and
carefully addressed during the Juno cycle. I would very much appreciate
if the same kind of care was taken again during the Kilo cycle, whatever
path we take. How do I use npm by the way? Any pointer?


NPM and Bower work in a similar way to pip: they maintain files similar 
to requirements.txt that list dependencies and their versions.
I think we should bring up a patch that introduces this toolset so we can 
discuss the real amount of dependencies and the process.
It would also be nice to introduce something similar to 
global-requirements.txt in the OpenStack project to make sure we have all 
deps in one place and get some approval process on the versions used.


Here is an example of a random Angular application's package.json (used by 
NPM) and bower.json (used by Bower) files:

http://fpaste.org/150513/89599214/

I'll try to search for a good article that describes how this ecosystem 
works.




Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Martin Geisler
Thomas Goirand z...@debian.org writes:

 Also, if the Horizon project starts using something like NPM (which
 again, is already available in Debian, so it has my preference), will we
 at least be able to control what version gets in, just like with pip?

Yes, npm is similar to pip in that you can specify the versions you want
to install. You can specify loose versions (like 1.2.x, if you're okay
with getting a random patch version) or you can specify the full
version.

In parallel with that, you can add a shrinkwrap file which lists the
versions to install recursively. This locks down the versions of
indirect dependencies too (one of your dependencies might otherwise
depend on a loose version number).

 Because that's a huge concern for me, and this has been very well and
 carefully addressed during the Juno cycle. I would very much appreciate
 if the same kind of care was taken again during the Kilo cycle, whatever
 path we take. How do I use npm by the way? Any pointer?

After installing it, you can try running 'npm install eslint'. That will
create a node_modules folder in your current working directory and
install ESLint inside it. It will also create a cache in ~/.npm.

The ESLint executable is now

  node_modules/.bin/eslint

You'll notice that npm creates

  node_modules/eslint/node_modules/

and installs the ESLint dependencies there. Try removing node_modules,
then install one of the dependencies first followed by ESLint:

  rm -r node_modules
  npm install object-assign eslint

This will put both object-assign and eslint at the top of node_modules
and object-assign is no longer in node_modules/eslint/node_modules/.

This works because require('object-assign') in NodeJS will search up the
directory tree until it finds the module. So the ESLint code can still
use object-assign.

You can run 'npm dedupe' to move modules up the tree and de-duplicate
the install somewhat.

This nested module system also works the other way: if you run 'npm
install bower' after installing ESLint, you end up with two versions of
object-assign -- check 'npm list object-assign' for a dependency graph.

Surprisingly and unlike, say, Python, executing

  require('object-assign')

can give you different modules depending on where the code lives that
executes the statement. This allows different parts of Bower to use
different versions of object-assign. This is seen as a feature in this
world... I fear that it can cause strange problems and bugs when data
travels from one part of the program to another.

So, the philosophy behind this is very different from what we're used to
with system-level package managers (focus on local installs) and even
from what we have in the Python world with pip (multiple versions
installed concurrently).

-- 
Martin Geisler

http://google.com/+MartinGeisler


pgp6u4TvkQL2G.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Policy file not reloaded after changes

2014-11-13 Thread Nikhil Komawar
Hi Ajaya,

We're making some progress on syncing the latest Oslo-incubator code in Glance. 
It's a little more tricky due to the property protection feature, so we've had 
some impedance. Please give your feedback at: 
https://review.openstack.org/#/c/127923/3

Please let me know if you've any concerns.

Thanks,
-Nikhil

From: Ajaya Agrawal [ajku@gmail.com]
Sent: Thursday, November 13, 2014 4:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [glance] Policy file not reloaded after changes

Hi All,

The policy file is not reloaded in glance after a change is made to it. You 
need to restart glance to load the new policy file. I think all other 
components reload the policy file after a change is made to it. Is it a bug or 
intended behavior?

Cheers,
Ajaya
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Changing our weekly meeting format

2014-11-13 Thread Chris K
+1
I think the best use of our time is to discuss new features and functions
that may have an API or functional impact on ironic or projects that depend
on ironic.

Chris Krelle

On Thu, Nov 13, 2014 at 8:22 AM, Ghe Rivero g...@debian.org wrote:

  I agree that a lot of time is lost on the announcements and status
  reports, but mostly because IRC is a low-bandwidth communication channel
  (like waiting several minutes for a 3-line announcement to be written).

  I propose that any announcement and project status must be written in
  advance to an etherpad, and during the IRC meeting just have a slot for
  people to discuss anything that needs further explanation, only mentioning
  the topic but not the content.

 Ghe Rivero
 On Nov 13, 2014 5:08 PM, Peeyush Gupta gpeey...@linux.vnet.ibm.com
 wrote:

 +1

 I agree with Lucas. Sounds like a good idea. I guess if we could spare
 more time for discussing new features and requirements rather than
 asking for status, that would be helpful for everyone.

 On 11/13/2014 05:45 PM, Lucas Alvares Gomes wrote:
  This was discussed in the Contributor Meetup on Friday at the Summit
  but I think it's important to share on the mail list too so we can get
  more opnions/suggestions/comments about it.
 
  In the Ironic weekly meeting we dedicate a good part of the meeting to
  do some announcements, reporting bug status, CI status, oslo status,
  specific drivers status, etc... It's all good information, but I
  believe that the mail list would be a better place to report it and
  then we can free some time from our meeting to actually discuss
  things.
 
  Are you guys in favor of it?
 
  If so I'd like to propose a new format based on the discussions we had
  in Paris. For the people doing the status report on the meeting, they
  would start adding the status to an etherpad and then we would have a
  responsible person to get this information and send it to the mail
  list once a week.
 
  For the meeting itself we have a wiki page with an agenda[1] which
  everyone can edit to put the topic they want to discuss in the meeting
  there, I think that's fine and works. The only change about it would
  be that we may want freeze the agenda 2 days before the meeting so
  people can take a look at the topics that will be discussed and
  prepare for it; With that we can move forward quicker with the
  discussions because people will be familiar with the topics already.
 
  Let me know what you guys think.
 
  [1] https://wiki.openstack.org/wiki/Meetings/Ironic
 
  Lucas
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 --
 Peeyush Gupta
 gpeey...@linux.vnet.ibm.com



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Martin Geisler
Matthias Runge mru...@redhat.com writes:

 On 13/11/14 15:56, Martin Geisler wrote:

 Maybe a silly question, but why insist on this? Why would you insist on
 installing a JavaScript based application using your package manager?
 
 I'm a huge fan of package managers and typically refuse to install
 anything globally if it doesn't come as a package.
 
 However, the whole JavaScript ecosystem seems to be centered around the
 idea of doing local installations. That means that you no longer need
 the package manager to install the software -- you only need a package
 manager to install the base system (NodeJs and npm for JavaScript).
 Yeah, I understand you.

Let me just add that this shift has been a very recent change for me.
With anything but Python and JavaScript, I use my system-level package
manager.

 But: doing local installs, or installing things outside a package
 manager, means that software is not maintained or properly updated
 any more. I'm a huge fan of not bundling stuff and re-using libraries
 from a central location. Copying foreign code to your own codebase is
 quite popular in the JavaScript world. That doesn't mean it's the right
 thing to do.

I agree that you don't want to copy third-party libraries into your
code. In some sense, that's not what the JavaScript world is doing, at
least not before install time.

What I mean is: the ease of use of local package managers has led to an
explosion in the number of tiny packages. So JS projects will no longer
copy dependencies into their own project (into their version control
system). They will instead depend on them using a package manager such
as npm or bower.


It seems to me that it should be possible to translate node modules into
system-level packages in a mechanical fashion, assuming that you're
willing to have a system package for each version of the node module
(you'll need multiple system packages since it's very likely that you'll
end up using multiple different versions at the same time --
alternatively, you could let each system package install every published
or popular node module version).
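
For illustration, a hedged sketch of that mechanical translation (the
naming scheme is an assumption for illustration, not Debian policy):

    import json

    def system_package_name(package_json_path):
        with open(package_json_path) as f:
            meta = json.load(f)
        # one system package per (module, version) pair, so several
        # versions can be installed side by side
        return 'node-%s-%s' % (meta['name'], meta['version'])

    # e.g. node_modules/eslint/package.json -> 'node-eslint-0.9.2'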

The guys behind npm have written a little about how that could work here:

  http://nodejs.org/api/modules.html#modules_addenda_package_manager_tips

Has anyone written such wrapper packages? Not the xstatic system which
seems to incur a porting effort -- but really a wrapper system that can
translate any node module into a system package.

-- 
Martin Geisler

http://google.com/+MartinGeisler


pgpvFpbyk_SbN.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] security and swift multi-tenant fixes on stable branch

2014-11-13 Thread stuart . mclaren

All,

The 0.1.9 version of glance_store, and glance's master branch both
contain some fixes for the Swift multi-tenant store.

This security related change hasn't merged to glance_store yet:
https://review.openstack.org/130200

I'd like to suggest that we try to merge this security fix and release
it as glance_store '0.1.10'. Then make glance's juno/stable branch
rely on glance_store '0.1.10' so that it picks up both the multi-tenant store
and security fixes.

The set of related glance stable branch patches would be:
https://review.openstack.org/134257
https://review.openstack.org/134286
https://review.openstack.org/134289/ (0.1.10 dependency -- also requires a 
global requirements change)

Does this seem ok?

-Stuart

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] security and swift multi-tenant fixes on stable branch

2014-11-13 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 13/11/14 18:17, stuart.mcla...@hp.com wrote:
 All,
 
 The 0.1.9 version of glance_store, and glance's master branch both 
 contain some fixes for the Swift multi-tenant store.
 
 This security related change hasn't merged to glance_store yet: 
 https://review.openstack.org/130200
 
 I'd like to suggest that we try to merge this security fix and
 release it as as glance_store '0.1.10'. Then make glance's
 juno/stable branch rely on glance_store '0.1.10' so that it picks
 up both the multi-tenant store and security fixes.

So you're forcing all stable branch users to upgrade their
glance_store module, with a version that includes featureful patches,
which is not nice.

I think those who maintain the glance_store module in downstream
distributions will cherry-pick the security fix into their packages,
so there is nothing to do in terms of stable branches to handle the
security issue.

Objections?

 
 The set of related glance stable branch patches would be: 
 https://review.openstack.org/134257 
 https://review.openstack.org/134286 
 https://review.openstack.org/134289/ (0.1.10 dependency -- also
 requires a global requirements change)
 
 Does this seem ok?
 
 -Stuart
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUZOouAAoJEC5aWaUY1u57aFMIAM2uhUPOLfBqNneKO89Kv3tU
uE5+JP3Oh7pSCwCgw+fgnxraG9jb5QjpV8rCHewvFpyWQKwsstmNjdMeryRIX1Hn
TZ42mSFUWkjDBJ/cvP2QyLXt2Il93xtqaAcLxo9enHUBR4F2lUCaZK0sm8jLkIFf
TYv9jaf5QwjIWD7VO51HibwoH4f2laJv4r8MbIuyQoUpMlKpeWzmETqm5NrIUCp+
Acvbxo0EaRgAhWRIfHmFtudVjeirjc6vG9yjxFwaObYODb3sridcnr5IOBwP8jrI
1WExsAPTMU6ut2j2pABxIc0PnYAcW1uzc8w4/oPMUp0rZsaQfveCH/mRA0QnqrQ=
=j14y
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.

2014-11-13 Thread Dan Smith
 Yep, it is possible to run the tests inside VMs - the key is that when
 you create the VMs you need to be able to give them NUMA topology. This
 is possible if you're creating your VMs using virt-install, but not if
 you're creating your VMs in a cloud.

I think we should explore this a bit more. AFAIK, we can simulate a NUMA
system with CONFIG_NUMA_EMU=y and providing numa=fake=XXX to the guest
kernel. From a quick check with some RAX folks, we should have enough
control to arrange this. Since we can put a custom kernel (and
parameters) into our GRUB configuration that pygrub should honor, I
would think we could get a fake-NUMA guest running in at least one
public cloud. Since HP's cloud runs KVM, I would assume we have control
over our kernel and boot there as well.

Is there something I'm missing about why that's not doable?
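
For what it's worth, a quick hedged check one could run inside a guest
booted with numa=fake=4 to confirm the emulated topology is visible:

    import os

    node_dir = '/sys/devices/system/node'
    nodes = sorted(d for d in os.listdir(node_dir) if d.startswith('node'))
    print('NUMA nodes seen by the guest: %s' % nodes)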

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Policy file not reloaded after changes

2014-11-13 Thread Nikhil Komawar
Forgot to mention the main part - this patch should enable the auto loading of 
policies.
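
For reference, a minimal sketch of the mtime-based reload pattern the
incubator policy code follows (names simplified, not the actual oslo API):

    import os

    class PolicyCache(object):
        def __init__(self, path):
            self.path = path
            self._mtime = None
            self._rules = None

        def rules(self):
            mtime = os.path.getmtime(self.path)
            if self._rules is None or mtime != self._mtime:
                with open(self.path) as f:
                    self._rules = f.read()  # parsed as JSON in the real code
                self._mtime = mtime
            return self._rules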

Thanks,
-Nikhil

From: Nikhil Komawar [nikhil.koma...@rackspace.com]
Sent: Thursday, November 13, 2014 11:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Policy file not reloaded after changes

Hi Ajaya,

We're making some progress on syncing the latest Oslo-incubator code in Glance. 
It's a little more tricky due to the property protection feature, so we've had 
some impedance. Please give your feedback at: 
https://review.openstack.org/#/c/127923/3

Please let me know if you've any concerns.

Thanks,
-Nikhil

From: Ajaya Agrawal [ajku@gmail.com]
Sent: Thursday, November 13, 2014 4:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [glance] Policy file not reloaded after changes

Hi All,

The policy file is not reloaded in glance after a change is made to it. You 
need to restart glance to load the new policy file. I think all other 
components reload the policy file after a change is made to it. Is it a bug or 
intended behavior?

Cheers,
Ajaya
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.

2014-11-13 Thread Daniel P. Berrange
On Thu, Nov 13, 2014 at 09:28:01AM -0800, Dan Smith wrote:
  Yep, it is possible to run the tests inside VMs - the key is that when
  you create the VMs you need to be able to give them NUMA topology. This
  is possible if you're creating your VMs using virt-install, but not if
  you're creating your VMs in a cloud.
 
 I think we should explore this a bit more. AFAIK, we can simulate a NUMA
 system with CONFIG_NUMA_EMU=y and providing numa=fake=XXX to the guest
 kernel. From a quick check with some RAX folks, we should have enough
 control to arrange this. Since we can put a custom kernel (and
 parameters) into our GRUB configuration that pygrub should honor, I
 would think we could get a fake-NUMA guest running in at least one
 public cloud. Since HP's cloud runs KVM, I would assume we have control
 over our kernel and boot there as well.
 
 Is there something I'm missing about why that's not doable?

That sounds like something worth exploring at least, I didn't know about
that kernel build option until now :-) It sounds like it ought to be enough
to let us test the NUMA topology handling, CPU pinning and probably huge
pages too. The main gap I'd see is NUMA aware PCI device assignment
since the PCI-to-NUMA-node mapping data comes from the BIOS and it does
not look like this is fakeable as is.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.

2014-11-13 Thread Dan Smith
 That sounds like something worth exploring at least, I didn't know
 about that kernel build option until now :-) It sounds like it ought
 to be enough to let us test the NUMA topology handling, CPU pinning
 and probably huge pages too.

Okay. I've been vaguely referring to this as a potential test vector,
but only just now looked up the details. That's my bad :)

 The main gap I'd see is NUMA-aware PCI device assignment since the
 PCI-to-NUMA node mapping data comes from the BIOS and it does not
 look like this is fakeable as is.

Yeah, although I'd expect that the data is parsed and returned by a
library or utility that may be a hook for fakeification. However, it may
very well be more trouble than it's worth.

I still feel like we should be able to test generic PCI in a similar way
(passing something like a USB controller through to the guest, etc).
However, I'm willing to believe that the intersection of PCI and NUMA is
a higher order complication :)
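
For the generic case, the compute-node side of such a test is mostly
configuration - a sketch using the existing whitelist/alias options, where
the vendor/product IDs and the alias name are made-up values:

    [DEFAULT]
    pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "1000"}
    pci_alias = {"vendor_id": "8086", "product_id": "1000", "name": "test-pci"}

plus a flavor that requests the device, e.g.:

    nova flavor-key pci.test set "pci_passthrough:alias"="test-pci:1"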

--Dan





Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Thomas Goirand
On 11/13/2014 10:56 PM, Martin Geisler wrote:
 Maybe a silly question, but why insist on this? Why would you insist on
 installing a JavaScript based application using your package manager?
 
 I'm a huge fan of package managers and typically refuse to install
 anything globally if it doesn't come as a package.
 
 However, the whole JavaScript ecosystem seems to be centered around the
 idea of doing local installations. That means that you no longer need
 the package manager to install the software -- you only need a package
 manager to install the base system (NodeJs and npm for JavaScript).

Yeah... Just like for Java, PHP, Perl, Python, you name it...

In what way will JavaScript be any different from all of these languages?

 Notice that Python has been moving rapidly in the same direction for
 years: you only need Python and pip to bootstrap yourself. After getting
 used to virtualenv, I've mostly stopped installing Python modules
 globally and that is how the JavaScript world expects you to work too.

Fine for development. Not for deployments. Not for distributions. Otherwise
you just get a huge mess of every library installed 10 times, in 10
different versions, and then a security issue needs to be fixed in all of
them...

 So maybe the Horizon package should be an installer package like the
 ones that download fonts or Adobe?

This is a horrible design which will *never* make it into distributions.
Please think again. What is it that makes Horizon so special? Answer:
nothing. It's just a web app, so it doesn't need any special care. It
should be packaged like everything else, with .deb/.rpm and so on.

 That package would get the right version of node, and would then run the
 npm and bower commands to download the rest, plus (importantly and much
 appreciated) put the files in a sensible location and give them good
 permissions.

Fine for your development environment. But that's it.

Also, does your $language-specific package manager have enough checks to
prevent a man-in-the-middle attack? Is it secure enough? Can a replay
attack be performed against it? Does it support any kind of cryptographic
checks like yum or apt do? I'm almost sure that's not the case. pip is
really horrible in this regard. I haven't checked, but I'm almost sure
what we're proposing (e.g. npm and such) has the same weaknesses. And here
I'm only scratching the surface of the security concerns. There are other
concerns, like how good the dependency solver is (remember: it took
*years* for apt to become as good as it is right now, and it still has
some defects).

On 11/14/2014 12:59 AM, Martin Geisler wrote:
 It seems to me that it should be possible translate the node module
 into system level packages in a mechanical fashion, assuming that
 you're willing to have a system package for each version of the node
 module

Sure! That's how I do most of my Python modules these days. I don't
create them from scratch; I use my own debpypi script, which generates a
packaging template. But it can't be fully automated. I could almost do it
in a fully automated manner for PEAR packages for PHP (see debpear in the
Debian archive), but it's harder with Python and pip/PyPI.

Stuff like debian/copyright files has to be processed by hand, and each
package is different (How do you run the unit tests? nose, testr, pytest?
Does it support Python 3? Is there Sphinx documentation? How good are the
upstream short and long descriptions?). I guess it's going to be the same
for JavaScript packages: some parts can be automated, but manual work will
always be needed.
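
To give an idea of how far the mechanical part can go - and where it
stops, since everything listed above still needs a human - here's a rough
sketch that turns npm metadata into a debian/control skeleton (the node-*
naming scheme is just an assumption):

    import json

    def control_skeleton(package_json):
        # Extract the machine-readable bits of the npm metadata.
        with open(package_json) as f:
            meta = json.load(f)
        name = 'node-%s' % meta['name']
        deps = ['node-%s' % d for d in sorted(meta.get('dependencies', {}))]
        return '\n'.join([
            'Source: %s' % name,
            'Section: web',
            'Priority: optional',
            '',
            'Package: %s' % name,
            'Architecture: all',
            'Depends: %s' % ', '.join(['nodejs'] + deps),
            # The human work starts here: description, copyright, tests...
            'Description: TODO (%s)' % meta.get('description', ''),
        ])

    print(control_skeleton('package.json'))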

On 11/14/2014 12:59 AM, Martin Geisler wrote:
 The guys behind npm has written a little about how that could work
 here:

 http://nodejs.org/api/modules.html#modules_addenda_package_manager_tips

It's fun to read, but very naive. The first shocking thing is that
architecture-independent files get installed into /usr/lib, when they
belong in /usr/share. If that is what the npm upstream produces, that's
scary: they don't even know how the FHS (Filesystem Hierarchy Standard)
works.

 Has anyone written such wrapper packages? Not the xstatic system which
 seems to incur a porting effort -- but really a wrapper system that
 can translate any node module into a system package.

The xstatic packages are quite painless, from my viewpoint. What's
painful is to link an existing xstatic package with an already existing
libjs-* package that may have a completely different directory
structure. You can then end up with a forest of symlinks, but there's no
way around that; no wrapper can solve that problem either. And more
generally, a wrapper that writes a $distribution source package out of a
$language-specific package will never solve everything; it will only
reduce the amount of packaging work.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [oslo] alpha version numbering discussion from summit

2014-11-13 Thread Jeremy Stanley
On 2014-11-13 07:50:51 -0500 (-0500), Doug Hellmann wrote:
[...]
 I do remember a comment at some point, and I’m not sure it was in
 this session, about using the per-project client libraries as
 “internal only” libraries when the new SDK matures enough that we
 can declare that the official external client library. That might
 solve the problem, since we could pin the version of the client
 libraries used, but it seems like a solution for the future rather
 than for this cycle.
[...]

Many of us have suggested this as a possible way out of the tangle
in the past, though Monty was the one who raised it during that
session. Basically the problem we have boils down to wanting to use
these libraries as a stable internal communication mechanism within
components of an OpenStack environment but also be able to support
tenant users and application developers interacting with a broad
variety of OpenStack releases through them, and that is a largely
irreconcilable difference. Having a user-facing SDK which talks to
OpenStack APIs with broad version support, and a separate set of
per-project communication libraries which can follow the integrated
release cadence and maintain stable backport branches as needed,
makes the problem much more tractable in the long term.
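
(The pinning half of that is nothing exotic - just ordinary requirements
capping in each consuming project; the version numbers below are made up:)

    # requirements.txt fragment - illustrative caps only
    python-novaclient>=2.18.0,<2.21.0
    python-neutronclient>=2.3.6,<2.4.0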
-- 
Jeremy Stanley



Re: [openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.

2014-11-13 Thread Daniel P. Berrange
On Thu, Nov 13, 2014 at 09:36:18AM -0800, Dan Smith wrote:
  That sounds like something worth exploring at least, I didn't know
  about that kernel build option until now :-) It sounds like it ought
  to be enough to let us test the NUMA topology handling, CPU pinning
  and probably huge pages too.
 
 Okay. I've been vaguely referring to this as a potential test vector,
 but only just now looked up the details. That's my bad :)
 
  The main gap I'd see is NUMA-aware PCI device assignment since the
  PCI-to-NUMA node mapping data comes from the BIOS and it does not
  look like this is fakeable as is.
 
 Yeah, although I'd expect that the data is parsed and returned by a
 library or utility that may be a hook for fakeification. However, it may
 very well be more trouble than it's worth.
 
 I still feel like we should be able to test generic PCI in a similar way
 (passing something like a USB controller through to the guest, etc).
 However, I'm willing to believe that the intersection of PCI and NUMA is
 a higher order complication :)

Oh, I forgot to mention: with PCI device assignment (besides having a
bunch of PCI devices available[1]), the key requirement is an IOMMU.
AFAIK, neither Xen nor KVM provides any IOMMU emulation, so I think we're
out of luck for even basic PCI assignment testing inside VMs.

Regards,
Daniel

[1] Devices which provide function-level reset or PM reset capabilities,
as a bus-level reset is too painful to deal with, requiring co-assignment
of all devices on the same bus to the same guest.
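
A quick way to spot such devices on a host is to look for the per-device
'reset' attribute, which sysfs only exposes when a function-level or PM
reset method is available - an illustrative snippet:

    import glob
    import os

    # The kernel creates 'reset' only for devices it can reset without
    # touching the rest of the bus.
    for dev in sorted(glob.glob('/sys/bus/pci/devices/*')):
        if os.path.exists(os.path.join(dev, 'reset')):
            print(os.path.basename(dev))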
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [stable][neutron] Fwd: Re: [Openstack-stable-maint] Neutron backports for security group performance

2014-11-13 Thread Kevin Benton
Ok. Thanks again for doing that.

On Thu, Nov 13, 2014 at 5:06 AM, James Page james.p...@ubuntu.com wrote:


 On 12/11/14 17:43, Kevin Benton wrote:
  This is awesome. I seem to have misplaced my 540-node cluster. ;-)
 
  Is it possible for you to also patch in
  https://review.openstack.org/#/c/132372/ ? In my rally testing of
  port retrieval, this one probably made the most significant
  improvement.

 Unfortunately not - our lab time on the infrastructure ended last week
 and I had to (reluctantly) give everything back to HP.

 That said, looking through all of the patches I applied to neutron, I
 had that one in place as well - apologies for missing that
 information in my first email!

 Regards

 James

 --
 James Page
 Ubuntu and Debian Developer
 james.p...@ubuntu.com
 jamesp...@debian.org





-- 
Kevin Benton


Re: [openstack-dev] opnfv proposal on DR capability enhancement on OpenStack Nova

2014-11-13 Thread A, Keshava
Zhipeng Huang,

When multiple datacenters are interconnected over the WAN/Internet and a
remote datacenter goes down, is the expectation that the 'native VM
status' changes accordingly? Is that the requirement? And does this
requirement come from an NFV service VM (like a routing VM)?
If so, isn't it up to the NFV routing (BGP/IGP) / MPLS signaling
(LDP/RSVP) protocols to handle that? Does OpenStack need to handle it?

Please correct me if my understanding of this problem is incorrect.

Thanks & regards,
keshava

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: Wednesday, November 12, 2014 6:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][DR][NFV] opnfv proposal on DR capability 
enhancement on OpenStack Nova

- Original Message -
 From: Zhipeng Huang zhipengh...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 Hi Team,
 
 I know we didn't propose this at the design summit, and it is kind of
 rude to jam a topic into the schedule this way. We were really
 stretched thin during the summit and didn't make it to the Nova
 discussion. Full apologies here :)
 
 What we want to discuss here is that we proposed a project in opnfv (
 https://wiki.opnfv.org/collaborative_development_projects/rescuer),
 which in fact is about enhancing inter-DC DR capabilities in Nova. We
 hope we can achieve this in the K cycle, since there are no HUGE changes
 required in Nova. We just propose to add certain DR statuses to Nova so
 operators can see what DR state OpenStack is currently in; that way,
 when a disaster occurs, they won't cut off the wrong stuff.
 
 Sorry again if we kind of barged in here; we sincerely hope the Nova
 community will take a look at our proposal. Feel free to contact me
 if anyone has any questions :)
 
 --
 Zhipeng Huang

Hi Zhipeng,

I would just like to echo the comments from the opnfv-tech-discuss list (which 
I notice is still private?) in saying that there is very little detail on the 
wiki page describing what you actually intend to do. Given this, it's very hard 
to provide any meaningful feedback. A lot more detail is required, particularly 
if you intend to propose a specification based on this idea.

Thanks,

Steve

[1] https://wiki.opnfv.org/collaborative_development_projects/rescuer




  1   2   >