Re: [openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-09 Thread Joshua Harlow
+2 from me,

More mongodb adoption (as stated) when it's questionable legally doesn't seem 
like a good long term strategy (I know it will/does impact yahoo adopting or 
using ceilometer...). Is this another one of those tactical changes that we 
keep on making that ends up being yet another piece of technical debt that 
someone will have to clean up :-/

If we thought a little more about this strategically maybe we would end up in a 
better place short term *and* long term??

Sent from my really tiny device...

On Aug 9, 2014, at 8:59 AM, Devananda van der Veen 
devananda@gmail.com wrote:


On Aug 9, 2014 4:22 AM, Eoghan Glynn 
egl...@redhat.com wrote:


 Hi Folks,

 Dina Belova has recently landed some infra patches[1,2] to create
 an experimental mongodb-based Tempest job. This effectively just
 overrides the ceilometer storage backend config so that mongodb
 is used instead of sql-alchemy. The new job has been running
 happily for a few days so I'd like now to consider the path
 forwards with this.

 One of our Juno goals under the TC gap analysis was to more fully
 gate against mongodb, given that this is the storage backend
 recommended/supported by many distros. The sql-alchemy backend,
 on the other hand, is more suited for proofs of concept or small
 deployments. However up to now we've been hampered from reflecting
 that reality in the gate, due to the gate being stuck on Precise
 for a long time, as befits an LTS, with the version of mongodb needed
 by ceilometer (i.e. 2.4) effectively unavailable on that Ubuntu
 release (in fact it was limited to 2.0.4).

 So the orientation towards gating on sql-alchemy was mostly
 driven by legacy issues in the gate's usage of Precise, as
 opposed to this being considered the most logical basket in
 which to put all our testing eggs.

 However, we're now finally in the brave new world of Trusty :)
 So I would like to make the long-delayed change over soon.

 This would involve transposing the roles of sql-alchemy and
 mongodb in the gate - the mongodb variant becomes the blessed
 job run by default, whereas the sql-alchemy based job is
 relegated to the second tier.

 So my questions are:

  (a) would the QA side of the house be agreeable to this switch?

 and:

  (b) how long would the mongodb job need to be stable in this
  experimental mode before we pull the trigger on switching?

 If the answer to (a) is yes, we can get infra patches proposed
 early next week to make the swap.

 Cheers,
 Eoghan

 [1] 
 https://review.openstack.org/#/q/project:openstack-infra/config+branch:master+topic:ceilometer-mongodb-job,n,z
 [2] 
 https://review.openstack.org/#/q/project:openstack-infra/devstack-gate+branch:master+topic:ceilometer-backend,n,z


My interpretation of the gap analysis [1] is merely that you have coverage, not 
that you switch to it and relegate the SQLAlchemy tests to second chair. I 
believe that's a dangerous departure from current standards. A dependency on 
mongodb, due to its AGPL license, and the lack of sufficient support for a 
non-AGPL storage back end, has consistently been raised as a blocking issue for 
Marconi. [2]

-Deva

[1] 
https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Ceilometer_Gap_Coverage

[2] http://lists.openstack.org/pipermail/openstack-dev/2014-March/030510.html 
is a very articulate example of this objection

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-09 Thread Joshua Harlow
Agreed, testing options is good; and should likely be disjoint from the legal 
questions around mongodb...

Although if there is really only one viable & scalable option and that option 
has legal usage questions surrounding it, then it makes me wonder how much we 
are kidding ourselves on there being anything optional about this... Not 
something I can answer, but someone likely should...

I guess it really depends on what the desired outcome of testing with mongodb 
is. If the outcome is to satisfy a TC requirement for improved testing via 
*any* backend then this would seem applicable. If instead it's around testing a 
backend that isn't legally encumbered (and is also realistically viable to use) 
then we are in a different area altogether...

Just my 2cents.

Sent from my really tiny device...

 On Aug 9, 2014, at 10:53 AM, Eoghan Glynn egl...@redhat.com wrote:
 
 
 
 +2 from me,
 
 More mongodb adoption (as stated) when it's questionable legally doesn't seem
 like a good long term strategy (I know it will/does impact yahoo adopting or
 using ceilometer...). Is this another one of those tactical changes that we
 keep on making that ends up being yet another piece of technical debt that
 someone will have to cleanup :-/
 
 If we thought a little more about this strategically maybe we would end up in
 a better place short term *and* long term??
 
 Hi Joshua,
 
 Since we currently do support mongodb as an *optional* storage driver,
 and some distros do recommend its usage, then surely we should test this
 driver fully in the upstream gate to support those users who take that
 option?
 
 (i.e. those users who accept MongoDB Inc's assurances[1] in regard to
 licensing of the client-side driver)
 
 Cheers,
 Eoghan
 
 [1] http://www.mongodb.org/about/licensing/
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-09 Thread Joshua Harlow
+1 to what John said,

I would also like to wait and see on this... I'd rather be honest with 
ourselves, and to contributors new and old, and admit core reviewers are air 
traffic controllers (directing the project vs reviewing changes, two very 
different things IMHO) and reflect that in what we say, what we do and how 
the community operates... (the long term effects of such a change may be hard 
to predict IMHO).

But I'll hold more of my opinion until I see more details on this...

Sent from my really tiny device...

On Aug 9, 2014, at 11:41 AM, John Griffith 
john.griff...@solidfire.com wrote:




On Fri, Aug 8, 2014 at 1:31 AM, Nikola Đipanov 
ndipa...@redhat.com wrote:
On 08/08/2014 12:12 AM, Stefano Maffulli wrote:
 On 08/07/2014 01:41 PM, Eoghan Glynn wrote:
 My point was simply that we don't have direct control over the
 contributors' activities

 This is not correct and I've seen it repeated too often to let it go
 uncorrected: we (the OpenStack project as a whole) have a lot of control
 over contributors to OpenStack. There is a Technical Committee and a
 Board of Directors, corporate members and sponsors... all of these can
 do a lot to make things happen. For example, the Platinum members of the
 Foundation are required at the moment to have at least 'two full time
 equivalents' and I don't see why the board couldn't change that
 requirement, make it more specific.


Even if this were true (I don't know if it is or not), I have a hard
time imagining that any such attempt would be effective enough to solve
the current problems.

I think that OSS software wins in places it does mostly because it *does
not* get managed like a corporate software project. Trying to fit any
classical PM methodology on top of a (very active mind you) OSS project
will likely fail IMHO, due not only to the lack of control over contributors'
time, but also the widely different incentives of participating parties.
+1

I see this as a step in the wrong direction for sure.  I also don't know about 
the slots approach that's being discussed.  Seems a better way to address 
some of this is content criteria, sort of like what's been discussed on and 
off in this thread.  So an LTS model of sorts, like saying these are the types 
of changes for this release.  Maybe that's buried in some of the proposals 
here, not sure.

We're starting to do some content-based rules in Cinder with respect to 
milestones; for example we tried to say hey, if you want to add a new 3rd 
party driver for Cinder, you need to do it by the second milestone so the 
remainder of the release can be focused on the core.  This didn't work out as 
well as we'd hoped but I think we can get better (and there's been some 
suggestion of moving it further up, like the first milestone).  We've also tried 
things like cleanup submissions, so that things like additions of hacking rules 
etc have specific windows.  One of the reasons for this is just simply to keep 
from unintentionally forcing everything behind these changes in the queue to 
fail and need a rebase.

I don't know that what we've tried so far is really the right approach, but it 
was a decent learning experience and I think we can take something away from 
it and try a new approach with what we've learned (this will be a topic for 
Paris for sure).

Anyway, the slots and runway approach is a bit off-putting for me; but I don't 
want to form any opinions until I read more about what Michael has to say based 
on the Nova meeting.  Also want to make sure I fully understand the concepts 
(which I might not currently).

The only thing I would say is that moving further and further to models that 
limit slots or whatever might have an unintended consequence of pushing away 
new contributors that maybe aren't being driven by their employer or tactical 
motivations.  Kinda be a bummer to put up walls like that I think.  It's pretty 
hard to submit changes already; we've got a lot of process and spend a lot of 
time on consensus building, and I'd hate to make those things take up even more 
of our time.  Process is good IMO but it should be implemented to save time, 
not require more of it.


N.

 OpenStack is not an amateurish project done by volunteers in their free
 time.  We have lots of leverage we can apply to get things done.

 /stef

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-10 Thread Joshua Harlow

One question from me:

Will there be later fixes to remove oslo.config dependency/usage from 
oslo.concurrency?


I still don't understand how oslo.concurrency can be used as a library 
with the configuration being set in a static manner via oslo.config 
(let's use the example of `lock_path` @ 
https://github.com/YorikSar/oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41). 
For example:


Library X inside application Z uses lockutils (via the nice 
oslo.concurrency library) and sets the configuration `lock_path` to its 
desired settings; then library Y (also a user of oslo.concurrency) 
inside the same application Z sets the configuration for `lock_path` to its 
desired settings. Now both have some unknown set of configuration they 
have set, and when library X (or Y) continues to use lockutils they will 
be using some mix of configuration (likely some mishmash of settings 
set by X and Y); perhaps a `lock_path` that neither actually wants 
to be able to write to...


This doesn't seem like it will end well; and will just cause headaches 
during debug sessions, testing, integration and more...


The same question can be asked about the `set_defaults()` function: how 
is library Y or X expected to use this (are they?)??
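To make the clash concrete, here is a minimal hypothetical sketch (the library
names and lock paths are made up, and it assumes the lockutils API in the repo
linked above, i.e. a module-level `set_defaults()` and `lock()`):

    # Hypothetical illustration only: "libx"/"liby" stand in for library X/Y.
    from oslo.concurrency import lockutils

    # Library X initializes itself and sets the lock directory it wants...
    lockutils.set_defaults('/var/lock/libx')

    # ...then library Y, imported into the same application Z, does the same.
    lockutils.set_defaults('/var/lock/liby')

    # When X later takes an external (file-based) lock it silently uses
    # whichever default won the race above, not the path it asked for.
    with lockutils.lock('some-resource', external=True):
        pass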


I hope one of the later changes is to remove/fix this??

Thoughts?

-Josh

On 08/07/2014 01:58 PM, Yuriy Taraday wrote:
 Hello, oslo cores.
 
 I've finished polishing up oslo.concurrency repo at [0] - please 
take a
 look at it. I used my new version of graduate.sh [1] to generate it, 
so

 history looks a bit different from what you might be used to.
 
 I've made as little changes as possible, so there're still some 
steps left

 that should be done after new repo is created:
 - fix PEP8 errors H405 and E126;
 - use strutils from oslo.utils;
 - remove eventlet dependency (along with random sleeps), but proper 
testing

 with eventlet should remain;
 - fix for bug [2] should be applied from [3] (although it needs some
 improvements);
 - oh, there's really no limit for this...
 
 I'll finalize and publish relevant change request to 
openstack-infra/config

 soon.
 
 Looking forward to any feedback!
 
 [0] https://github.com/YorikSar/oslo.concurrency

 [1] https://review.openstack.org/109779
 [2] https://bugs.launchpad.net/oslo/+bug/1327946
 [3] https://review.openstack.org/108954
 
 
 
 ___

 OpenStack-dev mailing list
 OpenStack-dev at lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-11 Thread Joshua Harlow



On Mon, Aug 11, 2014 at 11:39 AM, Ben Nemec openst...@nemebean.com 
wrote:

On 08/11/2014 01:02 PM, Yuriy Taraday wrote:
 On Mon, Aug 11, 2014 at 5:44 AM, Joshua Harlow 
harlo...@outlook.com wrote:
 

 One question from me:

 Will there be later fixes to remove oslo.config dependency/usage 
from

 oslo.concurrency?

 I still don't understand how oslo.concurrency can be used as a 
library
 with the configuration being set in a static manner via 
oslo.config (let's

 use the example of `lock_path` @ https://github.com/YorikSar/
 oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41). 
For

 example:

 Library X inside application Z uses lockutils (via the nice
 oslo.concurrency library) and sets the configuration `lock_path` 
to its
 desired settings, then library Y (also a user of oslo.concurrency) 
inside
 same application Z sets the configuration for `lock_path` to its 
desired
 settings. Now both have some unknown set of configuration they 
have set and
 when library X (or Y) continues to use lockutils they will be 
using some
 mix of configuration (likely some mish mash of settings set by X 
and Y);
 perhaps to a `lock_path` that neither actually wants to be able to 
write

 to...

 This doesn't seem like it will end well; and will just cause 
headaches

 during debug sessions, testing, integration and more...

 The same question can be asked about the `set_defaults()` 
function, how is

 library Y or X expected to use this (are they?)??

 I hope one of the later changes is to remove/fix this??

 Thoughts?

 -Josh
 
 
 I'd be happy to remove lock_path config variable altogether. It's 
basically

 never used. There are two basic branches in code wrt lock_path:
 - when you provide lock_path argument to lock (and derivative 
functions),

 file-based lock is used and CONF.lock_path is ignored;
 - when you don't provide lock_path in arguments, semaphore-based 
lock is
 used and CONF.lock_path is just a prefix for its name (before 
hashing).
 
 I wonder if users even set lock_path in their configs as it has 
almost no

 effect. So I'm all for removing it, but...
 From what I understand, every major change in lockutils drags along 
a lot
 of headache for everybody (and risk of bugs that would be 
discovered very
 late). So is such change really worth it? And if so, it will 
require very

 thorough research of lockutils usage patterns.


Two things lock_path has to stay for: Windows and consumers who 
require
file-based locking semantics.  Neither of those use cases are trivial 
to

remove, so IMHO it would not be appropriate to do it as part of the
graduation.  If we were going to alter the API that much it needed to
happen in incubator.


As far as lock_path mismatches, that shouldn't be a problem unless a
consumer is doing something very unwise.  Oslo libs get their
configuration from the application using them, so unless the 
application

passes two separate conf objects to library X and Y they're both going
to get consistent settings.  If someone _is_ doing that, then I think
it's their responsibility to make sure the options in both config 
files

are compatible with each other.


Why would it be assumed they would pass the same settings (how is that 
even possible to know ahead of time? especially if library X pulls in a 
new library ZZ that requires a new configuration setting)? For example, 
one directory for `lock_path` may be reasonable for tooz and another 
may be reasonable for taskflow (it completely depends on their intended 
usage); it would likely not be desirable to have them go to the same 
location. Forcing application Z to know the inner workings of library X 
and library Y (or future unknown library ZZ) is just pushing the 
problem onto the library user, which seems inappropriate and breaks the 
whole point of having abstractions & APIs in the first place... This 
IMHO is part of the problem with having statically set *action at a 
distance* type of configuration: the libraries themselves are not in 
control of their own configuration, which breaks abstractions & APIs 
left and right. If some application Y can go under a library and pull 
the rug out from under it, how is that a reasonable thing to expect the 
library to be able to predict & handle?


This kind of requirement has always made me wonder how other libraries 
(like tooz, or taskflow) actually interact with any of the oslo.* 
libraries in any predictable way (since those libraries could be 
interacting with oslo.* libraries that have configuration that can be 
switched out from underneath them, making those libraries have *secret* 
APIs that appear and disappear depending on which oslo.* library 
was newly added as a dependency and what newly added configuration that 
library sucked in/exposed...).


-Josh




-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-11 Thread Joshua Harlow



On Mon, Aug 11, 2014 at 11:02 AM, Yuriy Taraday yorik@gmail.com 
wrote:
On Mon, Aug 11, 2014 at 5:44 AM, Joshua Harlow harlo...@outlook.com 
wrote:

One question from me:

Will there be later fixes to remove oslo.config dependency/usage 
from oslo.concurrency?


I still don't understand how oslo.concurrency can be used as a 
library with the configuration being set in a static manner via 
oslo.config (let's use the example of `lock_path` @ 
https://github.com/YorikSar/oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41). 
For example:


Library X inside application Z uses lockutils (via the nice 
oslo.concurrency library) and sets the configuration `lock_path` to 
its desired settings, then library Y (also a user of 
oslo.concurrency) inside same application Z sets the configuration 
for `lock_path` to its desired settings. Now both have some unknown 
set of configuration they have set and when library X (or Y) 
continues to use lockutils they will be using some mix of 
configuration (likely some mish mash of settings set by X and Y); 
perhaps to a `lock_path` that neither actually wants to be able to 
write to...


This doesn't seem like it will end well; and will just cause 
headaches during debug sessions, testing, integration and more...


The same question can be asked about the `set_defaults()` function, 
how is library Y or X expected to use this (are they?)??


I hope one of the later changes is to remove/fix this??

Thoughts?

-Josh


I'd be happy to remove lock_path config variable altogether. It's 
basically never used. There are two basic branches in code wrt 
lock_path:
- when you provide lock_path argument to lock (and derivative 
functions), file-based lock is used and CONF.lock_path is ignored; 
- when you don't provide lock_path in arguments, semaphore-based lock 
is used and CONF.lock_path is just a prefix for its name (before 
hashing).


Agreed, it just seems confusing (and bad) to have parts of the API come 
in from `CONF.lock_path` (or other `CONF.*` options) and other parts of 
the API come in via function parameters. This just makes understanding 
the API and knowing how to interact with it that much harder (after all, 
what is the right way of using XYZ feature when it can be changed via an 
out-of-band *hidden* API call via configuration adjustments under the 
covers?)... This makes it really hard to use oslo.concurrency in 
taskflow (and likely other libraries that would like to consume 
oslo.concurrency; seeing that it will be on pypi, I would expect this 
number to grow...) since taskflow would really appreciate and desire to 
have stable APIs that aren't changed by some configuration that can be 
set by some party via some out-of-band method (for example some other 
library or program calling `set_defaults()`). This kind of way of using 
an API (half of the settings from config, half of the settings from the 
function's API...) may be ok for applications but it's not IMHO ok for 
libraries (or clients) that want to use oslo.concurrency. 

Hopefully it can be fixed so that it works both ways? Oslo.db I 
believe made this work better by allowing for configuration to come in 
via a configuration object that can be provided by the user of oslo.db; 
this makes the API that oslo.db exposes strongly tied to the attributes 
& documentation of that object. I still don't think that's perfect 
either, since it's likely that the documentation for what that object's 
attributes should be is not as up to date or easy to change as 
updating function/method documentation...
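Roughly, the pattern being referred to looks like the sketch below (the class
and option names are illustrative only, not the actual oslo.db interface): the
consumer constructs and owns the config object and hands it to the library
explicitly, instead of the library reaching into a process-global CONF.

    from oslo.config import cfg

    class LockManager(object):
        # Hypothetical library object that is handed its configuration
        # explicitly rather than reading a global CONF under the covers.
        def __init__(self, conf):
            self.conf = conf
            self.lock_path = conf.lock_path

    # Each consumer builds, registers and passes its own config object, so
    # nothing another library does in the same process can alter it.
    conf_x = cfg.ConfigOpts()
    conf_x.register_opts([cfg.StrOpt('lock_path', default='/var/lock/libx')])
    conf_x([])  # parse with no CLI args so the defaults become readable
    manager_x = LockManager(conf_x)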


I wonder if users even set lock_path in their configs as it has 
almost no effect. So I'm all for removing it, but...
From what I understand, every major change in lockutils drags along a 
lot of headache for everybody (and risk of bugs that would be 
discovered very late). So is such change really worth it? And if so, 
it will require very thorough research of lockutils usage patterns.


Sounds like tech debt to me; it always requires work to make something 
better. Are we the type of community that will avoid changing things 
(for the better) because we fear introducing new bugs that may be found 
along the way? I for one hope that we are not that type of community 
(that type of community will die due to its own *fake* fears...).




--

Kind regards, Yuriy.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-11 Thread Joshua Harlow



On Mon, Aug 11, 2014 at 12:47 PM, Doug Hellmann d...@doughellmann.com 
wrote:


On Aug 11, 2014, at 3:26 PM, Joshua Harlow harlo...@outlook.com 
wrote:


 
 
 On Mon, Aug 11, 2014 at 11:02 AM, Yuriy Taraday 
yorik@gmail.com wrote:
 On Mon, Aug 11, 2014 at 5:44 AM, Joshua Harlow 
harlo...@outlook.com wrote:

 One question from me:
 Will there be later fixes to remove oslo.config dependency/usage 
from oslo.concurrency?
 I still don't understand how oslo.concurrency can be used as a 
library with the configuration being set in a static manner via 
oslo.config (let's use the example of `lock_path` @ 
https://github.com/YorikSar/oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41). 
For example:
 Library X inside application Z uses lockutils (via the nice 
oslo.concurrency library) and sets the configuration `lock_path` 
to its desired settings, then library Y (also a user of 
oslo.concurrency) inside same application Z sets the configuration 
for `lock_path` to its desired settings. Now both have some 
unknown set of configuration they have set and when library X (or 
Y) continues to use lockutils they will be using some mix of 
configuration (likely some mish mash of settings set by X and Y); 
perhaps to a `lock_path` that neither actually wants to be able to 
write to...
 This doesn't seem like it will end well; and will just cause 
headaches during debug sessions, testing, integration and more...
 The same question can be asked about the `set_defaults()` 
function, how is library Y or X expected to use this (are they?)??

 I hope one of the later changes is to remove/fix this??
 Thoughts?
 -Josh
 I'd be happy to remove lock_path config variable altogether. It's 
basically never used. There are two basic branches in code wrt 
lock_path:
 - when you provide lock_path argument to lock (and derivative 
functions), file-based lock is used and CONF.lock_path is ignored; 
- when you don't provide lock_path in arguments, semaphore-based 
lock is used and CONF.lock_path is just a prefix for its name 
(before hashing).
 
 Agreed, it just seems confusing (and bad) to have parts of the API 
come in from `CONF.lock_path` (or other `CONF.*` options) and other 
parts of the API come in via function parameters. This just makes 
understanding the API and knowing how to interact with it that much 
harder (after all what is the right way of using XYZ feature when it 
can be changed via a out-of-band *hidden* API call via configuration 
adjustments under the covers?)... This makes it really hard to use 
oslo.concurrency in taskflow (and likely other libraries that would 
like to consume oslo.concurrency, seeing that it will be on pypi, I 
would 


The libraries placed in the oslo namespace are very much NOT meant to 
be used by anything other than OpenStack. They are intended to be the 
glue layer between OpenStack and some other implementation libraries.


oslo.concurrency wraps pylockfile and the plan is to move the actual 
lock code into pylockfile without the oslo.config dependency. That 
will make pylockfile reusable by taskflow and tooz, and the locking 
stuff in oslo.concurrency a smaller API with consistent configuration 
for use by applications.


Sounds great, I've been wondering why 
https://github.com/stackforge/tooz/commit/f3e11e40f9871f8328 
happened/merged (maybe it should be changed?). I see that 
https://review.openstack.org/#/c/102202/ merged so that's good news and 
hopefully makes the underlying lockutils functionality more useful to 
users outside of openstack in the near-term future (which includes 
taskflow, being that it is used in & outside openstack by various 
entities).
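As a rough sketch of why that plan is appealing from a library's point of view
(the path is illustrative, and it assumes pylockfile's context-manager style
LockFile API): no oslo.config is involved and the caller passes the path
explicitly, so the library stays in control of its own locking.

    from lockfile import LockFile

    # The consuming library (e.g. taskflow or tooz) picks its own path; there
    # is no hidden global configuration that another library could change.
    lock = LockFile('/var/lock/myapp/resource')

    # LockFile objects act as context managers, so acquire/release is scoped
    # to the block below.
    with lock:
        pass  # critical section owned entirely by the caller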





 expect this number to grow...) since taskflow would really 
appreciate and desire to have stable APIs that don't change by some 
configuration that can be set by some party via some out-of-band 
method (for example some other library or program calling 
`set_defaults()`). This kind of way of using an API (half of the 
settings from config, half of the settings from the functions 
API...) may be ok for applications but it's not IMHO ok for 
libraries (or clients) that want to use oslo.concurrency. 
 Hopefully it can be fixed some that it works via both ways? Oslo.db 
I believe made this work better by allowing for configuration to 
come in via a configuration object that can be provided by the user 
of oslo.db, this makes the API that oslo.db exposes strongly tied to 
the attributes  documentation of that object. I still don't think 
thats perfect either since its likely that the documentation for 
what that objects attributes should be is not as update to date or 
easy to change as updating function/method documentation…


That technique of having the configuration object passed to the oslo 
library will be repeated in the other new libraries we are creating 
if they already depend on configuration settings of some sort. The 
configuration options are not part of the public API of the library, 
so

Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-12 Thread Joshua Harlow
Sure, that's great as to why a feature like it might exist (and I think 
such a feature is great to have, for cases when a bigger *distributed* 
system isn't desired).


Just the one there taken from lockutils has some issues that IMHO 
tooz would be better off avoiding for the time being (until 
pylockfile is updated to have a more reliable implementation). Some of 
the current issues I can think of off the top of my head:


1. https://bugs.launchpad.net/oslo/+bug/1327946 (this means the usage 
in tooz will be similarly not resistant to program termination, which 
in a library like tooz seems more severe, since tooz has no way of 
knowing how it, as a library, will be used). With this bug, future 
acquisition after *forceful* program termination will result in the 
acquire() method not working for the same IPC lock name (ever).


2. The double API: tooz configures lockutils one way, someone else can 
go under tooz and use `set_defaults()` (or other ways that I'm not 
aware of that can be used to configure oslo.config) and expect that the 
settings they have set will actually do something, when in fact 
they will not (or will they???). This seems like a bad point of 
confusion for an API to have, where some of its API is from 
methods/functions... and some is from oslo.config...


3. Bringing in oslo.config as a dependency (tooz isn't configured via 
oslo.config, but now it has code that looks like it is configured 
via it). What happens if some parts of tooz are now set by oslo.config 
and some parts aren't? This seems bad from a user experience (where the 
user is the library user) point of view and a testability point of view 
(and probably other points of view that I can't think of), when there 
are new options that can be set via a *secret* API that now affect how 
tooz works...


4. What happens with Windows here (since tooz is a library it's hard to 
predict how it will be used, unless Windows is not supported)? Windows 
will resort back to using a filelock, which will default to using 
whatever oslo.config file path was set for tooz, which again goes back 
to #2 and #3 and having two APIs, one public and one *secret*, that can 
affect how tooz operates... In this case it seems 
`default=os.environ.get("TOOZ_LOCK_PATH")` will be used, and when that's 
not set tooz blows up with a weird configuration error @ 
https://github.com/stackforge/tooz/blob/master/tooz/openstack/common/lockutils.py#L222 
(this all seems bad for users of tooz)...


What do you think about waiting until pylockfile is ready and avoiding 
1-4 from above? At least if taskflow uses tooz, I surely don't want 
taskflow to have to deal with #1-4 (which it will inherit from tooz if 
taskflow starts to use tooz, by the very nature of taskflow using tooz 
as a library).


Thoughts?

-Josh

On Tue, Aug 12, 2014 at 6:12 AM, Julien Danjou jul...@danjou.info 
wrote:

On Mon, Aug 11 2014, Joshua Harlow wrote:


 Sounds great, I've been wondering why
 https://github.com/stackforge/tooz/commit/f3e11e40f9871f8328 
happened/merged

 (maybe it should be changed?).


For the simple reason that there's people wanting to use a lock
distributed against several processes without being distributed 
against
several nodes. In that case, having a ZK or memcached backend is 
useless

as IPC is good enough.

--
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?

2014-08-12 Thread Joshua Harlow

Do you know if ceilometer is using six.wraps?

If so, that helper adds in the `__wrapped__` attribute to decorated 
methods (which can be used to find the original decorated function).


If just plain functools is used (and python3.x isn't used) then it 
will be pretty hard afaik to find the original decorated function (if 
that's the desire).


six.wraps() is new in six 1.7.x so it might not be used in ceilometer 
yet (although maybe it should start to be used?).
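A small sketch of what that looks like (the decorator below is made up for
illustration and is not Pecan's actual secure implementation):

    import six

    def secure(func):
        # six.wraps (six >= 1.7) behaves like functools.wraps and also stores
        # the original function on the wrapper as `__wrapped__`.
        @six.wraps(func)
        def wrapper(*args, **kwargs):
            # The decorator already holds `func`, so its name is right here...
            print('access check for %s' % func.__name__)
            return func(*args, **kwargs)
        return wrapper

    @secure
    def get_all():
        return []

    # ...and code that only sees the wrapper can still recover the original.
    assert get_all.__wrapped__.__name__ == 'get_all'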


-Josh

On Tue, Aug 12, 2014 at 9:08 AM, Pendergrass, Eric 
eric.pendergr...@hp.com wrote:
Hi, I’m trying to use the built-in secure decorator in Pecan for 
access control, and I’d like to get the name of the method that is 
wrapped from within the decorator.
 
For instance, if I’m wrapping MetersController.get_all with an 
@secure decorator, is there a way for the decorator code to know it 
was called by MetersController.get_all?
 
I don’t see any global objects that provide this information.  I 
can get the endpoint, v2/meters, with pecan.request.path, but 
that’s not as elegant.
 
Is there a way to derive the caller or otherwise pass this 
information to the decorator?
 
Thanks

Eric Pendergrass



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-13 Thread Joshua Harlow

A big +1 to what Daniel said,

If f2f events are becoming so important & the only way to get things 
done, IMHO we should really start to do some reflection on how our 
community operates and start thinking about what we are doing wrong. 
Expecting every company to send developers (core or non-core) to all 
these events is unrealistic (and IMHO is the wrong path our community 
should go down). If only cores go (they can probably convince their 
employers they should/need to), these f2f events become something akin 
to secret f2f meetings where decisions are made behind some set of 
closed doors (maybe cores should then be renamed the 'secret society of 
core reviewers', maybe even giving them an illuminati-like logo, haha); 
that doesn't seem very open to me (and as Daniel said, further 
stratifies the people who work on openstack...).


Going the whole virtual route does seem better (although it still feels 
like something is wrong with how we are operating if that's even 
needed).


-Josh

On Wed, Aug 13, 2014 at 2:57 AM, Daniel P. Berrange 
berra...@redhat.com wrote:

On Wed, Aug 13, 2014 at 08:57:40AM +1000, Michael Still wrote:

 Hi.
 
 One of the action items from the nova midcycle was that I was asked 
to
 make nova's expectations of core reviews more clear. This email is 
an

 attempt at that.
 
 Nova expects a minimum level of sustained code reviews from cores. 
In

 the past this has been generally held to be in the order of two code
 reviews a day, which is a pretty low bar compared to the review
 workload of many cores. I feel that existing cores understand this
 requirement well, and I am mostly stating it here for completeness.
 
 Additionally, there is increasing levels of concern that cores need 
to
 be on the same page about the criteria we hold code to, as well as 
the
 overall direction of nova. While the weekly meetings help here, it 
was

 agreed that summit attendance is really important to cores. Its the
 way we decide where we're going for the next cycle, as well as a
 chance to make sure that people are all pulling in the same 
direction

 and trust each other.
 
 There is also a strong preference for midcycle meetup attendance,
 although I understand that can sometimes be hard to arrange. My 
stance

 is that I'd like core's to try to attend, but understand that
 sometimes people will miss one. In response to the increasing
 importance of midcycles over time, I commit to trying to get the 
dates

 for these events announced further in advance.


Personally I'm going to find it really hard to justify long distance
travel 4 times a year for OpenStack for personal / family reasons,
let alone company cost. I couldn't attend Icehouse mid-cycle because
I just had too much travel in a short time to be able to do another
week long trip away from family. I couldn't attend Juno mid-cycle
because it clashed with a personal holiday. There are other opensource
related conferences that I also have to attend (LinuxCon, FOSDEM,
KVM Forum, etc), etc so doubling the expected number of openstack
conferences from 2 to 4 is really very undesirable from my POV.
I might be able to attend the occasional mid-cycle meetup if the
location was convenient, but in general I don't see myself being
able to attend them regularly.

I tend to view the fact that we're emphasising the need of in-person
meetups to be somewhat of an indication of failure of our community
operation. The majority of open source projects work very effectively
with far less face-to-face time. OpenStack is fortunate that companies
are currently willing to spend 6/7-figure sums flying 1000's of
developers around the world many times a year, but I don't see that
lasting forever so I'm concerned about baking the idea of f2f midcycle
meetups into our way of life even more strongly.


 Given that we consider these physical events so important, I'd like
 people to let me know if they have travel funding issues. I can then
 approach the Foundation about funding travel if that is required.


Travel funding is certainly an issue, but I'm not sure that Foundation
funding would be a solution, because the impact probably isn't 
directly

on the core devs. Speaking with my Red Hat on, if the midcycle meetup
is important enough, the core devs will likely get the funding to 
attend.
The fallout of this though is that every attendee at a mid-cycle 
summit
means fewer attendees at the next design summit. So the impact of 
having

more core devs at mid-cycle is that we'll get fewer non-core devs at
the design summit. This sucks big time for the non-core devs who want
to engage with our community.

Also having each team do a f2f mid-cycle meetup at a different 
location
makes it even harder for people who have a genuine desire / need to 
take
part in multiple teams. Going to multiple mid-cycle meetups is even 
more
difficult to justify so they're having to make difficult decisions 
about

which to go to :-(

I'm also not a fan of mid-cycle meetups because I feel 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Joshua Harlow



On Wed, Aug 13, 2014 at 5:37 AM, Mark McLoughlin mar...@redhat.com 
wrote:

On Fri, 2014-08-08 at 15:36 -0700, Devananda van der Veen wrote:
 On Tue, Aug 5, 2014 at 10:02 AM, Monty Taylor 
mord...@inaugust.com wrote:



  Yes.
 
  Additionally, and I think we've been getting better at this in 
the 2 cycles
  that we've had an all-elected TC, I think we need to learn how to 
say no on
  technical merit - and we need to learn how to say thank you for 
your
  effort, but this isn't working out. Breaking up with someone is 
hard to do,

  but sometimes it's best for everyone involved.
 
 
 I agree.
 
 The challenge is scaling the technical assessment of projects. We're

 all busy, and digging deeply enough into a new project to make an
 accurate assessment of it is time consuming. Some times, there are
 impartial subject-matter experts who can spot problems very quickly,
 but how do we actually gauge fitness?


Yes, it's important the TC does this and it's obvious we need to get a
lot better at it.

The Marconi architecture threads are an example of us trying harder 
(and

kudos to you for taking the time), but it's a little disappointing how
it has turned out. On the one hand there's what seems like a this
doesn't make any sense gut feeling and on the other hand an earnest,
but hardly bite-sized justification for how the API was chosen and how
it lead to the architecture. Frustrating that appears to not be
resulting in either improved shared understanding, or improved
architecture. Yet everyone is trying really hard.


 Letting the industry field-test a project and feed their experience
 back into the community is a slow process, but that is the best
 measure of a project's success. I seem to recall this being an
 implicit expectation a few years ago, but haven't seen it discussed 
in

 a while.


I think I recall us discussing a "must have feedback that it's
successfully deployed" requirement in the last cycle, but we 
recognized

that deployers often wait until a project is integrated.


 I'm not suggesting we make a policy of it, but if, after a
 few cycles, a project is still not meeting the needs of users, I 
think
 that's a very good reason to free up the hold on that role within 
the

 stack so other projects can try and fill it (assuming that is even a
 role we would want filled).


I'm certainly not against discussing de-integration proposals. But I
could imagine a case for de-integrating every single one of our
integrated projects. None of our software is perfect. How do we make
sure we approach this sanely, rather than run the risk of someone
starting a witch hunt because of a particular pet peeve?

I could imagine a really useful dashboard showing the current state of
projects along a bunch of different lines - summary of latest
deployments data from the user survey, links to known scalability
issues, limitations that operators should take into account, some
capturing of trends so we know whether things are improving. All of 
this

data would be useful to the TC, but also hugely useful to operators.


+1

This seems to be the only way to determine when a project isn't working 
out for the users in the community.


With such unbiased data being available, it would make a great case for 
why de-integration could happen. It would then allow the project to go 
back and fix itself, or allow for a replacement to be created that 
doesn't have the same set of limitations/problems. This would seem like 
a way that lets the project that works best for users eventually be 
selected (survival of the fittest); although we also have to be 
careful: software isn't static and instead can be reshaped and molded, 
and we should give the project that has issues a chance to reshape 
itself (giving the benefit of the doubt vs not).





Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Joshua Harlow
Comment inline.

On Aug 22, 2014, at 10:13 AM, Michael Chapman wop...@gmail.com wrote:

 
 
 
 On Fri, Aug 22, 2014 at 9:51 PM, Sean Dague s...@dague.net wrote:
 On 08/22/2014 01:30 AM, Michael Chapman wrote:
 
 
 
  On Fri, Aug 22, 2014 at 2:57 AM, Jay Pipes jaypi...@gmail.com wrote:
 
  On 08/19/2014 11:28 PM, Robert Collins wrote:
 
  On 20 August 2014 02:37, Jay Pipes jaypi...@gmail.com wrote:
  ...
 
  I'd like to see more unification of implementations in
  TripleO - but I
  still believe our basic principle of using OpenStack
  technologies that
  already exist in preference to third party ones is still
  sound, and
  offers substantial dogfood and virtuous circle benefits.
 
 
 
  No doubt Triple-O serves a valuable dogfood and virtuous
  cycle purpose.
  However, I would move that the Deployment Program should
  welcome the many
  projects currently in the stackforge/ code namespace that do
  deployment of
  OpenStack using traditional configuration management tools
  like Chef,
  Puppet, and Ansible. It cannot be argued that these
  configuration management
  systems are the de-facto way that OpenStack is deployed
  outside of HP, and
  they belong in the Deployment Program, IMO.
 
 
  I think you mean it 'can be argued'... ;).
 
 
  No, I definitely mean cannot be argued :) HP is the only company I
  know of that is deploying OpenStack using Triple-O. The vast
  majority of deployers I know of are deploying OpenStack using
  configuration management platforms and various systems or glue code
  for baremetal provisioning.
 
  Note that I am not saying that Triple-O is bad in any way! I'm only
  saying that it does not represent the way that the majority of
  real-world deployments are done.
 
 
   And I'd be happy if folk in
 
  those communities want to join in the deployment program and
  have code
  repositories in openstack/. To date, none have asked.
 
 
  My point in this thread has been and continues to be that by having
  the TC bless a certain project as The OpenStack Way of X, that we
   implicitly are saying to other valid alternatives "Sorry, no need to
   apply here."
 
 
  As a TC member, I would welcome someone from the Chef
  community proposing
  the Chef cookbooks for inclusion in the Deployment program,
  to live under
  the openstack/ code namespace. Same for the Puppet modules.
 
 
  While you may personally welcome the Chef community to propose
  joining the deployment Program and living under the openstack/ code
  namespace, I'm just saying that the impression our governance model
  and policies create is one of exclusion, not inclusion. Hope that
  clarifies better what I've been getting at.
 
 
 
  (As one of the core reviewers for the Puppet modules)
 
  Without a standardised package build process it's quite difficult to
  test trunk Puppet modules vs trunk official projects. This means we cut
  release branches some time after the projects themselves to give people
  a chance to test. Until this changes and the modules can be released
  with the same cadence as the integrated release I believe they should
  remain on Stackforge.
 
  In addition and perhaps as a consequence, there isn't any public
  integration testing at this time for the modules, although I know some
  parties have developed and maintain their own.
 
  The Chef modules may be in a different state, but it's hard for me to
  recommend the Puppet modules become part of an official program at this
  stage.
 
 Is the focus of the Puppet modules only stable releases with packages?
 
 
 We try to target puppet module master at upstream OpenStack master, but 
 without CI/CD we fall behind. The missing piece is building packages and 
 creating a local repo before doing the puppet run, which I'm working on 
 slowly as I want a single system for both deb and rpm that doesn't make my 
 eyes bleed. fpm and pleaserun are the two key tools here.
  
 Puppet + git based deploys would be honestly a really handy thing
 (especially as lots of people end up having custom fixes for their
 site). The lack of CM tools for git based deploys is I think one of the
  reasons we've seen people using DevStack as a generic installer.
 
 
 It's possible but it's also straight up a poor thing to do in my opinion. If 
 you're going to install nova from source, maybe you also want libvirt from 
 source to test a new feature, then you want some of libvirt's deps and so on. 
 Puppet isn't equipped to deal with this effectively. It runs 

Re: [openstack-dev] What does NASA not using OpenStack mean to OS's future

2014-08-25 Thread Joshua Harlow
So to see if we can get something useful from this thread.

What was your internal analysis, and can it be published? Even negative analysis is 
useful to make openstack better...

It'd be nice to have some details on what you found and what you didn't find, so 
that we can all improve...

After all that is what it's all about.

-Josh

On Aug 25, 2014, at 11:13 AM, Aryeh Friedman aryeh.fried...@gmail.com wrote:

 If I was doing that then I would be promoting the platform by name (which I 
 am not). I was just pointing out in our own internal ananylis OS came in 
 dead last among all the open source IaaS/PaaS's (the current version of mine 
 is not #1 btw)
 
 
 On Mon, Aug 25, 2014 at 2:03 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:
 On 25 August 2014 10:34, Aryeh Friedman aryeh.fried...@gmail.com wrote:
 Do you call Marten Mickos having no clue... he is the one that leveled the 
 second worst criticism after mine... or is Eucalyptus not one of the founding 
 members of OpenStack (after all many of the glance commands still use its 
 name)
 
 You appear to be trolling, and throwing around amazingly easy-to-disprove 
 'factoids', in an inappropriate forum, in order to drum up support for your 
 own competing open source cloud platform.  Please stop.
 
 Your time would be much better spent improving your platform rather than 
 coming up with frankly bizarre criticism of the competitors.
 -- 
 Ian.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Gertty 1.0.0: A console interface to Gerrit

2014-09-04 Thread Joshua Harlow
Congrats to making it to 1.0!

May there be many more :)

Sent from my really tiny device...

 On Sep 4, 2014, at 4:18 PM, James E. Blair cor...@inaugust.com wrote:
 
 Announcing Gertty 1.0.0
 
 Gertty is a console-based interface to the Gerrit Code Review system.
 
 If that doesn't sound interesting to you, then just skip right on to
 the next message.  This mailing list gets a lot of traffic, and it's
 going to take you a while to read it all in that web browser you're
 using.
 
 Gertty was written by and for coremudgeons.  But it's not just because
 we think mutt is the apex of user interface design.
 
 We write code in a terminal.  We read logs in a terminal.  We debug code
 in a terminal.  We commit in a terminal.  You know what's next.
 
 This is why I wrote Gertty:
 
 * Workflow -- the interface is designed to support a workflow similar
   to reading network news or mail.  In particular, it is designed to
   deal with a large number of review requests across a large number
   of projects.
 
 * Offline Use -- Gertty syncs information about changes in subscribed
   projects to a local database and local git repos.  All review
   operations are performed against that database and then synced back
   to Gerrit.
 
 * Speed -- user actions modify locally cached content and need not
   wait for server interaction.
 
 * Convenience -- because Gertty downloads all changes to local git
   repos, a single command instructs it to checkout a change into that
   repo for detailed examination or testing of larger changes.
 
 * Information Architecture -- in a console environment, Gertty can
   display information to reviewers in a more compact and relevant
   way.
 
 * Colors -- I think ANSI escape sequences are a neat idea.
 
 Here are some reasons you may want to use Gertty:
 
 * Single page diff -- when you look at a diff, all of the files are
   displayed on the same screen making it easier to see the full
   context of a change as you scroll effortlessly around the files
   that comprise it.  This may be the most requested feature in
    Gerrit.  It was harder to make Gertty show only one file than
   it was to do all of them so that's what we have.  You still get the
   choice of side-by-side or unified diff, color coding, inline
   comments, and intra-line diffs.
 
 * The checkout and cherry-pick commands -- Gertty works directly on
   your local git repos, even the same ones you hack on.  It doesn't
   change them unless you ask it to, so normally you don't notice it's
   there, but with a simple command you can tell Gertty to check out a
   change into your working tree, or cherry-pick a bunch of changes
   onto a branch to build up a new patch series.  It's like git
   review -d if you've ever used it, but instead of typing git
   review -d what-was-that-change-number-again? you type c.
 
 * Your home address is seat 7A (or especially if it's 1A) -- Gertty
   works seamlessly online or offline so you can review changes while
   you're flying to your 15th mid-cycle meetup.  Gertty syncs all of
   the open changes for subscribed projects to a local database and
   performs all of its operations there.  When it's able to connect to
   Gerrit, it uploads your reviews instantly.  When it's unable, they
   are queued for the next time you are online.  It handles the
   transition between online and offline effortlessly.  If your
   Internet connection is slow or unreliable, Gertty helps with that
   too.
 
 * You review a lot of changes -- Gertty is fast.  All of the typical
   review operations are performed against the local database or the
   local git repos.  Gertty can review changes as fast as you can.  It
   has commands to instantly navigate from change to change, and
   shortcuts to leave votes on a change with a single keypress.
 
 * You are particular about the changes you review -- Gertty lets you
   subscribe to projects, and then displays each of those projects
   along with the number of open changes and changes you have not
   reviewed.  Open up those projects like you would a newsgroup or
   email folder, and scroll down the list of changes.  If you don't
   have anything to say about a change but want to see it again the
   next time it's updated, just hit a key to mark it reviewed.  If you
   don't want to see a change ever again, hit a different key to kill
   it.  Gertty helps you review all of the changes you want to review,
   and none of the changes you don't.
 
 * Radical customization -- The queries that Gertty uses by default
   can be customized.  It uses the same search syntax as Gerrit and
   support most of its operators.  It has user-defined dashboards that
   can be bound to any key.  In fact, any command can be bound to any
   key.  The color palette can be customized.  You spend a lot of time
   reviewing changes, you should be comfortable.
 
 * Your terminal is an actual terminal -- Gertty works just fine in 80
   columns, but it is also happy to spread out into 

Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-06 Thread Joshua Harlow
There was a review once that tried to move the evzookeeper usage over to kazoo, 
perhaps that can get reprioritized??

https://review.openstack.org/#/c/28951/

Sent from my really tiny device...

 On Sep 5, 2014, at 6:36 AM, Sean Dague s...@dague.net wrote:
 
 While reviewing this zookeeper service group fix in Nova -
 https://review.openstack.org/#/c/102639/ it was exposed that the
 zookeeper tests aren't running in infra.
 
 The crux of the issue is that zookeeper python modules are C extensions.
 So you have to either install from packages (which we don't do in unit
 tests) or install from pip, which means forcing zookeeper dev packages
 locally. Realistically this is the same issue we end up with for mysql
 and pg, but given their wider usage we just forced that pain on developers.
 
 But it seems like a bad stand off between testing upstream and testing
 normal path locally.
 
 Big picture it would be nice to not require a ton of dev libraries
 locally for optional components, but still test them upstream. So that
 in the base case I'm not running zookeeper locally, but if it fails
 upstream because I broke something in zookeeper, it's easy enough to
 spin up that dev env that has it.
 
 Which feels like we need some decoupling on our requirements vs. tox
 targets to get there. CC to Monty and Clark as our super awesome tox
 hackers to help figure out if there is a path forward here that makes sense.
 
-Sean
 
 -- 
 Sean Dague
 http://dague.net
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
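As a rough illustration of the decoupling Sean asks for above (the env name,
dependency and test path are hypothetical, not an actual proposal), an opt-in
tox environment could carry the zookeeper bindings itself so the base envs and
requirements files stay free of them:

    [testenv:zookeeper]
    deps =
        {[testenv]deps}
        evzookeeper
    commands =
        python setup.py testr --testr-args='{posargs:nova.tests.servicegroup}'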


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-08 Thread Joshua Harlow
Amen to this!

I've always felt bad that before yahoo tries to include a new feature in its 
openstack cloud/s we have to figure out how much the feature is a land-mine, 
how much of it works, how much of it doesn't, and so on. That type of 
investigation imho shouldn't really be needed, and the fact that it is makes me 
want more and more a stability cycle or two (or three).

More and more recently I've been thinking that we have spent way too much on 
drivers and features and not enough on our own 'infrastructure'. 

While of course there is a balance, it just seems like the balance currently 
isn't right (IMHO).

Maybe we should start asking ourselves why it is so much easier to add a driver 
vs. do a cross-project functionality like gantt (or centralized quota 
management or other...) that removes some of those land-mines. When it becomes 
easier to land gantt vs a new driver then I think we might be in a better 
place. After all, it is our infrastructure that makes the project a long-term 
success, not adding new drivers.

Just my 2 cents,

Josh

On Sep 7, 2014, at 5:14 PM, Monty Taylor mord...@inaugust.com wrote:

 On 09/03/2014 08:37 AM, Joe Gordon wrote:
 As you all know, there has recently been several very active discussions
 around how to improve assorted aspects of our development process. One idea
 that was brought up is to come up with a list of cycle goals/project
 priorities for Kilo [0].
 
 To that end, I would like to propose an exercise as discussed in the TC
 meeting yesterday [1]:
 Have anyone interested (especially TC members) come up with a list of what
 they think the project wide Kilo cycle goals should be and post them on
 this thread by end of day Wednesday, September 10th. After which time we
 can begin discussing the results.
 The goal of this exercise is to help us see if our individual world views
 align with the greater community, and to get the ball rolling on a larger
 discussion of where as a project we should be focusing more time.
 
 If I were king ...
 
 1. Caring about end user experience at all
 
 It's pretty clear, if you want to do things with OpenStack that are not 
  running your own cloud, that we collectively have not valued the class of 
  user who simply wants to use the cloud. Examples of this are that 
 the other day I had to read a section of the admin guide to find information 
 about how to boot a nova instance with a cinder volume attached all in one 
 go. Spoiler alert, it doesn't work. Another spoiler alert - even though the 
 python client has an option for requesting that a volume that is to be 
 attached on boot be formatted in a particular way, this does not work for 
 cinder volumes, which means it does not work for an end user - EVEN THOUGH 
 this is a very basic thing to want.
 
 Our client libraries are clearly not written with end users in mind, and this 
 has been the case for quite some time. However, openstacksdk is not yet to 
 the point of being usable for end users - although good work is going on 
 there to get it to be a basis for an end user python library.
 
  We give deployers so much flexibility that, in order to write even a SIMPLE 
  program that uses OpenStack, an end user generally has to check four or five 
  pieces of information, each reflecting a different way that a deployer 
  may have decided to do things.
 
 Example:
 
 - As a user, I want a compute instance that has an IP address that can do 
 things.
 
 WELL, now you're screwed, because there is no standard way to do that. You 
 may first want to try booting your instance and then checking to see if nova 
 returns a network labeled public. You may get no networks. This indicates 
 that your provider decided to deploy neutron, but as part of your account 
 creation did not create default networks. You now need to go create a router, 
 network and port in neutron. Now you can try again. Or, you may get networks 
 back, but neither of them are labeled public - instead, you may get a 
 public and a private address back in the network labeled private. Or, you may 
 only get a private network back. This indicates that you may be expected to 
 create a thing called a floating-ip. First, you need to verify that your 
 provider has installed the floating-ip's extension. If they have, then you 
 can create a floating-ip and attach it to your host. NOW - once you have 
 those things done, you need to connect to your host and verify that its 
  outbound networking has not been blocked by a thing called security groups, which 
  you also may not have been expecting to exist, but I'll stop there, because the 
  above is long enough.
 
 Every. Single. One. Of. Those. Cases. is real and has occurred across only 
 the two public openstack clouds that infra uses. That means that our 
 provisioning code takes every single one of them in to account, and anyone 
 who writes code that wants to get a machine to use must take them all in to 
 account or else their code is buggy.
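  
  To make that concrete, the kind of branching a client ends up carrying looks 
  roughly like this (pseudocode only; every helper name below is a hypothetical 
  stand-in for several novaclient/neutronclient calls, not a real API):
  
  def get_reachable_address(cloud):
      server = cloud.boot_server()
      addrs = server.addresses
      if addrs.get('public'):
          # Case 1: a network labeled public came back; use it.
          return addrs['public'][0]
      if not addrs:
          # Case 2: neutron with no default networks; go create a router,
          # network and port yourself, then try again.
          raise Exception('no default networks; build them in neutron first')
      private = addrs.get('private', [])
      if len(private) > 1:
          # Case 3: a public and a private address both under 'private'.
          return private[-1]
      if cloud.has_floating_ip_extension():
          # Case 4: a floating-ip is expected to be created and attached.
          fip = cloud.create_floating_ip()
          cloud.attach_floating_ip(server, fip)
          return fip
      # Case 5: only a private address; and even when one of the above works,
      # security groups may still block the traffic you actually wanted.
      return private[0]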
 
 That's 

[openstack-dev] Standard virtualenv for docs building

2014-09-08 Thread Joshua Harlow
Hi all,

I just wanted to get some feedback on a change that I think will make the docs 
building process better understood,

Currently there is a script @ 
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/slave_scripts/run-docs.sh
 (this is the github mirror fyi) that builds your docs when requested using an 
implicit virtualenv 'venv' with a single command `tox -e$venv -- python 
setup.py build_sphinx`. Over the weekend I was working on having the taskflow 
'docs' venv build a changelog and include it in the docs when I learned that 
the 'docs' virtualenv isn't actually what is called when docs are being built 
(and thus can't do customized things to include the changelog).

I wanted to get some feedback on standardizing around the 'docs' virtualenv for 
docs building (it seems common to use this in most projects anyway), and on 
deprecating or removing the implicitly used 'venv' + above command to build the 
docs; the infra setup would instead just call into the 'docs' virtualenv and 
have it build the docs as appropriate for the project.

This would mean that all projects would at least need the following in their 
tox.ini (if they don't already have it).

[docs]
commands = python setup.py build_sphinx
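
(Spelled out as a full tox environment, that presumably ends up looking something 
like the following; just a sketch, and the deps will vary per project.)

[testenv:docs]
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python setup.py build_sphinx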

Does this seem reasonable to all?

There is also a change in the governance repo for this as well @ 
https://review.openstack.org/#/c/119875/

Thoughts, comments, other?

- Josh




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TaskFlow][Oslo] TaskFlow 0.4.0 Released!

2014-09-08 Thread Joshua Harlow
Howdy all,

Just wanted to send a mini-announcement about the new taskflow 0.4.0 release on 
behalf of the oslo and taskflow teams,

The details about what is new and what is changed and all that goodness can be 
found @ https://etherpad.openstack.org/p/TaskFlow-0.4

Updated developer docs can be found at 
http://docs.openstack.org/developer/taskflow as usual (state validation and 
enforcement were a big thing in this release IMHO).

Bugs of course can be reported @ 
http://bugs.launchpad.net/taskflow/0.4/+filebug (hopefully no bugs!).

The requirements repo is being updated (no expected issues there) @ 
https://review.openstack.org/119985

As always find the team in #openstack-state-management and come with questions, 
comments or other thoughts :-)

Onward and upward to the next release (hopefully a 0.4.1 release with some 
patch fixes, small adjustments soon...)!

- Josh

P.S.

Also wanted to give a shout-out to a recent blog about taskflow that one of the 
cores has been creating:

http://www.dankrause.net/2014/08/23/intro-to-taskflow.html (hopefully to turn 
into a series)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-16 Thread Joshua Harlow

And if u can also prove NP = P u get 1 million dollars[1, 2]

Let me know when u got the proof,

Thanks much,

[1] http://www.claymath.org/millenium-problems/p-vs-np-problem
[2] 
http://www.claymath.org/millennium-problems/millennium-prize-problems


-Josh

On Mon, Sep 15, 2014 at 5:07 PM, Jeremy Stanley fu...@yuggoth.org 
wrote:

On 2014-09-15 17:59:10 -0400 (-0400), Jay Pipes wrote:
[...]

 Sometimes it's pretty hard to determine whether something in the
 E-R check page is due to something in the infra scripts, some
 transient issue in the upstream CI platform (or part of it), or
 actually a bug in one or more of the OpenStack projects.

[...]

Sounds like an NP-complete problem, but if you manage to solve it
let me know and I'll turn it into the first line of triage for Infra
bugs. ;)
--
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: retrying)

2014-09-17 Thread Joshua Harlow
On a related and slightly less problematic case is another one like this...

https://github.com/rholder/retrying/issues/11

On Sep 17, 2014, at 8:15 AM, Thomas Goirand z...@debian.org wrote:

 Hi,
 
 I'm horrified by what I just found. I have just found out this in
 glanceclient:
 
  File bla/tests/test_ssl.py, line 19, in module
from requests.packages.urllib3 import poolmanager
 ImportError: No module named packages.urllib3
 
 Please *DO NOT* do this. Instead, please use urllib3 from ... urllib3.
 Not from requests. The fact that requests is embedding its own version
 of urllib3 is an heresy. In Debian, the embedded version of urllib3 is
 removed from requests.
 
 In Debian, we spend a lot of time to un-vendorize stuff, because
 that's a security nightmare. I don't want to have to patch all of
 OpenStack to do it there as well.
 
 And no, there's no good excuse here...
 
 Thomas Goirand (zigo)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PostgreSQL jobs slow in the gate

2014-09-17 Thread Joshua Harlow
Same seen @ 
http://logs.openstack.org/80/121280/32/check/check-tempest-dsvm-postgres-full/395a05d/logs/postgres.txt.gz

Although I'm not sure if those are abnormal or not (seems likely they wouldn't 
be).

Was there a postgres release?

On Sep 17, 2014, at 5:10 PM, Joe Gordon joe.gord...@gmail.com wrote:

 Postgres is also logging a lot of errors: 
 http://logs.openstack.org/63/122263/1/check/check-tempest-dsvm-postgres-full/2f27252/logs/postgres.txt.gz
 
 On Wed, Sep 17, 2014 at 4:49 PM, Clark Boylan cboy...@sapwetik.org wrote:
 Hello,
 
 Recent sampling of test run times shows that our tempest jobs run
 against clouds using PostgreSQL are significantly slower than jobs run
 against clouds using MySQL.
 
 (check|gate)-tempest-dsvm-full has an average run time of 52.9 minutes
 (stddev 5.92 minutes) over 516 runs.
 (check|gate)-tempest-dsvm-postgres-full has an average run time of 73.78
 minutes (stddev 11.01 minutes) over 493 runs.
 
 I think this is a bug and and an important one to solve prior to release
 if we want to continue to care and feed for PostgreSQL support. I
 haven't filed a bug in LP because I am not sure where the slowness is
 and creating a bug against all the projects is painful. (If there are
 suggestions for how to do this in a non painful way I will happily go
 file a proper bug).
 
 Is there interest in fixing this? If not we should probably reconsider
 removing these PostgreSQL jobs from the gate.
 
 
 ++ to getting someone to own and fix this or drop it from the gate.
  
 Note, a quick spot check indicates the increase in job time is not
 related to job setup. Total time before running tempest appears to be
 just over 18 minutes in the jobs I checked.
 
 Thank you,
 Clark
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't possibly be passing in neutron

2014-09-22 Thread Joshua Harlow
Just as an update to what exactly is RHEL python 2.6...

This is the expanded source rpm:

http://paste.ubuntu.com/8405074/

The main one here appears to be:

- python-2.6.6-ordereddict-backport.patch

Full changelog @ http://paste.ubuntu.com/8405082/

Overall I'd personally like to get rid of python 2.6, and move on, but then I'd 
also like to get rid of 2.7 and move on also ;)

- Josh

On Sep 22, 2014, at 11:17 AM, Monty Taylor mord...@inaugust.com wrote:

 On 09/22/2014 10:58 AM, Kevin L. Mitchell wrote:
 On Mon, 2014-09-22 at 10:32 -0700, Armando M. wrote:
 What about:
 
 https://github.com/openstack/neutron/blob/master/test-requirements.txt#L12
 
 Pulling in ordereddict doesn't do anything if your code doesn't use it
 when OrderedDict isn't in collections, which is the case here.  Further,
 there's no reason that _get_collection_kwargs() needs to use an
 OrderedDict: it's initialized in an arbitrary order (generator
 comprehension over a set), then later passed to functions with **, which
 converts it to a plain old dict.
 
 
 So - as an update to this, this is due to RedHat once again choosing to
 backport features from 2.7 into a thing they have labeled 2.6.
 
 We test 2.6 on Centos6 - which means we get RedHat's patched version of
 Python2.6 - which, it turns out, isn't really 2.6 - so while you might
 want to assume that we're testing 2.6 - we're not - we're testing
 2.6-as-it-appears-in-RHEL.
 
 This brings up a question - in what direction do we care/what's the
 point in the first place?
 
 Some points to ponder:
 
 - 2.6 is end of life - so the fact that this is coming up is silly, we
 should have stopped caring about it in OpenStack 2 years ago at least
 - Maybe we ACTUALLY only care about 2.6-on-RHEL - since that was the
 point of supporting it at all
 - Maybe we ACTUALLY care about 2.6 support across the board, in which
 case we should STOP testing using Centos6 which is not actually 2.6
 
 I vote for just amending our policy right now and killing 2.6 with
 prejudice.
 
 (also, I have heard a rumor that there are people running in to problems
 due to the fact that they are deploying onto a two-release-old version
 of Debian. No offense - but there is no way we're supporting that)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] In defence of faking

2014-09-22 Thread Joshua Harlow
I would have to agree, fakes certainly have their benefits.

IMHO, sometimes when you are testing a complex set of interactions you don't 
want to have to mock the full set of interactions when you just want to 
determine whether the end result worked or did not work.

For example, let's say I have a workflow:

A -> B -> C

Where B is composed of 20 different things (or B could itself be composed of 20 
sub-workflows...).

To use mock here you would have to tightly couple your test to whatever those 
workflows are doing internally just to make mock work correctly, when in 
reality you just want to know: did A -> B -> C work as expected, with the 
expected C being created/adjusted/whatever for the given input of A?

An example that I think is useful is one that I have created for faking 
zookeeper:

https://pypi.python.org/pypi/zake*

*mentioned @ http://kazoo.readthedocs.org/en/latest/testing.html#zake

Since it's not typically possible to run zookeeper during testing, or to have a 
strong dependency on zookeeper when running unit tests, the above was created to 
function like a real kazoo zookeeper client, making it possible to trigger the 
same set of things that a real zookeeper client would trigger but without 
requiring zookeeper; this makes it possible to inject data into that fake kazoo 
client, delete data, and test how that affects your code...

This allows users of kazoo to test interactions of complex systems without 
trying to mock out that entire interaction and without having to set up 
zookeeper to do the same (which would couple your tests to a functioning 
zookeeper application).
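
As a rough illustration of how that gets used (going from memory, so treat the 
exact names as approximate rather than authoritative), the fake client stands in 
for the kazoo client:

from zake import fake_client

client = fake_client.FakeClient()
client.start()
# The same calls a real kazoo.client.KazooClient would accept.
client.create("/service/member1", b"alive", makepath=True)
data, znode = client.get("/service/member1")
assert data == b"alive"
client.stop()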

TLDR; imho testing the interaction and expected outcome is easier with fakes, 
and doesn't tightly couple you to an implementation. Mock is great for testing 
methods, small APIs, simple checks, and returned/raised results, but I believe 
the better test is one that tests an interaction in a complex system for a 
desired result (without requiring that full system to be set up to test that 
result), because in the end that complex system is what users use as a whole.

My 2 cents.

On Sep 22, 2014, at 12:36 PM, Robert Collins robe...@robertcollins.net wrote:

 On 23 September 2014 03:58, Matthew Booth mbo...@redhat.com wrote:
 If you missed the inaugural OpenStack Bootstrapping Hour, it's here:
 http://youtu.be/jCWtLoSEfmw . I think this is a fantastic idea and big
 thanks to Sean, Jay and Dan for doing this. I liked the format, the
 informal style and the content. Unfortunately I missed the live event,
 but I can confirm that watching it after the event worked just fine
 (thanks for reading out live questions for the stream!).
 
 I'd like to make a brief defence of faking, which perhaps predictably in
 a talk about mock took a bit of a bashing.
 
 Firstly, when not to fake. As Jay pointed out, faking adds an element of
 complexity to a test, so if you can achieve what you need to with a
 simple mock then you should. But, as the quote goes, you should make
 things as simple as possible, but not simpler.
 
 Here are some simple situations where I believe fake is the better solution:
 
 * Mock assertions aren't sufficiently expressive on their own
 
 For example, imagine your code is calling:
 
 def complex_set(key, value)
 
 You want to assert that on completion of your unit, the final value
 assigned to key was value. This is difficult to capture with mock
 without risking false assertion failures if complex_set sets other keys
 which you aren't interested in, or if key's value is set multiple
 times, but you're only interested in the last one. A little fake
 function which stores the final value assigned to key does this simply
 and accurately without adding a great deal of complexity. e.g.
 
  mykey = [None]
  def fake_complex_set(key, value):
      if key == 'FOO':
          mykey[0] = value
  
  with mock.patch.object(unit, 'complex_set', side_effect=fake_complex_set):
      run_test()
  self.assertEquals('expected', mykey[0])
 
 Summary: fake method is a custom mock assertion.
 
 * A simple fake function is simpler than a complex mock dance
 
  For example, you're mocking 2 functions: start_frobnicating(key) and
 wait_for_frobnication(key). They can potentially be called overlapping
 with different keys. The desired mock return value of one is dependent
 on arguments passed to the other. This is better mocked with a couple of
 little fake functions and some external state, or you risk introducing
 artificial constraints on the code under test.
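  
  A minimal sketch of that (reusing the hypothetical frobnication functions from 
  above and the same mock.patch.object style as the earlier example):
  
  started = {}
  
  def fake_start_frobnicating(key):
      started[key] = True
  
  def fake_wait_for_frobnication(key):
      # What this returns depends on what was passed to the *other* function.
      if not started.get(key):
          raise RuntimeError('%s was never started' % key)
      return 'frobnicated-%s' % key
  
  with mock.patch.object(unit, 'start_frobnicating',
                         side_effect=fake_start_frobnicating), \
       mock.patch.object(unit, 'wait_for_frobnication',
                         side_effect=fake_wait_for_frobnication):
      run_test()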
 
 Jay pointed out that faking methods creates more opportunities for
 errors. For this reason, in the above cases, you want to keep your fake
 function as simple as possible (but no simpler). However, there's a big
 one: the fake driver!
 
 This may make less sense outside of driver code, although faking the
 image service came up in the talk. Without looking at the detail, that
 doesn't necessarily sound awful to me, depending on context. In the
 driver, though, the ultimate measure of correctness isn't a Nova call:
 it's the effect 

Re: [openstack-dev] [Heat] Convergence: Backing up template instead of stack

2014-09-23 Thread Joshua Harlow
I believe heat has its own dependency graph implementation but if that was 
switched to networkx[1] that library has a bunch of nice read/write 
capabilities.

See: https://github.com/networkx/networkx/tree/master/networkx/readwrite

And one made for sqlalchemy @ https://pypi.python.org/pypi/graph-alchemy/

Networkx has worked out pretty well for taskflow (and I believe mistral is also 
using it).

[1] https://networkx.github.io/

Something to think about...
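
As a tiny illustration of what that buys you (not heat code, just a sketch):

import json

import networkx as nx
from networkx.readwrite import json_graph

g = nx.DiGraph()
g.add_edge("server", "volume")   # server depends on volume
g.add_edge("server", "port")
g.add_edge("port", "network")

# Persist the graph (say, into a DB column) and restore it later.
blob = json.dumps(json_graph.node_link_data(g))
restored = json_graph.node_link_graph(json.loads(blob))

# Topological order: each resource appears before the things it depends on
# (reverse it to build the leaves first).
print(list(nx.topological_sort(restored)))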

On Sep 23, 2014, at 11:32 AM, Zane Bitter zbit...@redhat.com wrote:

 On 23/09/14 09:44, Anant Patil wrote:
 On 23-Sep-14 09:42, Clint Byrum wrote:
 Excerpts from Angus Salkeld's message of 2014-09-22 20:15:43 -0700:
 On Tue, Sep 23, 2014 at 1:09 AM, Anant Patil anant.pa...@hp.com wrote:
 
 Hi,
 
 One of the steps in the direction of convergence is to enable Heat
 engine to handle concurrent stack operations. The main convergence spec
 talks about it. Resource versioning would be needed to handle concurrent
 stack operations.
 
 As of now, while updating a stack, a backup stack is created with a new
 ID and only one update runs at a time. If we keep the raw_template
  linked to its previous completed template, i.e. have a backup of the
  template instead of the stack, we avoid having a backup of the stack.
 
 Since there won't be a backup stack and only one stack_id to be dealt
 with, resources and their versions can be queried for a stack with that
 single ID. The idea is to identify resources for a stack by using stack
 id and version. Please let me know your thoughts.
 
 
 Hi Anant,
 
 This seems more complex than it needs to be.
 
 I could be wrong, but I thought the aim was to simply update the goal 
 state.
 The backup stack is just the last working stack. So if you update and there
 is already an update you don't need to touch the backup stack.
 
 Anyone else that was at the meetup want to fill us in?
 
 
 The backup stack is a device used to collect items to operate on after
 the current action is complete. It is entirely an implementation detail.
 
 Resources that can be updated in place will have their resource record
 superseded, but retain their physical resource ID.
 
 This is one area where the resource plugin API is particularly sticky,
 as resources are allowed to raise the replace me exception if in-place
 updates fail. That is o-k though, at that point we will just comply by
 creating a replacement resource as if we never tried the in-place update.
 
 In order to facilitate this, we must expand the resource data model to
 include a version. Replacement resources will be marked as current and
 to-be-removed resources marked for deletion. We can also keep all current
 - 1 resources around to facilitate rollback until the stack reaches a
 complete state again. Once that is done, we can remove the backup stack.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 Backup stack is a good way to take care of rollbacks or cleanups after
 the stack action is complete. By cleanup I mean the deletion of
 resources that are no longer needed after the new update. It works very
 well when one engine is processing the stack request and the stacks are
 in memory.
 
 It's actually a fairly terrible hack (I wrote it ;)
 
 It doesn't work very well because in practice during an update there are 
 dependencies that cross between the real stack and the backup stack (due to 
 some resources remaining the same or being updated in place, while others are 
 moved to the backup stack ready for replacement). So in the event of a 
 failure that we don't completely roll back on the spot, we lose some 
 dependency information.
 
 As a step towards distributing the stack request processing and making
 it fault-tolerant, we need to persist the dependency task graph. The
 backup stack can also be persisted along with the new graph, but then
 the engine has to traverse both the graphs to proceed with the operation
 and later identify the resources to be cleaned-up or rolled back using
 the stack id. There would be many resources for the same stack but
 different stack ids.
 
 Right, yeah this would be a mistake because in reality there is only one 
 graph, so that's how we need to model it internally.
 
 In contrast, when we store the current dependency task graph(from the
 latest request) in DB, and version the resources, we can identify those
 resources that need to be rolled-back or cleaned up after the stack
 operations is done, by comparing their versions. With versioning of
 resources and template, we can avoid creating a deep stack of backup
 stacks. The processing of stack operation can happen from multiple
 engines, and IMHO, it is simpler when all the engines just see one stack
 and versions of resources, instead of seeing many stacks with many
 resources for each stack.
 
 Bingo.
 
 I think all you need to do is record in the resource the 

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Joshua Harlow
Sounds like an interesting project/goal and will be interesting to see where 
this goes.

A few questions/comments:

How much golang will people be exposed to with this addition?

Seeing that this could be the first project using 'go', it will be interesting 
to see where this goes (since afaik none of the infra support exists, and 
people aren't likely to be as familiar with go vs python in the openstack 
community overall).

What are your thoughts on how this will affect the existing openstack container 
effort?

I see that kubernetes isn't exactly a small project either (~90k LOC, for those 
who use these types of metrics), so I wonder how that will affect people 
getting involved here; aka, who has the resources/operators/other... available 
to actually setup/deploy/run kubernetes, when operators are likely still just 
struggling to run openstack itself (at least operators are getting used to the 
openstack warts; a new set of kubernetes warts might not be so helpful).

On Sep 23, 2014, at 3:40 PM, Steven Dake sd...@redhat.com wrote:

  Hi folks,
  
  I'm pleased to announce the development of a new project Kolla which is Greek 
  for glue :). Kolla has a goal of providing an implementation that deploys 
  OpenStack using Kubernetes and Docker. This project will begin as a StackForge 
  project separate from the TripleO/Deployment program code base. Our long term 
  goal is to merge into the TripleO/Deployment program rather than create a new 
  program.
  
  Docker is a container technology for delivering hermetically sealed 
  applications and has about 620 technical contributors [1]. We intend to 
  produce docker images for a variety of platforms beginning with Fedora 20. We 
  are completely open to any distro support, so if folks want to add a new Linux 
  distribution to Kolla please feel free to submit patches :)
  
  Kubernetes at the most basic level is a Docker scheduler produced by and used 
  within Google [2]. Kubernetes has in excess of 100 technical contributors. 
  Kubernetes is more than just a scheduler; it provides additional functionality 
  such as load balancing and scaling and has a significant roadmap.
  
  The #tripleo channel on Freenode will be used for Kolla developer and user 
  communication. Even though we plan to become part of the Deployment program 
  long term, as we experiment we believe it is best to hold a separate weekly 
  one hour IRC meeting on Mondays at 2000 UTC in #openstack-meeting [3].
  
  This project has been discussed with the current TripleO PTL (Robert Collins) 
  and he seemed very supportive and agreed with the organization of the project 
  outlined above. James Slagle, a TripleO core developer, has kindly offered to 
  liaise between Kolla and the broader TripleO community.
  
  I personally feel it is necessary to start from a nearly empty repository when 
  kicking off a new project. As a result, there is limited code in the 
  repository [4] at this time. I suspect folks will start cranking out a 
  kick-ass implementation once the Kolla/Stackforge integration support is 
  reviewed by the infra team [5].
  
  The initial core team is composed of Steven Dake, Ryan Hallisey, James 
  Lebocki, Jeff Peeler, James Slagle, Lars Kellogg-Sedman, and David Vossel. The 
  core team will be reviewed every 6 weeks to add fresh developers.
  
  Please join the core team in designing and inventing this rockin' new 
  technology!
  
  Regards
  
  -steve
  
  ~~
  
  [1] https://github.com/docker/docker
  [2] https://github.com/GoogleCloudPlatform/kubernetes
  [3] https://wiki.openstack.org/wiki/Meetings/Kolla
  [4] https://github.com/jlabocki/superhappyfunshow
  [5] https://review.openstack.org/#/c/122972/
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for tomorrow meeting at 2000 UTC

2013-12-11 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in 
#openstack-meeting on thursdays, 2000 UTC. The next meeting is tomorrow, 
2013-12-12!!!

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow

## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Discuss any current integration work (or problems) or help needed.
- Discuss about any other potential new use-cases for said library.
- Discuss about any other ideas, questions and answers (and more!).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Neutron] How do we know a host is ready to have servers scheduled onto it?

2013-12-12 Thread Joshua Harlow
Maybe time to revive something like:

https://review.openstack.org/#/c/12759/


From experience, all sites (and those internal to yahoo) provide a /status
(or equivalent) that is used for all sorts of things, from basic
load-balancing up/down checks to actually introspecting the
state of the process (or getting basics about what the process is doing).
Typically this is not exposed to the public (it's why
http://www.yahoo.com/status works for me but not for u). It seems like
something like that could help (but of course not completely solve) the
type of response jay mentioned.

-Josh

On 12/12/13 10:10 AM, Jay Pipes jaypi...@gmail.com wrote:

On 12/12/2013 12:53 PM, Kyle Mestery wrote:
 On Dec 12, 2013, at 11:44 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 12/12/2013 12:36 PM, Clint Byrum wrote:
 Excerpts from Russell Bryant's message of 2013-12-12 09:09:04 -0800:
 On 12/12/2013 12:02 PM, Clint Byrum wrote:
 I've been chasing quite a few bugs in the TripleO automated bring-up
 lately that have to do with failures because either there are no
valid
 hosts ready to have servers scheduled, or there are hosts listed and
 enabled, but they can't bind to the network because for whatever
reason
 the L2 agent has not checked in with Neutron yet.

 This is only a problem in the first few minutes of a nova-compute
host's
 life. But it is critical for scaling up rapidly, so it is important
for
 me to understand how this is supposed to work.

 So I'm asking, is there a standard way to determine whether or not a
 nova-compute is definitely ready to have things scheduled on it?
This
 can be via an API, or even by observing something on the
nova-compute
 host itself. I just need a definitive signal that the compute host
is
 ready.

 If a nova compute host has registered itself to start having
instances
 scheduled to it, it *should* be ready.  AFAIK, we're not doing any
 network sanity checks on startup, though.

 We already do some sanity checks on startup.  For example,
nova-compute
 requires that it can talk to nova-conductor.  nova-compute will
block on
 startup until nova-conductor is responding if they happened to be
 brought up at the same time.

 We could do something like this with a networking sanity check if
 someone could define what that check should look like.

 Could we ask Neutron if our compute host has an L2 agent yet? That
seems
 like a valid sanity check.

 ++

 This makes sense to me as well. Although, not all Neutron plugins have
 an L2 agent, so I think the check needs to be more generic than that.
 For example, the OpenDaylight MechanismDriver we have developed
 doesn't need an agent. I also believe the Nicira plugin is agent-less,
 perhaps there are others as well.

 And I should note, does this sort of integration also happen with
cinder,
 for example, when we're dealing with storage? Any other services which
 have a requirement on startup around integration with nova as well?

Right, it's more general than "is the L2 agent alive and running". It's
more about having each service understand the relative dependencies it
has on other supporting services.

For instance, have each service implement a:

GET /healthcheck

that would return either a 200 OK or 409 Conflict with the body
containing a list of service types that it is waiting to hear back from
in order to provide a 200 OK for itself.
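
A bare-bones sketch of that shape (illustrative only, not an existing API in any
of the projects; the pending-services registry here is hypothetical):

PENDING = {'neutron-l2-agent'}   # hypothetical set of services not yet heard from

def healthcheck_app(environ, start_response):
    # Single route in this sketch: GET /healthcheck
    if not PENDING:
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'OK']
    body = ('waiting on: %s' % ', '.join(sorted(PENDING))).encode('utf-8')
    start_response('409 Conflict', [('Content-Type', 'text/plain')])
    return [body]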

Anyway, just some thoughts...

-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re-initializing or dynamically configuring cinder driver

2013-12-15 Thread Joshua Harlow
It depends on the corruption that u are willing to tolerate. SIGTERM means 
the process just terminates; what if said process was 3/4 of the way through some 
operation (create_volume for example)??

Personally I am willing to tolerate zero corruption; reliability and 
consistency are foundational things for me. Others may be more tolerant though; 
seems worth further discussion IMHO.

Sent from my really tiny device...

On Dec 15, 2013, at 8:39 PM, iKhan 
ik.ibadk...@gmail.commailto:ik.ibadk...@gmail.com wrote:

How about sending SIGTERM to child processes and then starting them? I know 
this is the hard way of achieving the objective and the SIGHUP approach will handle 
it more gracefully. As you mentioned it is a major change; tentatively, can we 
use SIGTERM to achieve the objective?


On Mon, Dec 16, 2013 at 9:50 AM, Joshua Harlow 
harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com wrote:
In your proposal does it mean that the child process will be restarted (that 
means kill -9 or SIGINT??)? If so, without taskflow to help (or some other solution) 
that means operations in progress will be corrupted/lost. That seems bad...

A SIGHUP approach could be handled more gracefully (but it does require some 
changes in the underlying codebase to do this refresh).


Sent from my really tiny device...

On Dec 15, 2013, at 3:11 AM, iKhan 
ik.ibadk...@gmail.commailto:ik.ibadk...@gmail.com wrote:

I don't know if this is being planned for Icehouse; if not, proposing an 
approach will probably help. We have seen the cinder-volume service initialization 
part. Similarly, if we can get our hands on the child processes that run under the 
cinder-volume service, we could terminate those processes and restart them along 
with the newly added backends. That might help us achieve the target.


On Sun, Dec 15, 2013 at 12:49 PM, Joshua Harlow 
harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com wrote:
I don't currently know of a one size fits all solution here. There was talk at 
the summit of having the cinder app respond to a SIGHUP signal and attempting 
to reload config on this signal. Dynamic reloading is tricky business 
(basically u need to unravel anything holding references to the old config 
values/affected by the old config values).

I would start with a simple trial of this if u want to do it. Part of the issue 
will likely be oslo.config (can that library understand dynamic reloading?) and 
then cinder drivers themselves (perhaps u need to create a registry of drivers 
that can dynamically reload on config reloads?). Start out with something 
simple, isolate the reloading as much as u can to a single area (something like 
the mentioned registry of objects that can be reloaded when a SIGHUP arrives) 
and see how it goes.

It does seem like a nice feature if u can get it right :-)
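
Something along these lines is roughly the shape of it (a sketch only; the
registry and the reload hooks are hypothetical, and whether oslo.config can
re-read its files on demand is exactly the open question above):

import signal

from oslo.config import cfg

CONF = cfg.CONF
RELOADABLES = []   # driver objects exposing a (hypothetical) reload_configuration()


def register_reloadable(obj):
    RELOADABLES.append(obj)


def _handle_sighup(signum, frame):
    # Assumed hook: oslo.config would need a way to re-read its config files;
    # the method name here is a guess, not a known API.
    CONF.reload_config_files()
    for obj in RELOADABLES:
        obj.reload_configuration()


signal.signal(signal.SIGHUP, _handle_sighup)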

Sent from my really tiny device...

On Dec 13, 2013, at 8:57 PM, iKhan 
ik.ibadk...@gmail.commailto:ik.ibadk...@gmail.com wrote:

Hi All,

At present a cinder driver can only be configured by adding entries in the conf 
file. Once these driver related entries are modified or added in the conf file, we 
need to restart the cinder-volume service to validate the conf entries and create a 
child process that runs in the background.

I am thinking of a way to re-initialize or dynamically configure a cinder driver, 
so that I can accept configuration from the user on the fly and perform operations. 
I think the solution lies somewhere around oslo.config.cfg, but I am still 
unclear about how re-initializing can be achieved.

Let me know if anyone here is aware of any approach to re-initialize or 
dynamically configure a driver.

--
Thanks,
IK
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks,
Ibad Khan
9686594607tel:9686594607
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks,
Ibad Khan
9686594607
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re-initializing or dynamically configuring cinder driver

2013-12-16 Thread Joshua Harlow
Ah, u might be able to do what u said. Try it out and see how far u can get :)

I would be interested to know how u plan on waiting for all existing operations 
to finish. Maybe it's not so hard, not really sure...

Sent from my really tiny device...

On Dec 15, 2013, at 9:43 PM, iKhan 
ik.ibadk...@gmail.commailto:ik.ibadk...@gmail.com wrote:

Ok, I thought we could make cinder-volume aware of the SIGTERM call and make sure 
it terminates after cleaning up all the existing operations. If that's not possible 
then probably SIGHUP is the only solution. :(


On Mon, Dec 16, 2013 at 10:25 AM, Joshua Harlow 
harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com wrote:
It depends on the corruption that u are willing to tolerate. Sigterm means 
the process just terminates, what if said process was 3/4 through some 
operation (create_volume for example)??

Personally I am willing to tolerate zero corruption, reliability and 
consistency are foundational things for me. Others may be more tolerant though, 
seems worth further discussion IMHO.


Sent from my really tiny device...

On Dec 15, 2013, at 8:39 PM, iKhan 
ik.ibadk...@gmail.commailto:ik.ibadk...@gmail.com wrote:

How about sending SIGTERM to child processes and then starting them? I know 
this is the hard way of achieving the objective and SIGHUP approach will handle 
it more gracefully. As you mentioned it is a major change, tentatively can we 
use SIGTERM to achieve the objective?


On Mon, Dec 16, 2013 at 9:50 AM, Joshua Harlow 
harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com wrote:
In your proposal does it means that the child process will be restarted (that 
means kill -9 or sigint??). If so, without taskflow to help (or other solution) 
that means operations in progress will be corrupted/lost. That seems bad...

A SIGHUP approach could be handled more gracefully (but it does require some 
changes in the underlying codebase to do this refresh).


Sent from my really tiny device...

On Dec 15, 2013, at 3:11 AM, iKhan 
ik.ibadk...@gmail.commailto:ik.ibadk...@gmail.com wrote:

I don't know if this is being planned in Icehouse, if not probably proposing an 
approach will help. We have seen cinder-volume service initialization part. 
Similarly if we can get our hands on child process that are running under 
cinder-volume service, if we terminate those process and restart them along 
with newly added backends. It might help us achieve the target.


On Sun, Dec 15, 2013 at 12:49 PM, Joshua Harlow 
harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com wrote:
I don't currently know of a one size fits all solution here. There was talk at 
the summit of having the cinder app respond to a SIGHUP signal and attempting 
to reload config on this signal. Dynamic reloading is tricky business 
(basically u need to unravel anything holding references to the old config 
values/affected by the old config values).

I would start with a simple trial of this if u want to so it, part if the issue 
will likely be oslo.config (can that library understand dynamic reloading?) and 
then cinder drivers themselves (perhaps u need to create a registry of drivers 
that can dynamically reload on config reloads?). Start out with something 
simple, isolate the reloading as much as u can to a single area (something like 
the mentioned registry of objects that can be reloaded when a SIGHUP arrives) 
and see how it goes.

It does seem like a nice feature if u can get it right :-)

Sent from my really tiny device...

On Dec 13, 2013, at 8:57 PM, iKhan 
ik.ibadk...@gmail.commailto:ik.ibadk...@gmail.com wrote:

Hi All,

At present cinder driver can be only configured with adding entries in conf 
file. Once these driver related entries are modified or added in conf file, we 
need to restart cinder-volume service to validate the conf entries and create a 
child process that runs in background.

I am thinking of a way to re-initialize or dynamically configure cinder driver. 
So that I can accept the configuration from user on fly and perform operations. 
I think solution lies somewhere around oslo.config.cfg, but I am still 
unclear about how re-initializing can be achieved.

Let know if anyone here is aware of any approach to re-initialize or 
dynamically configure a driver.

--
Thanks,
IK
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks,
Ibad Khan
9686594607tel:9686594607
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [State-Management] Agenda for today meeting at 2000 UTC

2013-12-19 Thread Joshua Harlow
Hi all,


The [state-management] project team holds a weekly meeting in 
#openstack-meeting on thursdays, 2000 UTC. The next meeting is today, 
2013-12-19!!!


As usual, everyone is welcome :-)


Link: https://wiki.openstack.org/wiki/Meetings/StateManagement

Taskflow: https://wiki.openstack.org/TaskFlow


## Agenda (30-60 mins):


- Discuss any action items from last meeting.

- Discuss about any other potential new use-cases for said library.

- Discuss about any other ideas, questions and answers (and more!).


Any other topics are welcome :-)


See you all soon!


--


Joshua Harlow


It's openstack, relax... | harlo...@yahoo-inc.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] No meeting this week

2013-12-23 Thread Joshua Harlow
Since its xmas in most of the world lets skip the IRC meeting this week 
(normally on thursdays).

See you all soon and have a great vacation!

P.S. #openstack-state-management if u feel the need to chat :-)

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][mistral] scheduled tasks

2013-12-30 Thread Joshua Harlow
Any reason to not use taskflow (https://wiki.openstack.org/wiki/TaskFlow) to 
help u here??

I think it could be easily adapted to do what u want, and would save u from 
having to recreate the same task execution logic that everyone seems to rebuild…
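
For what it's worth, the basic shape is small; roughly something like this (from 
memory, so treat the names as approximate, and it is obviously not a drop-in for 
trove's scheduling needs):

import taskflow.engines
from taskflow.patterns import linear_flow
from taskflow import task


class TakeBackup(task.Task):
    def execute(self, instance_id):
        # Whatever "run a backup of this instance" means for trove.
        print("backing up %s" % instance_id)


flow = linear_flow.Flow('scheduled-backup').add(TakeBackup())
# Inputs are bound to task execute() arguments by name.
taskflow.engines.run(flow, store={'instance_id': 'abc123'})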

-Josh

From: Greg Hill greg.h...@rackspace.commailto:greg.h...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Monday, December 30, 2013 at 9:59 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [trove][mistral] scheduled tasks

I've begun working on the scheduled tasks feature that will allow automated 
backups (and other things) in trove.

Here's the blueprint:
https://wiki.openstack.org/wiki/Trove/scheduled-tasks

I've heard some mention that mistral might be an option rather than building 
something into trove.  I did some research and it seems like it *might* be a 
good fit, but it also seems like a bit of overkill for something that could be 
built in to trove itself pretty easily.  There's also the security concern of 
having to give mistral access to the trove management API in order to allow it 
to fire off backups and other tasks on behalf of users, but maybe that's just 
my personal paranoia and it's really not much of a concern.

My current plan is to not use mistral, at least for the original 
implementation, because it's not yet ready and we have a fairly urgent need for 
the functionality.  We could make it an optional feature later for people who 
are running mistral and want to use it for this purpose.

I'd appreciate any and all feedback before I get too far along.

Greg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][mistral] scheduled tasks

2013-12-31 Thread Joshua Harlow
Agreed, taskflow doesn't currently provide scheduling, as it was thought that 
reliable execution that can be restarted and resumed is the foundation that 
someone using taskflow can easily provide scheduling on top of... Better IMHO to 
have a project doing this foundation well (since openstack would benefit from 
this) than try to pack so many features in that it does none of them well (this 
kind of kitchen sink approach seems to happen more often than not, sadly).

But in reality it's all about compromises and finding the solution that makes 
sense and works, and happy new year!! :P

Sent from my really tiny device...

On Dec 30, 2013, at 9:03 PM, Renat Akhmerov 
rakhme...@mirantis.commailto:rakhme...@mirantis.com wrote:

Greg,

Georgy is right. We’re now actively working on PoC and we’ve already 
implemented the functionality we initially planned, including cron-based 
scheduling. You can take a look at our repo and evaluate what we’ve done, we’d 
be very glad to hear some feedback from anyone potentially interested in 
Mistral. We were supposed to deliver PoC in the end of December, however, we 
decided not to rush and include several really cool things that we came up with 
while working on PoC, they should demonstrate the whole idea of Mistral much 
better and expose functionality for more potential use cases. A couple of days 
ago I sent out the information about additional changes in DSL that we want to 
implement (etherpad: https://etherpad.openstack.org/p/mistral-poc), so if you’d 
like please join the discussion and let us know how we can evolve the project 
to better fit your needs. In fact, even though we call it PoC it’s already in a 
good shape and pretty soon (~1.5 month) is going to be mature enough to use it 
as a dependency for other projects.

As far as security goes, we thought about this and we have a vision of how it 
could be implemented. Generally, later on we’re planning to implement sort of 
Role Based Access Control (RBAC) to, first of all, isolate user workbooks 
(definition of tasks, actions, events) from each other and deal with access 
patterns to OpenStack services. We would encourage you to file a BP with a 
description of what would be needed by Trove in that regard.

I looked at https://wiki.openstack.org/wiki/Trove/scheduled-tasks and at the 
first glance Mistral looks a good fit here, especially if you’re interested in 
a standalone REST service with its capabilities like execution monitoring, 
history, language independence and HA (i.e. you schedule backups via Mistral 
and Trove itself shouldn’t care about availability of any functionality related 
to scheduling). TaskFlow may also be helpful in case your scheduled jobs are 
representable as flows using one of TaskFlow patterns. However, in my 
understanding you’ll have to implement scheduling yourself since TaskFlow does 
not support it now, at least I didn’t find anything like that (Joshua can 
provide more details on that).

Thanks.

Renat Akhmerov
@Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 upgrade

2014-01-03 Thread Joshua Harlow
So taskflow was tested with the version of sqlalchemy that was available and in 
the requirements at the time of its 0.1 release (taskflow syncs its 
requirements from the same global requirements). From what I remember this is 
the same requirement that everyone else is bound to:

SQLAlchemy>=0.7.8,<=0.7.99

I can unpin it if this is desired (the openstack requirements repo has the same 
version restriction). What would be recommended here? As more code moves to 
pypi reusable libraries (oslo.db when it arrives comes to mind) I think this 
will be hit more often. Let's come up with a good strategy to follow.

Thoughts??

Sent from my really tiny device...

On Jan 3, 2014, at 7:30 AM, Sean Dague 
s...@dague.netmailto:s...@dague.net wrote:

Given that sqla 0.9 just came out, I wanted to explore, again, what the state 
of the world was with sqla 0.8 (especially given that Ubuntu and Red Hat are 
both shipping 0.8 in their OpenStack bundles) - 
https://review.openstack.org/#/c/64831/

The answer is not great. But more importantly, the answer is actually worse 
than the last time we checked, which I think gives this some urgency to move 
forward.

One of the key problems is taskflow, which has an sqla pin, which breaks all 
the cinder entry points. This was actually entirely the problem global 
requirements was meant to address, but it really can't when there are nested 
requirements like this, in stackforge projects that aren't installing via git 
(so we can rewrite their requirements).

So why does taskflow have the SQLA pin? And if the answer is global 
requirements, then taskflow can't be installing via pypi like it is now, 
because it's by nature taking a wedge. So we need a real solution here to un 
bind us, because right now it's actually impossible to upgrade sqla in 
requirements because of taskflow.

   -Sean

--
Sean Dague
Samsung Research America
s...@dague.netmailto:s...@dague.net / 
sean.da...@samsung.commailto:sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 upgrade

2014-01-03 Thread Joshua Harlow
Ok, I think I'm fine with that (although not really sure what that
entails).

What does the living under the 'oslo program' change?

Does that entail getting sucked into the incubator (which seems to be what
your graduating link is about).

I don't think its a good idea for taskflow to be in the 'incubator'.
Taskflow is meant to be just like any other 3rd party library.

Or were u mainly referring to the 'devstack-gate integration' section?

I'd be interested in hearing Doug's opinion here (cc'd him) as
https://etherpad.openstack.org/p/icehouse-oslo-splitting-the-incubator
would seem to cause even more of these types of new 3rd party libraries to
appear on pypi (and therefore causing similar issues of transitive
dependencies as taskflow).

Will bug u on #openstack-infra soon :-)

On 1/3/14, 9:05 AM, Sean Dague s...@dague.net wrote:

On 01/03/2014 11:37 AM, Joshua Harlow wrote:
 So taskflow was tested with the version of sqlalchemy that was available
 and in the requirements at the time of its 0.1 release (taskflow syncs
 it's requirements from the same global requirements). From what I
 remember this is the same requirement that everyone else is bound to:

 SQLAlchemy>=0.7.8,<=0.7.99

 I can unpin it if this is desired (the openstack requirements repo has
 the same version restriction). What would be recommended here? As more
 code moves to pypi reusable libraries (oslo.db when it arrives comes to
 mind) I think this will be hit more often. Let's come up with a good
 strategy to follow.

 Thoughts??

So I think that given taskflow's usage, it really needs to live under
the Oslo program, and follow the same rules that are applied to oslo
libraries. (https://wiki.openstack.org/wiki/Oslo#Graduation)

Which means it needs to be part of the integrated gate, so we can update
it's requirements globally. It also means that changes to it will be
gated on full devstack runs.

We can work through the details on #openstack-infra. ttx has been doing
the same for oslo.rootwrap this week.

   -Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 upgrade

2014-01-03 Thread Joshua Harlow
Sounds good to me. 

Talked on #openstack-infra with some folks there and just awaiting next
steps.

Doesn't seem like should be anything to hard to adjust/move/...

-Josh

On 1/3/14, 11:27 AM, Sean Dague s...@dague.net wrote:

On 01/03/2014 12:45 PM, Joshua Harlow wrote:
 Ok, I think I'm fine with that (although not really sure what that
 entails).

 What does the living under the 'oslo program' change?

 Does that entail getting sucked into the incubator (which seems to be
what
 your graduating link is about).

 I don't think its a good idea for taskflow to be in the 'incubator'.
 Taskflow is meant to be just like any other 3rd party library.

I didn't mean the incubator, I meant like oslo.* libs that we've spun
out already.

 Or were u mainly referring to the 'devstack-gate integration' section?

Correct. Just to understand what the libs live under. Basically taskflow
is getting deeply integrated into projects in the same way oslo.* libs
are, and as such, given it has non trivial requirements of it's own, we
have to treat it like all the other OpenStack components and
symmetrically gate on it.

That will guarantee you can't release a taskflow library that can break
OpenStack, because we'll be testing every commit, which is goodness.

 I'd be interested in hearing dougs opinion here (cc'd him) as
 https://etherpad.openstack.org/p/icehouse-oslo-splitting-the-incubator
 would seem to cause even more of these types of new 3rd party libraries
to
 appear on pypi (and therefore causing similar issues of transitive
 dependencies as taskflow).

 Will bug u on #openstack-infra soon :-)

Definitely think doug should weigh in as well.

   -Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 upgrade

2014-01-03 Thread Joshua Harlow
I'd be up for that 'dual' gating. It would help make sure that nothing major is 
breaking in the next version, as well as that the version released on pypi isn't 
causing problems.

Although git head gating does seem odd, especially as git head/trunk is where 
things change (and should be allowed to break and change, change is good) and 
the level of false positives that would be raised might become too much of a 
pain. I'd rather not discourage innovation on trunk if we can avoid it; that is 
what stable releases are for.

Btw the sqlalchemy changes (unpinning) should be fine.

Ivan did a couple of tests (basic stuff, not full integration).

- https://review.openstack.org/#/c/64869/
- https://review.openstack.org/#/c/64881/

Both passed basic unit tests (full integration against real mysql, not sqlite 
will happen soon I hope with https://bugs.launchpad.net/taskflow/+bug/1265886).

If we really need to I can push out a 0.1.2 with the unpinned version (one of 
the above reviews).

-Josh

From: Doug Hellmann 
doug.hellm...@dreamhost.commailto:doug.hellm...@dreamhost.com
Date: Friday, January 3, 2014 at 11:44 AM
To: Joshua Harlow harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org, 
Sean Dague s...@dague.netmailto:s...@dague.net
Subject: Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 
upgrade




On Fri, Jan 3, 2014 at 12:45 PM, Joshua Harlow 
harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com wrote:
Ok, I think I'm fine with that (although not really sure what that
entails).

What does the living under the 'oslo program' change?

Does that entail getting sucked into the incubator (which seems to be what
your graduating link is about).

 I don't think it's a good idea for taskflow to be in the 'incubator'.
Taskflow is meant to be just like any other 3rd party library.

No, as we discussed in Hong Kong, there's no reason to add taskflow to the 
incubator.

Whether or not it needs to be part of the oslo program (or any other program) 
is a separate question. I'm not opposed to bringing it in, but didn't see the 
point when it came up at the summit.

I understand that moving taskflow into oslo would avoid the policy decision we 
have in place to not do symmetric gating on unreleased versions of things not 
owned by the OpenStack project. However, I don't know if we want to be 
testing against the git head of libraries no matter where they live. As fungi 
pointed out on IRC, gating against pre-release versions of libraries may allow 
us to reach a state where the software works when installed from git, but not 
from the released packages.

It seems safer to gate changes to libraries against the apps' trunk (to avoid 
making backwards-incompatible changes), and then gate changes to the apps 
against the released libraries (to ensure they work with something available to 
be packaged by the distros). There are lots and lots of version numbers 
available to us, so I see no problem with releasing new versions of libraries 
frequently.

Am I missing something that makes that not work?

Doug



Or were u mainly referring to the 'devstack-gate integration' section?

I'd be interested in hearing dougs opinion here (cc'd him) as
https://etherpad.openstack.org/p/icehouse-oslo-splitting-the-incubator
would seem to cause even more of these types of new 3rd party libraries to
appear on pypi (and therefore causing similar issues of transitive
dependencies as taskflow).

Will bug u on #openstack-infra soon :-)

On 1/3/14, 9:05 AM, Sean Dague s...@dague.netmailto:s...@dague.net wrote:

On 01/03/2014 11:37 AM, Joshua Harlow wrote:
 So taskflow was tested with the version of sqlalchemy that was available
 and in the requirements at the time of its 0.1 release (taskflow syncs
 it's requirements from the same global requirements). From what I
 remember this is the same requirement that everyone else is bound to:

 SQLAlchemy>=0.7.8,<=0.7.99

 I can unpin it if this is desired (the openstack requirements repo has
 the same version restriction). What would be recommended here? As more
 code moves to pypi reusable libraries (oslo.db when it arrives comes to
 mind) I think this will be hit more often. Let's come up with a good
 strategy to follow.

 Thoughts??

So I think that given taskflow's usage, it really needs to live under
the Oslo program, and follow the same rules that are applied to oslo
libraries. (https://wiki.openstack.org/wiki/Oslo#Graduation)

Which means it needs to be part of the integrated gate, so we can update
it's requirements globally. It also means that changes to it will be
gated on full devstack runs.

We can work through the details on #openstack-infra. ttx has been doing
the same for oslo.rootwrap this week.

   -Sean

--
Sean Dague
Samsung Research America
s...@dague.netmailto:s...@dague.net / 
sean.da...@samsung.commailto:sean.da...@samsung.com
http

Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 upgrade

2014-01-03 Thread Joshua Harlow
Since the library model is what most everyone else uses outside of
openstack (I assume?), what can we do to get there so that this model works
as well?

Expanding dependencies recursively seems like it could help? This could
then detect transitive dependency issues (and doesn't seem so hard to do).
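
To make that concrete, here's a rough, untested sketch of the kind of check I
mean, using pkg_resources against whatever happens to be installed (purely an
illustration, not anything infra runs today):

import pkg_resources

def find_requirement_conflicts():
    # Walk every installed distribution and flag any declared requirement
    # that the currently installed set of packages no longer satisfies.
    conflicts = []
    for dist in pkg_resources.working_set:
        for req in dist.requires():
            try:
                pkg_resources.get_distribution(req)
            except (pkg_resources.DistributionNotFound,
                    pkg_resources.VersionConflict) as exc:
                conflicts.append((dist.project_name, str(req), exc))
    return conflicts

if __name__ == '__main__':
    for owner, requirement, problem in find_requirement_conflicts():
        print('%s requires %s: %s' % (owner, requirement, problem))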

I like the idea of 'gate on all the things' (that I've heard come up
before) but I don't know if it's possible? If we hooked into the pypi
upload 'stream' it would seem like we could automatically trigger
openstack builds when a known dependency (or dependency of a
dependency...) is uploaded (maybe?). That would be pretty neat.
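
Purely illustrative, but assuming PyPI's XML-RPC changelog API behaves as
documented, a naive watcher could look roughly like this (the watched package
names and the "trigger a build" part are made up for the example):

import time
import xmlrpclib  # xmlrpc.client on python 3

WATCHED = set(['SQLAlchemy', 'taskflow'])  # hypothetical packages to watch

client = xmlrpclib.ServerProxy('https://pypi.python.org/pypi')
since = int(time.time())

while True:
    # changelog(since) returns (name, version, timestamp, action) tuples for
    # everything that changed on pypi after the given unix timestamp.
    for name, version, timestamp, action in client.changelog(since):
        if name in WATCHED:
            print('%s %s: %s -- trigger a build here' % (name, version, action))
    since = int(time.time())
    time.sleep(300)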

In general it really seems like having more libraries, not less, is ideal
(making us solve this issue whether we want to or not really). Libraries
can then be used outside of openstack (taskflow is already being
used elsewhere also), and libraries create well-defined apis and boundaries
between programs (...). I know they also create this dependency hell
problem (anvil has been hitting these same issues for a while too). But I
think we can figure out a solution that fits both worlds, the world of
things we can gate on and the world of things we can't (3rd party
libraries that aren't under openstack control). Taskflow is in-between
those worlds, so it allows us to explore how to make both of those worlds
work.

-Josh

On 1/3/14, 2:54 PM, Sean Dague s...@dague.net wrote:

On 01/03/2014 05:10 PM, Ivan Melnikov wrote:
 On 04.01.2014 01:29, Sean Dague wrote:
 On 01/03/2014 04:17 PM, Doug Hellmann wrote:
 [...]
 That's what made me think of the solution. But isn't setuptools in
fact
 telling us that somehow the versions of things we expected to have
 installed are no longer installed and so something *is* broken (even
if
 the versions of the installed libraries work together).

 It actually tells us that a human, somewhere, decided that their
 software did not work with some combination of other software, and that
 we are no longer able to correct their mistaken assumptions.
 [...]

 But sometimes humans are not wrong. For example, no released TaskFlow
 version really works with SQLAlchemy >= 0.8 -- that was fixed only
 recently (https://review.openstack.org/#/c/62661/).

 I consider requirements to be part of the code, so if they are too
 strict (or too broad), that must be fixed, in a usual opensource way:
 via filing bug report, suggesting patch and so on.

 Requirements should reflect reality, especially for libraries that are
 intended to be useful outside of OpenStack, too. For example, current
 TaskFlow requirement for SQLAlchemy is too strict, so we'll fix that,
 and release new version with that fix.

Well part of the problem is because of it being a dependency of a
dependency, our global requirements process couldn't explore whether or
not it functioned in a real environment.

Which brings us back to the idea of making this a project that works in
the integrated gate.

It's also kind of problematic that apparently the introduction of
taskflow actually caused a regression from Havana (which the distros
managed to make work with sqla 0.8 even though it wasn't in global
requirements, but this apparently would have broken).

And the next question is when is a sqla 0.9 compatible version going to
be out there? Because we can't even explore allowing openstack to use
sqla 0.9 until taskflow does, again because it's a blocking requirement.

It's exactly these kind of lock step problems that we avoid with the
rest of openstack by making it an integrated gate with global
requirements sync. But that we can't really handle with the library model.

   -Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 upgrade

2014-01-03 Thread Joshua Harlow
So another idea that was talked about on IRC.

Taskflow exposes entrypoints for these storage backends (like your storage
callback/interface idea).

It currently provides 3 such 'default' backends [sqlalchemy, file/dir
based, in-memory -- mainly for testing].

A 4th one is in progress for icehouse (zookeeper based).

These backends are used to store intermediary 'flow/task' state (allowing
the taskflow engine to resume if an app crashes/stopped while doing stuff).

A couple ideas about this, since its already pluggable.

Split the sqlalchemy backend into a 'taskflow-sa' repo/new package. For those
projects that want to use this backend, they include it (still means the
'taskflow-sa' package has a dependency on sqlalchemy of some version).
Another idea is to move the sqlalchemy dependency currently in taskflow's
requirements.txt into taskflow's test-requirements.txt, and for those projects
that want to use the sqlalchemy backend, they include the sqlalchemy
version themselves in their requirements.txt (taskflow keeps it in
test-requirements.txt so that it can run its unit/integration tests,
making sure the backend still works).
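
Either way the consuming side would look about the same; here's a rough sketch
of loading such a pluggable backend through an entry point (the namespace,
config keys and helper function are invented for illustration -- this is not
taskflow's actual API):

from stevedore import driver

def load_backend(name, conf):
    # Only the selected plugin's module gets imported, so the sqlalchemy
    # backend (and its sqlalchemy dependency) is only pulled in when a
    # project actually asks for it.
    mgr = driver.DriverManager(
        namespace='taskflow.persistence',  # assumed entry point group
        name=name,
        invoke_on_load=True,
        invoke_kwds={'conf': conf},
    )
    return mgr.driver

# e.g. backend = load_backend('sqlalchemy', {'connection': 'mysql://...'})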

I'm not really sure which is the best, none seem super-great, but
sometimes u just work with what u got :-P

On 1/3/14, 3:32 PM, Sean Dague s...@dague.net wrote:

On 01/03/2014 06:14 PM, Joshua Harlow wrote:
 Since the library model is what most everyone else uses outside of
 openstack (I assume?) what can we do to get there so that this model
works
 as well?

 Expanding dependencies recursively seems like it could help? This could
 then detect transitive dependency issues (and doesn't seem so hard to
do).

We actually talked about having pip be able to help us here with a
--requirements-override piece of function. dstufft seemed positive on
the concept.

 I like the idea of 'gate on all the things' (that I've heard come up
 before) but I don't know if its possible? If we hooked into the pypi
 upload 'stream' it would seem like we could automatically trigger
 openstack builds when a known dependency (or dependency of a
 dependency...) is uploaded (maybe?). That would be pretty neat.
 
 In general it really seems like having more libraries, not less is ideal
 (making us solve this issue whether we want to or not really). As
 libraries can then be used outside of openstack (taskflow is already
being
 used elsewhere also), libraries create well-defined apis and boundaries
 between programs (...). I know they also create this dependency hell
 problem (anvil has been hitting these same issues for a while to). But I
 think we can figure out a solution that fits both worlds, the world of
 things we can gate on and the world of things we can't (3rd party
 libraries that aren't under openstack control). Taskflow is in-between
 those worlds, so it allows us to explore how to make both of those
worlds
 work.

In general I agree however, if you get between OpenStack and SQLA
you've now touched the 3rd rail. Because we have deep experience about
how bad the compatibility between versions can be, and we can't be
beholden to another project about our SQLA upgrade timeframe.

So I think that generalities aside, if are a library, and use SQLA, we
probably need to really think about putting it in the integrated gate.

Because otherwise what we are saying is taskflow is completely dictating
the SQLA version in the Icehouse release of OpenStack. And that's the
wrong place for that decision to be.

If taskflow worked with a storage callback mechanism, and got a storage
interface from the program that was calling it, then things would be
much different. But because it's going straight to the database and
managing tables directly, through a known unstable library, that
OpenStack itself needs some control over, it's definitely a different
case.

   -Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 upgrade

2014-01-04 Thread Joshua Harlow
Such a bad state seems like FUD.

Taskflow was just syncing its requirements with the same requirements that 
everyone else is... Those global requirements have <= 0.7.99 in them as we speak 
(which is why taskflow picked up that version). 

The issue here will be worked through and fixed; it won't be the last time a 
library that is used in various projects causes dependency issues, so we are 
working through this process as we learn what works best: either don't sync with 
that requirements file to attempt to use the same version as the integrating 
projects, or become more integrated with the gate... or a few other solutions 
that have been discussed...

A new release of taskflow will happen early next week with an adjusted sqlalchemy 
upper bound.

Sent from my really tiny device...

 On Jan 4, 2014, at 7:27 AM, Thomas Goirand z...@debian.org wrote:
 
 On 01/04/2014 06:10 AM, Ivan Melnikov wrote:
 On 04.01.2014 01:29, Sean Dague wrote:
 On 01/03/2014 04:17 PM, Doug Hellmann wrote:
 [...]
 That's what made me think of the solution. But isn't setuptools in fact
 telling us that somehow the versions of things we expected to have
 installed are no longer installed and so something *is* broken (even if
 the versions of the installed libraries work together).
 
 It actually tells us that a human, somewhere, decided that their
 software did not work with some combination of other software, and that
 we are no longer able to correct their mistaken assumptions.
 [...]
 
 But sometimes humans are not wrong. For example, no released TaskFlow
  version really works with SQLAlchemy >= 0.8 -- that was fixed only
 recently (https://review.openstack.org/#/c/62661/).
 
 What's wrong is to allow taskflow to be added to the global-requirements
 if it is in such a bad state, blocking such an important transition that
 has been needed for more than 6 months.
 
 Thomas
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 upgrade

2014-01-04 Thread Joshua Harlow
I was more referring to general dependency issues; sqlalchemy hopefully 
never again, but one never knows :P

Sent from my really tiny device...

 On Jan 4, 2014, at 8:40 AM, Thomas Goirand z...@debian.org wrote:
 
 On 01/05/2014 12:12 AM, Joshua Harlow wrote:
 it won't
 be the last time a library that is used in various projects
 causes dependency issues
 
 Please, tell me the opposite thing. Please tell me that this is the last
 time we're having a discussion about problems with SQLA 0.8. Please tell
 me that we're actually learning from these mistakes and that we'll see
 improvements...
 
 Thomas
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 upgrade

2014-01-05 Thread Joshua Harlow
With regards to the futures module, it should just work fine with the packaging 
of https://pypi.python.org/pypi/futures which is a backport of the 3.2 
concurrent.futures package to 2.6/2.7, so that's the package there.
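
For reference, a tiny self-contained example of what that backport provides
(the import path is the same one the 3.2+ standard library uses):

# Works on 2.6/2.7 once the 'futures' backport is installed; on 3.2+ the
# same import comes straight from the standard library.
from concurrent import futures

def square(x):
    return x * x

with futures.ThreadPoolExecutor(max_workers=4) as executor:
    print(list(executor.map(square, range(10))))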

Feel free to bug me on irc if u want any other help with dependencies, the 
entrypoint failures shouldn't happen (possibly due to this). If u need help 
just bug harlowja on irc or jump in #openstack-state-management where 
others can help too...

Sent from my really tiny device...

 On Jan 4, 2014, at 7:26 AM, Thomas Goirand z...@debian.org wrote:
 
 Sean,
 
 Before everything, I'd like to thank you for insisting in making the
 transition to SQLA 0.8.x.
 
 Since it has been uploaded to Sid, this SQLA <= 0.7.99 requirement has been without
 any doubt the biggest recurring pain in the butt with the packaging of
 OpenStack. Without people like you, insisting again and again, I would
 have lost hope that progress could happen in OpenStack! So thanks
 again, Sean.
 
 On 01/03/2014 11:26 PM, Sean Dague wrote:
 Given that sqla 0.9 just came out, I wanted to explore, again, what the
 state of the world was with sqla 0.8 (especially given that Ubuntu and
 Red Hat are both shipping 0.8 in their OpenStack bundles) -
 https://review.openstack.org/#/c/64831/
 
 The answer is not great. But more importantly, the answer is actually
 worse than the last time we checked, which I think gives this some
 urgency to move forward.
 
 For me, it's been urgent since the 6th of July...
 
 SQLA 0.8.2 was uploaded to Sid on the 6th of July. Since then, I've been
 bugging everyone on this list about it, explaining that I have no choice
 but to have the Debian packages support it (since I upload to Sid, and
 Sid has SQLA 0.8.x). It seems I still haven't been heard.
 
 Now, we're 6 months after that, and after the release of Havana which
 happened more than 3 months after this, and after everything was fixed
 in all core packages (the last one was heat, at the end of August). So,
 in January 2014, I'm still manually fixing most requirements.txt files, which
 still advertise support for only SQLA <= 0.7.99. I currently have
 fixes-SQLAlchemy-requirement.patch in Cinder, Neutron, and Nova (for
 Havana), just to allow it and stop the software from breaking merely because
 the requirement declares it unsupported, even though they work perfectly with
 SQLA 0.8. Some other projects have SQLAlchemy>=0.7.8,<=0.7.99 in their
 requirements.txt, but do not break as badly as these 3 just because of
 the bad declaration that doesn't reflect the reality (that's the case
 for Heat, Keystone, and Glance, for example).
 
 Something is going extremely wrong here. And seeing the actual result of
 the SQLA transition, I am really leaning toward thinking we have a
 process problem. What I believe is wrong is that instead of having
 project-wide decisions imposing some constraints, we have leaf packages
 that do. Until absolutely all of OpenStack is ready and fixed,
 no constraint gets applied. This is the thing that must change.
 
 It shouldn't be this way. It should be from top to bottom. While I do
 understand that we do need the gate to be able to continue working with
 all projects at any given time, we still must find a solution so that
 this kind of 6 months transition never happens again. This really goes
 against the culture that we have inside OpenStack, and our common belief
 that things must be able to move fast.
 
 On 01/04/2014 04:13 AM, Sean Dague wrote:
 Because of entry points any library that specifies any dependencies
 that OpenStack components specify, at incompatible levels, means that
 library effectively puts a hold on the rest of OpenStack and prevents
 it from being able to move forward.
 
 That's exactly the kind of *very bad* habits that needs to stop in
 OpenStack.
 
 On 01/04/2014 04:13 AM, Sean Dague wrote:
 The only other option is that libraries we own (stackforge / oslo),
 for condition to be included in global- requirements, *can never*
 specify a maximum version of any dependency (and I really mean
 never), and can never specify a minimum greater than current global
 requirements.
 
 PLEASE !!! Let's do this !!! :)
 
 On 01/04/2014 05:29 AM, Sean Dague wrote:
 It actually tells us that a human, somewhere, decided that their
 software did not work with some combination of other software
 
 Often it's even worse. Sometimes, a human decides that, just in case, the
 software will declare itself incompatible with some non-existent future
 version of another piece of software, even if there's no way to know.
 
 We're even more into sci-fi when we see stuff like:
 
 pbr>=0.5.21,<1.0
 
 Monty, did you decide you would release 1.0 with lots of backward
 incompatibility? Has the topic been raised and I missed it??? I'm
 convinced this isn't the case (and let's pretend it isn't, just until
 the end of this message).
 
 So, how does one know that the thing he's using in PBR will be the thing
 that will cause trouble later on? For a version which hasn't been
 released 

Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 upgrade

2014-01-05 Thread Joshua Harlow
Agreed, we are going to expand it and work on figuring out how to test against 
multiple versions. It does work with 0.8 and it seems even like 0.9 works fine 
also. But 'all compatible' also means I can't guarantee 0.10 (if it comes out) 
will work, since afaik semver means sqlalchemy could still break things while 
it's < 1.0. Anyone got a time machine I can use to check the future ;-)

It seems simple to have variations of venvs (or something similar) in the 
taskflow tox.ini that specify the different 0.7, 0.8, 0.9 versions; when 
sqlalchemy 1.0 comes out then this should become a non-issue (hopefully). I will 
bug the infra folks to see what can be done here (hopefully this is as simple 
as it sounds). 

Sent from my really tiny device...

 On Jan 5, 2014, at 8:22 AM, Clint Byrum cl...@fewbar.com wrote:
 
 I've skimmed the rest of the thread and not seen something mentioned
 that seems like it matters a lot. If I missed this suggestion buried
 deep in the ensuing discussion, I apologize for that.
 
 Since TaskFlow wants to be generally consumable and not only driven as
 an OpenStack component, it should not be following global requirements.
 It should actually expand its SQL Alchemy compatibility to all supported
 versions of SQLAlchemy. Ideally it would have a gate for each major
 version of SQL Alchemy that upstream supports.
 
 Otherwise it will never even be able to work with any project that
 doesn't share its SQL Alchemy version pin.
 
 Excerpts from Joshua Harlow's message of 2014-01-03 08:37:17 -0800:
 So taskflow was tested with the version of sqlalchemy that was available and 
 in the requirements at the time of its 0.1 release (taskflow syncs it's 
 requirements from the same global requirements). From what I remember this 
 is the same requirement that everyone else is bound to:
 
  SQLAlchemy>=0.7.8,<=0.7.99
 
 I can unpin it if this is desired (the openstack requirements repo has the 
 same version restriction). What would be recommended here? As more code 
 moves to pypi reusable libraries (oslo.db when it arrives comes to mind) I 
 think this will be hit more often. Let's come up with a good strategy to 
 follow.
 
 Thoughts??
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Gantt] Looking for some answers...

2014-01-06 Thread Joshua Harlow
Was the history filtered out using something like 
http://git-scm.com/docs/git-filter-branch??

There seems to be a lot of commit history that isn't related to gantt files 
(baremetal…)

Was the plan to figure out which files to keep, then clean up that commit 
history?

I wouldn't expect 
https://github.com/openstack/gantt/commit/ff3c9afa35e646b72e94ba0c020ee37544e0e9dc
 (and others) to show up if those histories were removed…

From: Boris Pavlovic bpavlo...@mirantis.commailto:bpavlo...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Monday, January 6, 2014 at 12:08 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Gantt] Looking for some answers...

Russell,

It should be pretty easy to do this in gantt though.  Right now I would
probably do it against the current scheduler and then we'll port it
over.  I don't think we should do major work only in gantt until we're
ready to deprecate the current scheduler.


That make sense.

In a couple of days we will try to make some real benchmarks using Rally, to 
ensure that the no-db-scheduler works better than the previous one. So I hope we 
will get interest from the community.


It's a new repo created with the history filtered out.  The history
was only maintained for code kept.  That seems pretty ideal to me.

Not sure that nova history is the right history for the scheduler (why not Cinder, 
for example? why exactly Nova?).
Imho a Scheduler-as-a-Service for all projects and the Nova Scheduler are different things.



Best regards,
Boris Pavlovic



On Mon, Jan 6, 2014 at 11:59 PM, Russell Bryant 
rbry...@redhat.commailto:rbry...@redhat.com wrote:
On 01/06/2014 02:52 PM, Boris Pavlovic wrote:
 Vish,

 and as I understand it the hope will be to do the no-db-scheduler blueprint.
 There was quite a bit of debate on whether to do the no-db-scheduler stuff
 before or after the forklift and I think the consensus was to do the
 forklift
 first.

 The current Nova scheduler is so deeply bound to nova data models that it is
 useless for every other project.

 So I don't think that a forklift of the Nova scheduler in such a state is useful
 for any other project.

It should be pretty easy to do this in gantt though.  Right now I would
probably do it against the current scheduler and then we'll port it
over.  I don't think we should do major work only in gantt until we're
ready to deprecate the current scheduler.

--
Russell Bryant


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for tomorrow's meeting at 2000 UTC

2014-01-08 Thread Joshua Harlow
Hi all,


The [state-management] project team holds a weekly meeting in 
#openstack-meeting on Thursdays, 2000 UTC. The next meeting is tomorrow, 
2014-01-09!!!


As usual, everyone is welcome :-)


Link: https://wiki.openstack.org/wiki/Meetings/StateManagement

Taskflow: https://wiki.openstack.org/TaskFlow


## Agenda (30-60 mins):


- Discuss any action items from last meeting.

- Discuss 0.1.2 release and reviews (and sqlalchemy issue/adjustments/testing).

- Discuss 0.2.0 release and timeline and reviews.

- Discuss joining oslo? (yah, nah?).

- Discuss integration progress, help needed, other...

- Discuss ongoing checkpointing, and where checkpoints should live (flow, 
engine, elsewhere?).

- Discuss scoping review/idea.

- Discuss about any other potential new use-cases for said library.

- Discuss about any other ideas, questions and answers (and more!).


Any other topics are welcome :-)


See you all soon!


--


Joshua Harlow


It's openstack, relax... | harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-09 Thread Joshua Harlow
And my ring, my precious.

Count me in!

On 1/9/14, 6:06 AM, Julien Danjou jul...@danjou.info wrote:

On Thu, Jan 09 2014, Sean Dague wrote:

 I'm hopefully that if we can get everyone looking at this one a single
day,
 we can start to dislodge the log jam that exists.

I will help you bear this burden, Sean Dague, for as long as it is
yours to bear. You have my sword.

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] dependency analysis for graduating more oslo libraries

2014-01-15 Thread Joshua Harlow
Very nice write up +1

A question: many modules are pulling in oslo.log (as seen from the dependency 
graph), which itself then pulls in oslo.config. Is the plan to just have all 
these modules use the regular python logging and have oslo.log be a 
plugin/formatter/adapter to python logging?

Likely people that would use those libraries and that aren't tied to openstack 
want to provide/configure their own logging (which is normally done via python 
logging configuration files and such...)
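
For illustration, the usual pattern for a standalone library that sticks to
stdlib logging (the application, not the library, decides handlers, levels and
formatting; the module-level names here are just made up):

import logging

LOG = logging.getLogger(__name__)
# Stay silent unless the consuming application configures logging itself
# (NullHandler is stdlib on 2.7+).
LOG.addHandler(logging.NullHandler())

def do_work():
    LOG.debug('doing work')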

A side question: what are all these libraries going to be named? Seems like we 
need a lot of creative library names (not just prefix everything with oslo) :-)

Sent from my really tiny device...

On Jan 14, 2014, at 11:53 AM, Doug Hellmann 
doug.hellm...@dreamhost.commailto:doug.hellm...@dreamhost.com wrote:

I've spent some time over the past day or two looking at the dependencies 
between modules in the oslo incubator, trying to balance the desire to have a 
small number of libraries with themes that make sense and the need to eliminate 
circular dependencies.

The results of all of this are posted to 
https://wiki.openstack.org/wiki/Oslo/GraduationStatus, where I have grouped the 
modules together into proposed libraries.

The dependencies between the libraries can be seen in several graphs I've 
prepared and posted to https://wiki.openstack.org/wiki/Oslo/Dependencies

Once we settle on the list of libraries, the next step is to look at the lowest 
level libraries to see what steps need to be taken before they can be released. 
I plan to start on that after the icehouse-2 deadline.

Oslo team (and other interested parties), please take a look at the two wiki 
pages above and provide any feedback you have here on this ML thread.

Thanks,
Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for tomorrow's meeting at 2000 UTC

2014-01-15 Thread Joshua Harlow
Hi all,


The [state-management] project team holds a weekly meeting in 
#openstack-meeting on Thursdays, 2000 UTC. The next meeting is tomorrow, 
2014-01-16!!!


As usual, everyone is welcome :-)


Link: https://wiki.openstack.org/wiki/Meetings/StateManagement

Taskflow: https://wiki.openstack.org/TaskFlow


## Agenda (30-60 mins):


- Discuss any action items from last meeting.

- Integration help, reviews, comments, feedback (or other needed) for the 
various integrating projects

- Checkpoint work part #2 (questions, comments, feedback...)

- Scoping work part #2 (questions, comments, feedback...)

- Any other ongoing taskflow work (questions, comments, feedback...)

- Discuss about any other potential new use-cases for said library.

- Discuss about any other ideas, questions and answers (and more!).


Any other topics are welcome :-)


See you all soon!


--


Joshua Harlow


It's openstack, relax... | harlo...@yahoo-inc.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] sqla 0.8 ... and sqla 0.9

2014-01-16 Thread Joshua Harlow
Also with:

https://review.openstack.org/#/c/66051/

On 1/16/14, 10:40 AM, Jeremy Stanley fu...@yuggoth.org wrote:

On 2014-01-12 07:27:11 -0500 (-0500), Sean Dague wrote:
 With the taskflow update, the only thing between upping our sqla
 requirement to <= 0.8.99 is pbr's requirements integration test
 getting a workaround for pip's behavior change (which will
 currently not install netifaces because it's not on pypi... also
 it's largely abandoned).

This should (hopefully) now be solved as of my patch to pypi-mirror
yesterday. If it's not, please let me know.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [State-Management] Proposal to add Changbin Liu to taskflow-core

2014-01-16 Thread Joshua Harlow
Greetings all stackers,

I propose that we add Changbin Liu[1] to the taskflow-core team[2].

Changbin has been actively contributing to taskflow for a while now, both in
helping develop code and helping with the review load. He has provided
quality reviews and is doing an awesome job with the various taskflow concepts
and helping make taskflow the best library it can be! He's even one of the
co-authors of a paper[3] which has helped inspire parts of taskflow.

Overall I think he would make a great addition to the core review team.

Please respond with +1/-1.

Thanks much!

[1] https://launchpad.net/~changbl
[2] https://wiki.openstack.org/wiki/TaskFlow/CoreTeam
[3] http://www2.research.att.com/~kobus/docs/tropic.pdf
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for tomorrow's meeting at 2000 UTC

2014-01-22 Thread Joshua Harlow
Hi all,


The [state-management] project team holds a weekly meeting in 
#openstack-meeting on Thursdays, 2000 UTC. The next meeting is tomorrow, 
2014-01-23!!!


As usual, everyone is welcome :-)


Link: https://wiki.openstack.org/wiki/Meetings/StateManagement

Taskflow: https://wiki.openstack.org/TaskFlow


## Agenda (30-60 mins):


- Discuss any action items from last meeting.

- Continue talk about oslo membership 
[https://etherpad.openstack.org/p/oslo-taskflow].

- Continue discussion + explanation of/on checkpoints and scoping.

- Discuss when 0.2 can be released, what's missing, what needs review...

- Any taskflow integration help needed, reviews, requests for comments, anything 
we can do to help...

- Discuss about any other potential new use-cases for said library.

- Discuss about any other ideas, questions and answers (and more!).


Any other topics are welcome :-)


See you all soon!


--


Joshua Harlow


It's openstack, relax... | harlo...@yahoo-inc.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] memoizer aka cache

2014-01-23 Thread Joshua Harlow
So to me memoizing is typically a premature optimization in a lot of cases. And 
doing it incorrectly leads to overfilling the python process's memory (your 
global dict will have objects in it that can't be garbage collected, and with 
enough keys+values being stored it will act just like a memory leak; basically it 
acts as a new GC root object in a way) or to more cache invalidation 
races/inconsistencies than just recomputing the initial value...

Overall though there are a few caching libraries I've seen being used, any of 
which could be used for memoization.

- https://github.com/openstack/oslo-incubator/tree/master/openstack/common/cache
- 
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/memorycache.py
- dogpile cache @ https://pypi.python.org/pypi/dogpile.cache
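
As a rough example of what memoization through one of these looks like --
dogpile.cache here, with an in-memory backend and a short expiration (the
"expensive" function is obviously just a stand-in):

import time

from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.memory',  # a real deployment would likely use memcached
    expiration_time=10,      # cached values are recomputed after 10 seconds
)

@region.cache_on_arguments()
def my_thing_method(some_args):
    time.sleep(1)  # stand-in for the expensive work
    return some_args * 2

print(my_thing_method(21))  # slow the first time
print(my_thing_method(21))  # served from the cache within the 10s window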

I am personally wary of using them for memoization; for what expensive method 
calls do u see the complexity of this being useful? I didn't think that many 
method calls being done in openstack warranted the complexity added by doing 
this (premature optimization is the root of all evil...). Do u have data 
showing where it would be applicable/beneficial?

Sent from my really tiny device...

 On Jan 23, 2014, at 8:19 AM, Shawn Hartsock harts...@acm.org wrote:
 
 I would like to have us adopt a memoizing caching library of some kind
 for use with OpenStack projects. I have no strong preference at this
 time and I would like suggestions on what to use.
 
 I have seen a number of patches where people have begun to implement
 their own caches in dictionaries. This typically confuses the code and
 mixes issues of correctness and performance in code.
 
 Here's an example:
 
 We start with:
 
 def my_thing_method(some_args):
     # do expensive work
     return value
 
 ... but a performance problem is detected... maybe the method is
 called 15 times in 10 seconds but then not again for 5 minutes and the
 return value can only logically change every minute or two... so we
 end up with ...
 
 _GLOBAL_THING_CACHE = {}
 
 def my_thing_method(some_args):
     key = key_from(some_args)
     if key in _GLOBAL_THING_CACHE:
         return _GLOBAL_THING_CACHE[key]
     else:
         # do expensive work
         _GLOBAL_THING_CACHE[key] = value
         return value
 
 ... which is all well and good... but now as a maintenance programmer
 I need to comprehend the cache mechanism, when cached values are
 invalidated, and if I need to debug the do expensive work part I
 need to tease out some test that prevents the cache from being hit.
 Plus I've introduced a new global variable. We love globals right?
 
 I would like us to be able to say:
 
 @memoize(seconds=10)
 def my_thing_method(some_args):
     # do expensive work
     return value
 
 ... where we're clearly addressing the performance issue by
 introducing a cache and limiting it's possible impact to 10 seconds
 which allows for the idea that do expensive work has network calls
 to systems that may change state outside of this Python process.
 
 I'd like to see this done because I would like to have a place to
 point developers to during reviews... to say: use common/memoizer or
 use Bob's awesome memoizer because Bob has worked out all the cache
 problems already and you can just use it instead of worrying about
 introducing new bugs by building your own cache.
 
 Does this make sense? I'd love to contribute something... but I wanted
 to understand why this state of affairs has persisted for a number of
 years... is there something I'm missing?
 
 -- 
 # Shawn.Hartsock - twitter: @hartsock - plus.google.com/+ShawnHartsock
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] memoizer aka cache

2014-01-23 Thread Joshua Harlow
Sure, not cancelling cases of conscious usage, but we need to be careful
here and make sure it's really appropriate. Caching and invalidation
techniques are right up there in terms of problems that appear easy and
simple to initially do/use, but doing it correctly is really really hard
(especially at any type of scale).

-Josh

On 1/23/14, 1:35 PM, Renat Akhmerov rakhme...@mirantis.com wrote:


On 23 Jan 2014, at 08:41, Joshua Harlow harlo...@yahoo-inc.com wrote:

 So to me memoizing is typically a premature optimization in a lot of
cases. And doing it incorrectly leads to overfilling the python
processes memory (your global dict will have objects in it that can't be
garbage collected, and with enough keys+values being stored will act
just like a memory leak; basically it acts as a new GC root object in a
way) or more cache invalidation races/inconsistencies than just
recomputing the initial value…

I agree with your concerns here. At the same time, I think this thinking
shouldn't cancel cases of conscious usage of caching technics. A decent
cache implementation would help to solve lots of performance problems
(which eventually becomes a concern for any project).

 Overall though there are a few caching libraries I've seen being used,
any of which could be used for memoization.
 
 - 
https://github.com/openstack/oslo-incubator/tree/master/openstack/common/
cache
 - 
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/
memorycache.py

I looked at the code. I have lots of questions about the implementation (like
cache eviction policies, whether or not it works well with green threads,
but I think it's a subject for a separate discussion though). Could you
please share your experience of using it? Were there specific problems
that you could point to? May be they are already described somewhere?

 - dogpile cache @ https://pypi.python.org/pypi/dogpile.cache

This one looks really interesting in terms of claimed feature set. It
seems to be compatible with Python 2.7, not sure about 2.6. As above, it
would be cool you told about your experience with it.


 I am personally weary of using them for memoization, what expensive
method calls do u see the complexity of this being useful? I didn't
think that many method calls being done in openstack warranted the
complexity added by doing this (premature optimization is the root of
all evil...). Do u have data showing where it would be
applicable/beneficial?

I believe there's a great deal of use cases like caching db objects or
more generally caching any heavy objects involving interprocess
communication. For instance, API clients may be caching objects that are
known to be immutable on the server side.


 
 Sent from my really tiny device...
 
 On Jan 23, 2014, at 8:19 AM, Shawn Hartsock harts...@acm.org wrote:
 
 I would like to have us adopt a memoizing caching library of some kind
 for use with OpenStack projects. I have no strong preference at this
 time and I would like suggestions on what to use.
 
 I have seen a number of patches where people have begun to implement
 their own caches in dictionaries. This typically confuses the code and
 mixes issues of correctness and performance in code.
 
 Here's an example:
 
 We start with:
 
 def my_thing_method(some_args):
   # do expensive work
   return value
 
 ... but a performance problem is detected... maybe the method is
 called 15 times in 10 seconds but then not again for 5 minutes and the
 return value can only logically change every minute or two... so we
 end up with ...
 
 _GLOBAL_THING_CACHE = {}
 
 def my_thing_method(some_args):
     key = key_from(some_args)
     if key in _GLOBAL_THING_CACHE:
         return _GLOBAL_THING_CACHE[key]
     else:
         # do expensive work
         _GLOBAL_THING_CACHE[key] = value
         return value
 
 ... which is all well and good... but now as a maintenance programmer
 I need to comprehend the cache mechanism, when cached values are
 invalidated, and if I need to debug the do expensive work part I
 need to tease out some test that prevents the cache from being hit.
 Plus I've introduced a new global variable. We love globals right?
 
 I would like us to be able to say:
 
 @memoize(seconds=10)
 def my_thing_method(some_args):
   # do expensive work
   return value
 
 ... where we're clearly addressing the performance issue by
 introducing a cache and limiting it's possible impact to 10 seconds
 which allows for the idea that do expensive work has network calls
 to systems that may change state outside of this Python process.
 
 I'd like to see this done because I would like to have a place to
 point developers to during reviews... to say: use common/memoizer or
 use Bob's awesome memoizer because Bob has worked out all the cache
 problems already and you can just use it instead of worrying about
 introducing new bugs by building your own cache.
 
 Does this make sense? I'd love to contribute something... but I wanted
 to understand why this state

Re: [openstack-dev] [oslo] memoizer aka cache

2014-01-23 Thread Joshua Harlow
Same here; I've done pretty big memcache (and similar technologies) scale 
caching  invalidations at Y! before so here to help…

From: Morgan Fainberg m...@metacloud.commailto:m...@metacloud.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, January 23, 2014 at 4:17 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [oslo] memoizer aka cache

Yes! There is a reason Keystone has a very small footprint of 
caching/invalidation done so far.  It really needs to be correct when it comes 
to proper invalidation logic.  I am happy to offer some help in determining 
logic for caching/invalidation with Dogpile.cache in mind as we get it into 
oslo and available for all to use.

--Morgan



On Thu, Jan 23, 2014 at 2:54 PM, Joshua Harlow 
harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com wrote:
Sure, no cancelling cases of conscious usage, but we need to be careful
here and make sure its really appropriate. Caching and invalidation
techniques are right up there in terms of problems that appear easy and
simple to initially do/use, but doing it correctly is really really hard
(especially at any type of scale).

-Josh

On 1/23/14, 1:35 PM, Renat Akhmerov 
rakhme...@mirantis.commailto:rakhme...@mirantis.com wrote:


On 23 Jan 2014, at 08:41, Joshua Harlow 
harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com wrote:

 So to me memoizing is typically a premature optimization in a lot of
cases. And doing it incorrectly leads to overfilling the python
processes memory (your global dict will have objects in it that can't be
garbage collected, and with enough keys+values being stored will act
just like a memory leak; basically it acts as a new GC root object in a
way) or more cache invalidation races/inconsistencies than just
recomputing the initial value…

I agree with your concerns here. At the same time, I think this thinking
shouldn't cancel cases of conscious usage of caching technics. A decent
cache implementation would help to solve lots of performance problems
(which eventually becomes a concern for any project).

 Overall though there are a few caching libraries I've seen being used,
any of which could be used for memoization.

 -
https://github.com/openstack/oslo-incubator/tree/master/openstack/common/
cache
 -
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/
memorycache.py

I looked at the code. I have lots of question to the implementation (like
cache eviction policies, whether or not it works well with green threads,
but I think it's a subject for a separate discussion though). Could you
please share your experience of using it? Were there specific problems
that you could point to? May be they are already described somewhere?

 - dogpile cache @ https://pypi.python.org/pypi/dogpile.cache

This one looks really interesting in terms of claimed feature set. It
seems to be compatible with Python 2.7, not sure about 2.6. As above, it
would be cool you told about your experience with it.


 I am personally weary of using them for memoization, what expensive
method calls do u see the complexity of this being useful? I didn't
think that many method calls being done in openstack warranted the
complexity added by doing this (premature optimization is the root of
all evil...). Do u have data showing where it would be
applicable/beneficial?

I believe there's a great deal of use cases like caching db objects or
more generally caching any heavy objects involving interprocess
communication. For instance, API clients may be caching objects that are
known to be immutable on the server side.



 Sent from my really tiny device...

 On Jan 23, 2014, at 8:19 AM, Shawn Hartsock 
 harts...@acm.orgmailto:harts...@acm.org wrote:

 I would like to have us adopt a memoizing caching library of some kind
 for use with OpenStack projects. I have no strong preference at this
 time and I would like suggestions on what to use.

 I have seen a number of patches where people have begun to implement
 their own caches in dictionaries. This typically confuses the code and
 mixes issues of correctness and performance in code.

 Here's an example:

 We start with:

 def my_thing_method(some_args):
   # do expensive work
   return value

 ... but a performance problem is detected... maybe the method is
 called 15 times in 10 seconds but then not again for 5 minutes and the
 return value can only logically change every minute or two... so we
 end up with ...

 _GLOBAL_THING_CACHE = {}

 def my_thing_method(some_args):
     key = key_from(some_args)
     if key in _GLOBAL_THING_CACHE:
         return _GLOBAL_THING_CACHE[key]
     else:
         # do expensive work
         _GLOBAL_THING_CACHE[key] = value
         return value

 ... which is all well and good... but now as a maintenance programmer
 I

Re: [openstack-dev] [Ironic] File Injection (and the lack thereof)

2014-01-24 Thread Joshua Harlow
Cloud-init 0.7.5 (not yet released) will have the ability to read from an
ec2-metadata server using SSL.

In a recent change I did we now use requests which correctly does SSL for
the ec2-metadata/ec2-userdata reading.

- http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/revision/910

The ssl-certs that it will use by default (if not provided) will be looked
for in the following locations:

- /var/lib/cloud/data/ssl
   - cert.pem
   - key
- /var/lib/cloud/instance/data/ssl
   - cert.pem
   - key
- ... Other custom paths (typically datasource dependent)

So I think in cloud-init 0.7.5 this support will be improved, and as
long as there is a supporting ssl ec2 metadata endpoint then this should
all work out fine...

-Josh

On 1/24/14, 11:35 AM, Clint Byrum cl...@fewbar.com wrote:

Excerpts from Devananda van der Veen's message of 2014-01-24 06:15:12
-0800:
 In going through the bug list, I spotted this one and would like to
discuss
 it:
 
 can't disable file injection for bare metal
 https://bugs.launchpad.net/ironic/+bug/1178103
 
 There's a #TODO in Ironic's PXE driver to *add* support for file
injection,
 but I don't think we should do that. For the various reasons that Robert
 raised a while ago (
 
http://lists.openstack.org/pipermail/openstack-dev/2013-May/008728.html),
 file injection for Ironic instances is neither scalable nor secure. I'd
 just as soon leave support for it completely out.
 
 However, Michael raised an interesting counter-point (
 http://lists.openstack.org/pipermail/openstack-dev/2013-May/008735.html)
 that some deployments may not be able to use cloud-init due to their
 security policy.
 

I'm not sure how careful we are about security while copying the image.
Given that we currently just use tftp and iSCSI, it seems like putting
another requirement on that for security (user-data, network config,
etc) is like pushing the throttle forward on the Titanic.

I'd much rather see cloud-init/ec2-metadata made to work better than
see us over complicate an already haphazard process with per-node
customization. Perhaps We could make EC2 metadata work with SSL and bake
CA certs into the images?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Ironic] File Injection (and the lack thereof)

2014-01-24 Thread Joshua Harlow
Also just to note; file-injection seems unneeded when cloud-init can use
this:

http://cloudinit.readthedocs.org/en/latest/topics/examples.html#writing-out
-arbitrary-files

That I believe is in most modern versions of cloud-init (forgot when I
implemented that).

Just FYI :)

-Josh

On 1/24/14, 3:31 PM, Robert Collins robe...@robertcollins.net wrote:

On 25 January 2014 03:15, Devananda van der Veen
devananda@gmail.com wrote:
 In going through the bug list, I spotted this one and would like to
discuss
 it:

 can't disable file injection for bare metal
 https://bugs.launchpad.net/ironic/+bug/1178103

 There's a #TODO in Ironic's PXE driver to *add* support for file
injection,
 but I don't think we should do that. For the various reasons that Robert
 raised a while ago
 
(http://lists.openstack.org/pipermail/openstack-dev/2013-May/008728.html)
,
 file injection for Ironic instances is neither scalable nor secure. I'd
just
 as soon leave support for it completely out.

 However, Michael raised an interesting counter-point
 
(http://lists.openstack.org/pipermail/openstack-dev/2013-May/008735.html)
 that some deployments may not be able to use cloud-init due to their
 security policy.

If they can't use cloud-init, they probably can't PXE deploy either,
because today, both have the same security characteristics.

 As we don't have support for config drives in Ironic yet, and we won't
until
 there is a way to control either virtual media or network volumes on
ironic
 nodes. So, I'd like to ask -- do folks still feel that we need to
support
 file injection?

Unless the network volume is out of band secured/verifiable, it will
be equivalent to cloud-init and thus fail this security policy.

I would use SSL metadata - yay joshuah - and consider that sufficient
until we have a specific security policy in front of us that we can
review, and see *all* the holes that we'll have, rather than
cherry-picking issues: what passes such a policy for nova-KVM is likely
not sufficient for ironic.

-Rob



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] Fwd: [TripleO] mid-cycle meetup?

2014-01-24 Thread Joshua Harlow
Yahoo! is right in the middle of sunnyvale; and probably has enough free space 
to handle all u folks (some rooms are quite big really).

I can talk with some folks here about hosting all of this here if that’s 
desired (depending on how big).

-Josh

From: Roman Alekseenkov 
ralekseen...@mirantis.commailto:ralekseen...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Friday, January 24, 2014 at 5:51 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Fwd: [TripleO] mid-cycle meetup?

Rob,

We would definitely come to TripleO meetup if it gets scheduled. That would be 
3-4 people from Mirantis.

We were also thinking about gathering a larger crowd at some point, with people 
from different deployment initiatives - TripleO, Crowbar, Fuel, Compass, etc to 
see how we can better align and reduce duplication of efforts.

Thanks,
Roman


On Fri, Jan 24, 2014 at 2:03 PM, Robert Collins 
robe...@robertcollins.netmailto:robe...@robertcollins.net wrote:
This was meant to go to -dev, not -operators. Doh.


-- Forwarded message --
From: Robert Collins 
robe...@robertcollins.netmailto:robe...@robertcollins.net
Date: 24 January 2014 08:47
Subject: [TripleO] mid-cycle meetup?
To: 
openstack-operat...@lists.openstack.org


Hi, sorry for proposing this at *cough* the mid-way point [christmas
shutdown got in the way of internal acks...], but who would come if
there was a mid-cycle meetup? I'm thinking the HP sunnyvale office as
a venue.

-Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [TripleO] mid-cycle meetup?

2014-01-25 Thread Joshua Harlow
Sounds fine with me,

If needed let me know and I'll engage the right people here to help.

Sent from my really tiny device...

On Jan 25, 2014, at 2:27 AM, Chris Jones 
c...@tenshu.net wrote:

Hey

I suspect Rob was intending to host at HP's facility in Sunnyvale, but it's 
good to know we have options if we overflow available meeting rooms or suchlike 
:)

Cheers,
--
Chris Jones

On 25 Jan 2014, at 03:22, Joshua Harlow 
harlo...@yahoo-inc.com wrote:

Yahoo! is right in the middle of sunnyvale; and probably has enough free space 
to handle all u folks (some rooms are quite big really).

I can talk with some folks here about hosting all of this here if that’s 
desired (depending on how big).

-Josh

From: Roman Alekseenkov 
ralekseen...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, January 24, 2014 at 5:51 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Fwd: [TripleO] mid-cycle meetup?

Rob,

We would definitely come to TripleO meetup if it gets scheduled. That would be 
3-4 people from Mirantis.

We were also thinking about gathering a larger crowd at some point, with people 
from different deployment initiatives - TripleO, Crowbar, Fuel, Compass, etc to 
see how we can better align and reduce duplication of efforts.

Thanks,
Roman


On Fri, Jan 24, 2014 at 2:03 PM, Robert Collins 
robe...@robertcollins.net wrote:
This was meant to go to -dev, not -operators. Doh.


-- Forwarded message --
From: Robert Collins 
robe...@robertcollins.net
Date: 24 January 2014 08:47
Subject: [TripleO] mid-cycle meetup?
To: 
openstack-operat...@lists.openstack.org


Hi, sorry for proposing this at *cough* the mid-way point [christmas
shutdown got in the way of internal acks...], but who would come if
there was a mid-cycle meetup? I'm thinking the HP sunnyvale office as
a venue.

-Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Node groups and multi-node operations

2014-01-26 Thread Joshua Harlow
Doesn't nova already have logic for creating N virtual machines (similar to a 
group) in the same request? I thought it did (maybe it doesn't anymore in the 
v3 API); creating N bare metal machines seems like it would conform to that API?
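For reference, a rough sketch of that multi-create path as exposed through
python-novaclient (the credentials and IDs below are placeholders;
min_count/max_count are the parameters I believe carry the "N instances in
one request" semantics):

from novaclient.v1_1 import client

# placeholder credentials/endpoint -- not a real deployment
nova = client.Client('user', 'password', 'project',
                     'http://keystone.example.com:5000/v2.0')

image_id = 'image-uuid'   # hypothetical
flavor_id = '1'           # hypothetical

# one API request asking the scheduler for ten identical instances
nova.servers.create(name='batch-node', image=image_id, flavor=flavor_id,
                    min_count=10, max_count=10)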

Sent from my really tiny device...

 On Jan 22, 2014, at 4:50 PM, Devananda van der Veen 
 devananda@gmail.com wrote:
 
 So, a conversation came again up today around whether or not Ironic will, in 
 the future, support operations on groups of nodes. Some folks have expressed 
 a desire for Ironic to expose operations on groups of nodes; others want 
 Ironic to host the hardware-grouping data so that eg. Heat and Tuskar can 
 make more intelligent group-aware decisions or represent the groups in a UI. 
 Neither of these have an implementation in Ironic today... and we still need 
 to implement a host of other things before we start on this. FWIW, this 
 discussion is meant to stimulate thinking ahead to things we might address in 
 Juno, and aligning development along the way.
 
 There's also some refactoring / code cleanup which is going on and worth 
 mentioning because it touches the part of the code which this discussion 
 impacts. For our developers, here is additional context:
 * our TaskManager class supports locking 1 node atomically, but both the 
 driver API and our REST API only support operating on one node at a time. 
 AFAIK, nowhere in the code do we actually pass a group of nodes.
 * for historical reasons, our driver API requires both a TaskManager and a 
 Node object be passed to all methods. However, the TaskManager object 
 contains a reference to the Node(s) which it has acquired, so the node 
 parameter is redundant.
 * we've discussed cleaning this up, but I'd like to avoid refactoring the 
 same interfaces again when we go to add group-awareness.
 
 
 I'll try to summarize the different axis-of-concern around which the 
 discussion of node groups seem to converge...
 
 1: physical vs. logical grouping
 - Some hardware is logically, but not strictly physically, grouped. Eg, 1U 
 servers in the same rack. There is some grouping, such as failure domain, but 
 operations on discrete nodes are discreet. This grouping should be modeled 
 somewhere, and some times a user may wish to perform an operation on that 
 group. Is a higher layer (tuskar, heat, etc) sufficient? I think so.
 - Some hardware _is_ physically grouped. Eg, high-density cartridges which 
 share firmware state or a single management end-point, but are otherwise 
 discrete computing devices. This grouping must be modeled somewhere, and 
 certain operations can not be performed on one member without affecting all 
 members. Things will break if each node is treated independently.
 
 2: performance optimization
 - Some operations may be optimized if there is an awareness of concurrent 
 identical operations. Eg, deploy the same image to lots of nodes using 
 multicast or bittorrent. If Heat were to inform Ironic that this deploy is 
 part of a group, the optimization would be deterministic. If Heat does not 
 inform Ironic of this grouping, but Ironic infers it (eg, from timing of 
 requests for similar actions) then optimization is possible but 
 non-deterministic, and may be much harder to reason about or debug.
 
 3: APIs
 - Higher layers of OpenStack (eg, Heat) are expected to orchestrate discrete 
 resource units into a larger group operation. This is where the grouping 
 happens today, but already results in inefficiencies when performing 
 identical operations at scale. Ironic may be able to get around this by 
 coalescing adjacent requests for the same operation, but this would be 
 non-deterministic.
 - Moving group-awareness or group-operations into the lower layers (eg, 
 Ironic) looks like it will require non-trivial changes to Heat and Nova, and, 
 in my opinion, violates a layer-constraint that I would like to maintain. On 
 the other hand, we could avoid the challenges around coalescing. This might 
 be necessary to support physically-grouped hardware anyway, too.
 
 
 If Ironic coalesces requests, it could be done in either the ConductorManager 
 layer or in the drivers themselves. The difference would be whether our 
 internal driver API accepts one node or a set of nodes for each operation. 
 It'll also impact our locking model. Both of these are implementation details 
 that wouldn't affect other projects, but would affect our driver developers.
 
 Also, until Ironic models physically-grouped hardware relationships in some 
 internal way, we're going to have difficulty supporting that class of 
 hardware. Is that OK? What is the impact of not supporting such hardware? It 
 seems, at least today, to be pretty minimal.
 
 
 Discussion is welcome.
 
 -Devananda
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Node groups and multi-node operations

2014-01-26 Thread Joshua Harlow
Thanks, guess this is entering the realm of scheduling & group scheduling and 
how just the right level of information is needed to do efficient group 
scheduling in nova/ironic vs the new/upcoming gantt service. 

To me splitting it into N single requests isn't group scheduling but is just 
more of a batch processor to make things more parallel. To me it seems like 
gantt (or heat) or something else should know enough about the topology to 
identify where to schedule a request (or a group request) and then gantt/heat 
should pass enough location information to nova or ironic to let it know what 
was selected. Then nova or ironic can go about the dirty work of ensuring the 
instances were created reliably. Of course it gets complicated when multiple 
resources are involved; but nobody said it was going to be easy ;)

Sent from my really tiny device...

 On Jan 26, 2014, at 12:25 PM, Robert Collins robe...@robertcollins.net 
 wrote:
 
 On 27 January 2014 08:04, Joshua Harlow harlo...@yahoo-inc.com wrote:
 Doesn't nova already have logic for creating N virtual machines (similar to 
 a group) in the same request? I thought it did (maybe it doesn't anymore in 
 the v3 API), creating N bare metal machines seems like it would comply to 
 that api?
 
 It does, but it splits it into N concurrent single server requests so
 that they get spread out amongst different nova-compute processes -
 getting you parallelisation: and the code for single server requests
 is sufficiently complex that having a rarely used path that preserves
 the batch seems undesirable to me.
 
 Besides which, as Ironic also dispatches work to many different
 backend workers, sending a batch to Ironic would just result in it
 having to split it out as well.
 
 -Rob
 
 -- 
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Mistral + taskflow mini-meetup

2014-01-27 Thread Joshua Harlow
Hi all,

In order to encourage further discussion off IRC and more in public I'd like to 
share a etherpad that was worked on during a 'meetup' with some of the mistral 
folks and me.

https://etherpad.openstack.org/p/taskflow-mistral-jan-meetup

It was more of a (mini) in-person meetup but I thought it'd be good to gather 
some feedback there and let the more general audience see this and ask any 
questions/feedback/other...

Some of the key distinctions between taskflow/mistral we talked about and as 
well other various DSL aspects and some possible action items.

Feel free to ask questions,

-Josh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] log message translations

2014-01-27 Thread Joshua Harlow
+1 I've never understood this either personally.

From what I know, most (correct me if I am wrong) open source projects
don't translate log messages; so it seems odd to be the special snowflake
project(s).

Do people find this type of translation useful?

It'd be nice to know how many people really do so the benefit/drawbacks of
doing it can be evaluated by real usage data.
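For readers who haven't looked at the blueprint, a minimal sketch of what
separate translation domains would mean mechanically, using plain stdlib
gettext (the domain names and catalog location below are made up for
illustration):

import gettext
import logging

LOG = logging.getLogger(__name__)
locale_dir = '/usr/share/locale'  # placeholder catalog location

# one catalog per priority bucket, so translators can tackle
# error/warning strings before debug-level ones
_LE = gettext.translation('nova-log-error', locale_dir, fallback=True).gettext
_LI = gettext.translation('nova-log-info', locale_dir, fallback=True).gettext

vol_id = 'vol-0001'
LOG.error(_LE('Failed to attach volume %s'), vol_id)
LOG.info(_LI('Attached volume %s'), vol_id)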

-Josh

On 1/27/14, 9:58 AM, Daniel P. Berrange berra...@redhat.com wrote:

On Mon, Jan 27, 2014 at 12:42:28PM -0500, Doug Hellmann wrote:
 We have a blueprint open for separating translated log messages into
 different domains so the translation team can prioritize them
differently
 (focusing on errors and warnings before debug messages, for example)
[1].

 Feedback?

 [1]
 
https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain

IMHO we've created ourselves a problem we don't need to have in the first
place by trying to translate every single log message. It causes pain for
developers & vendors because debug logs from users can be in any language,
which the person receiving will often not be able to understand. It creates
pain for translators by giving them an insane amount of work to do, which
never ends since log message text is changed so often. Now we're creating
yet more pain & complexity by trying to produce multiple log domains to solve
a problem of having so many msgs to translate. I accept that some people will
like translated log messages, but I don't think this is a net win when you
look at the overall burden they're imposing.

Shouldn't we just say no to this burden and remove translation of all log
messages, except for those at WARN/ERROR level, which are likely to be seen
by administrators on a day-to-day basis? There are few enough of those that
we wouldn't need these extra translation domains. IMHO translating stuff
at DEBUG/INFO level is a waste of limited time & resources.

Regards,
Daniel
-- 
|: http://berrange.com  -o-
http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o-
http://virt-manager.org :|
|: http://autobuild.org   -o-
http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-
http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mistral + taskflow mini-meetup

2014-01-27 Thread Joshua Harlow
Just a note also:

Taskflow is in a way event-driven also: a workflow goes through various 
events and those events cause further actions (state-transitions, 
notifications, forward-progress).

I fully expect https://review.openstack.org/#/c/63155 (yes, not 
oslo.messaging, but someday when that library exists) to be more in line with your idea 
of event-driven.

To me u can model an event-driven system using an executor type (but not so 
much the other way around); but perhaps it is possible and I can't think of it 
right now.
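To make the executor/engine framing above a bit more concrete, here is a
minimal TaskFlow sketch (the task names are invented): the engine drives the
tasks and reacts to each state transition, whether it runs them serially,
via threads/greenthreads, or via remote workers.

from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow


class CallExternalSystem(task.Task):
    default_provides = 'request_id'

    def execute(self):
        # fire off the long-running external action
        return 'req-123'


class RecordResult(task.Task):
    def execute(self, request_id):
        print('external call finished: %s' % request_id)


flow = linear_flow.Flow('demo').add(CallExternalSystem(), RecordResult())
# the engine decides how the tasks actually run
engines.run(flow)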

In fact if u look at what guido is doing with tulip [1] u can see a way to 
connect events to executors/futures to events (slightly similar to taskflows 
engine+futures).

I'd really like mistral to get back on using taskflow and helping converge 
instead of diverge, so lets make it happen :-)

-Josh

[1] http://www.python.org/dev/peps/pep-3156/

From: Renat Akhmerov rakhme...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, January 27, 2014 at 12:31 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Mistral + taskflow mini-meetup

Josh, thanks for sharing this with the community. Just a couple of words as an 
addition to that..

The driver for this conversation is that TaskFlow library and Mistral service 
in many ways do similar things: task processing combined somehow (flow or 
workflow). However, there’s a number of differences in approaches that the two 
technologies follow. Initially, when Mistral’s development phase started about 
a couple of months ago the team was willing to use TaskFlow at implementation 
level. Basically, we can potentially represent Mistral tasks as TaskFlow tasks 
and use TaskFlow API to run them. One of the problems though is that TaskFlow 
tasks are basically python methods and hence run synchronously (once we get out 
of the method the task is considered finished) whereas Mistral is primarily 
designed to run asynchronous tasks (send a signal to an external system and 
start waiting for a result which may arrive minutes or hours later). Mistral is 
more like an event-driven system versus a traditional executor architecture. So now 
Mistral PoC is not using TaskFlow but moving forward we’d like to try to 
marry these two technologies to be more aligned in terms of APIs and feature 
sets.


Renat Akhmerov
@ Mirantis Inc.

On 27 Jan 2014, at 13:21, Joshua Harlow 
harlo...@yahoo-inc.com wrote:

Hi all,

In order to encourage further discussion off IRC and more in public I'd like to 
share a etherpad that was worked on during a 'meetup' with some of the mistral 
folks and me.

https://etherpad.openstack.org/p/taskflow-mistral-jan-meetup

It was more of a (mini) in-person meetup but I thought it'd be good to gather 
some feedback there and let the more general audience see this and ask any 
questions/feedback/other...

Some of the key distinctions between taskflow/mistral we talked about and as 
well other various DSL aspects and some possible action items.

Feel free to ask questions,

-Josh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hacking repair scripts

2014-01-27 Thread Joshua Harlow
Hi all,

I have had in my ~/bin for a while a little script that I finally got around to 
tuning up and I thought others might be interested in it/find it useful.

The concept is similar to https://pypi.python.org/pypi/autopep8 but does a 
really simple action to start.

As many of u know the import order is a hacking rule, but it's not always 
clear how to fix the order to be correct; so the tool I fixed/cleaned up 
reorganizes the imports to be in the right order.
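Not the actual script, but a minimal sketch of the idea: group import lines
into stdlib / third-party / project buckets (the module sets below are tiny,
illustrative stand-ins) and keep them alphabetical within each group.

import sys

STDLIB = {'os', 'sys', 'json', 'time', 're'}        # tiny illustrative set
PROJECT_PREFIXES = ('nova', 'cinder', 'taskflow')   # assumed project roots


def group_of(line):
    # 'import os' -> 'os', 'from nova import flags' -> 'nova'
    root = line.split()[1].split('.')[0].rstrip(',')
    if root in STDLIB:
        return 0
    if root.startswith(PROJECT_PREFIXES):
        return 2
    return 1


def reorder(import_lines):
    # sort alphabetically first, then by group; sorted() is stable so the
    # alphabetical order survives inside each group
    return sorted(sorted(import_lines), key=group_of)


if __name__ == '__main__':
    lines = [line.rstrip('\n') for line in sys.stdin if line.strip()]
    print('\n'.join(reorder(lines)))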

I initially hooked it into the hacking codebase @ 
https://review.openstack.org/#/c/68988

It could be something that could be built on to automate 'repairing' many of 
the hacking issues that are encountered (ones that are simple are the easiest, 
like imports).

Anyways,

Thought people might find it useful and it could become a part of automatic 
repairing/style adjustments in the future (similar to I guess what go has in 
`gofmt`).

-Josh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Joshua Harlow
I think depending on use-case it can be accomplished.

The steps we have been thinking at Y!

1. Take offline APIs & nova-compute (so new/existing VMs can't be
scheduled/modified) -- existing running VMs will not be affected.
2. Save/dump nova database.
3. Translate nova database entries into corresponding neutron database
entries.
4. Remove/adjust the *right* entries of the nova database.
5. Startup neutron+agents with database that it believes it was running
with the whole time.
6. Restart nova-api & nova-compute (it will now never know that it was
previously using nova-network).
7. Profit!

Depending on the use-case and how complex your impl. is something like the
above should work. Of course the devil is in the details ;)
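Step 3 is the interesting one; purely as an illustration of its shape (the
table and column names below are hypothetical placeholders, not the real
nova/neutron schemas), it amounts to something like:

# Illustrative only: table/column names are hypothetical placeholders.
from sqlalchemy import create_engine

nova_db = create_engine('mysql://user:pass@dbhost/nova')
neutron_db = create_engine('mysql://user:pass@dbhost/neutron')

for net in nova_db.execute('SELECT uuid, label, cidr FROM networks'):
    neutron_db.execute(
        'INSERT INTO networks (id, name) VALUES (%s, %s)',
        (net.uuid, net.label))
    # ... subnets, ports, fixed/floating IPs would need the same treatment,
    # with careful mapping of ownership and state.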

On 1/29/14, 11:50 AM, Tim Bell tim.b...@cern.ch wrote:


I'm not seeing a path to migrate 1,000s of production VMs from nova
network to Neutron.

Can someone describe how this can be done without downtime for the VMs ?

Can we build an approach for the cases below in a single OpenStack
production cloud:

1. Existing VMs to carry on running without downtime (and no new features)
2. Existing VMs to choose a window for reconfiguration for Neutron (to
get the new function)
3. New VMs to take advantage of Neutron features such as LBaaS
- 
Tim

 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: 29 January 2014 19:04
 To: Daniel P. Berrange
 Cc: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse
and beyond
 
 On 01/29/2014 12:45 PM, Daniel P. Berrange wrote:
  I was thinking of an upgrade path more akin to what users got when we
  removed the nova volume driver, in favour of cinder.
 
https://wiki.openstack.org/wiki/MigrateToCinder
 
  ie no guest visible downtime / interuption of service, nor running of
  multiple Nova instances in parallel.
 
 Yeah, I'd love to see something like that.  I would really like to see
more effort in this area.  I honestly haven't been thinking about it
 much in a while personally, because the rest of the make it work gaps
have still been a work in progress.
 
 There's a bit of a bigger set of questions here, too ...
 
 Should nova-network *ever* go away?  Or will there always just be a
choice between the basic/legacy nova-network option, and the
 new fancy SDN-enabling Neutron option?  Is the Neutron team's time
better spent on OpenDaylight integration than the existing
 open source plugins?
 
 Depending on the answers to those questions, the non-visible
no-downtime migration path may be a less important issue.
 
 --
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for tommorow meeting at 2000 UTC

2014-01-29 Thread Joshua Harlow
Hi all,


The [state-management] project team holds a weekly meeting in 
#openstack-meeting on Thursdays, 2000 UTC. The next meeting is tomorrow, 
2014-01-30!!!


As usual, everyone is welcome :-)


Link: https://wiki.openstack.org/wiki/Meetings/StateManagement

Taskflow: https://wiki.openstack.org/TaskFlow


## Agenda (30-60 mins):


- Discuss any action items from last meeting.

- 0.2 release timeline & features therein.

- Joining oslo continued discussion (part three).

- Any reviews that need reviewing/looking at/feedback.

- Discuss about any other potential new use-cases for said library.

- Discuss about any other ideas, questions and answers (and more!).


Any other topics are welcome :-)


See you all soon!


--


Joshua Harlow


It's openstack, relax... | harlo...@yahoo-inc.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cinder + taskflow

2014-02-03 Thread Joshua Harlow
Hi all,

After talking with john g. about taskflow in cinder and seeing more and
more reviews showing up I wanted to start a thread to gather all our
lessons learned and how we can improve a little before continuing to add
too many more refactorings and more reviews (making sure everyone
understands the larger goal and larger picture of switching pieces of
cinder - piece by piece - to taskflow).

Just to catch everyone up.

Taskflow started integrating with cinder in havana and there has been some
continued work around these changes:

- https://review.openstack.org/#/c/58724/
- https://review.openstack.org/#/c/66283/
- https://review.openstack.org/#/c/62671/

There have also been a few other pieces of work going in (forgive me if I
missed any...):

- https://review.openstack.org/#/c/64469/
- https://review.openstack.org/#/c/69329/
- https://review.openstack.org/#/c/64026/

I think now would be a good time (and seems like a good idea) to create
the discussion to learn how people are using taskflow, common patterns
people like, don't like, common refactoring idioms that are occurring and
most importantly to make sure that we refactor with a purpose and not just
refactor for refactoring's sake (which can be harmful if not done
correctly). So to get a kind of forward and unified momentum behind
further adjustments I'd just like to make sure we are all aligned and
understood on the benefits and yes even the drawbacks that these
refactorings bring.

So here is my little list of benefits:

- Objects that do just one thing (a common pattern I am seeing is
determining what the one thing is, without making it so granular that it's
hard to read).
- Combining these objects together in a well-defined way (once again it
has to be carefully done to not create too much granularity).
- Ability to test these tasks and flows via mocking (something that is
harder when it's not split up like this; see the sketch after the drawbacks
list below).
- Features that aren't currently used such as state-persistence (but will
help cinder become more crash-resistant in the future).
  - This one will itself need to be understood before doing [I started
etherpad @ https://etherpad.openstack.org/p/cinder-taskflow-persistence
for this].

List of drawbacks (or potential drawbacks):

- Having an understanding of what taskflow is doing adds a new layer of
things to know (hopefully docs help in this area; that was their goal).
- Selecting too granular a task or flow makes it harder to
follow/understand the task/flow logic.
- Focuses on the long-term (not necessarily short-term) state-management
concerns (can't refactor Rome in a day).
- Taskflow is being developed at the same time cinder is.
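To make the testability point above concrete, a minimal sketch (the names are
hypothetical, not actual cinder code) of a single-purpose task and a unit test
that mocks out its collaborator:

import mock  # unittest.mock on Python 3

from taskflow import task


class CreateVolumeEntryTask(task.Task):
    """Hypothetical single-purpose task: create the DB record, nothing else."""

    def __init__(self, db_api):
        super(CreateVolumeEntryTask, self).__init__()
        self.db_api = db_api

    def execute(self, size):
        return self.db_api.volume_create({'size': size})


def test_create_volume_entry():
    db_api = mock.Mock()
    db_api.volume_create.return_value = {'id': 'vol-1', 'size': 10}

    result = CreateVolumeEntryTask(db_api).execute(size=10)

    assert result['id'] == 'vol-1'
    db_api.volume_create.assert_called_once_with({'size': 10})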

I'd be very interested in hearing about others' experiences and to make
sure that we discuss the changes (in a well documented and agreed-on
approach) before jumping too much into the 'deep end' with a large amount
of refactoring (aka, refactoring with a purpose). Let's make this thread
as useful as we can and try to see how we can unify all these refactorings
behind a common (and documented & agreed-on) purpose.

A thought, for the reviews above, I think it would be very useful to
etherpad/writeup more in the blueprint what the 'refactoring with a
purpose' is so that it's more known to future readers (and for active
reviewers); hopefully this email can start to help clarify that purpose so
that things proceed as smoothly as possible.

-Josh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder + taskflow

2014-02-03 Thread Joshua Harlow
Thanks john for the input.

Hopefully we can help focus some of the refactoring on solving the
state-management problem very soon.

For the mocking case, is there any active work being done here?

As for the state-management and persistence, I think that the goal of both
of these will be reached and it is a good idea to focus around these
problems, and I am all in on figuring out those solutions, although my
guess is that both of these will be long-term no matter what. Refactoring
cinder from what it is to what it could/can be will take time (and should
take time, to be careful and meticulous) and hopefully we can ensure that
focus is retained. Since in the end it benefits everyone :)

Let's reform around that state-management issue (which involved a
state-machine concept?). To me the current work/refactoring helps
establish task objects that can be plugged into this machine (which is
part of the problem; without task objects it's hard to create a
state-machine concept around code that is dispersed). To me that's where
the current refactoring work helps (in identifying those tasks and
adjusting code to be closer to smaller units that do a single task); later,
when a state-machine concept (or something similar) comes along, it will
be using these tasks (or variations of them) to automate transitions based on
given events (the flow concept that exists in taskflow is similar to this
already).

The questions I had (or can currently think of) with the state-machine
idea (versus just defined flows of tasks) are:

1. What are the events that trigger a state-machine to transition?
  - Typically some type of event causes a machine to transition to a new
state (after performing some kind of action). Who initiates that
transition.
2. What are the events that will cause this triggering? They are likely
related directly to API requests (but may not be).
3. If a state-machine ends up being created, how does it interact with
other state-machines that are also running at the same time (does it?)
  - This is a bigger question, and involves how one state-machine could be
modifying a resource, while another one could be too (this is where u want
only one state-machine to be modifying a resource at a time). This would
solve some of the races that are currently existent (while introducing the
complexity of distributed locking).
  - It is my opinion that the same problem in #3 happens when using tasks
and flows that also affect simultaneous resources; so it's not a unique
problem that is directly connected to flows. Part of this I am hoping the
tooz project[1] can help with, since last time I checked they want to help
make a nice API around distributed locking backends (among other similar
APIs).

[1] https://github.com/stackforge/tooz#tooz
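For those who haven't looked at tooz, a small sketch of the kind of
per-resource lock it aims to provide (the backend URL, member name and lock
name below are placeholders, and tooz was still young at this point, so treat
this as illustrative):

from tooz import coordination

# placeholder backend and identifiers
coordinator = coordination.get_coordinator('memcached://127.0.0.1:11211',
                                            'cinder-volume-host-1')
coordinator.start()

lock = coordinator.get_lock('volume-6f2c')
if lock.acquire():
    try:
        # only one flow/state-machine mutates this resource at a time
        pass
    finally:
        lock.release()

coordinator.stop()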

-Original Message-
From: John Griffith john.griff...@solidfire.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Monday, February 3, 2014 at 1:16 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Cinder + taskflow

On Mon, Feb 3, 2014 at 1:53 PM, Joshua Harlow harlo...@yahoo-inc.com
wrote:
 Hi all,

 After talking with john g. about taskflow in cinder and seeing more and
 more reviews showing up I wanted to start a thread to gather all our
 lessons learned and how we can improve a little before continuing to add
 too many more refactoring and more reviews (making sure everyone is
 understands the larger goal and larger picture of switching pieces of
 cinder - piece by piece - to taskflow).

 Just to catch everyone up.

 Taskflow started integrating with cinder in havana and there has been
some
 continued work around these changes:

 - https://review.openstack.org/#/c/58724/
 - https://review.openstack.org/#/c/66283/
 - https://review.openstack.org/#/c/62671/

 There has also been a few other pieces of work going in (forgive me if I
 missed any...):

 - https://review.openstack.org/#/c/64469/
 - https://review.openstack.org/#/c/69329/
 - https://review.openstack.org/#/c/64026/

 I think now would be a good time (and seems like a good idea) to create
 the discussion to learn how people are using taskflow, common patterns
 people like, don't like, common refactoring idioms that are occurring
and
 most importantly to make sure that we refactor with a purpose and not
just
 refactor for refactoring sake (which can be harmful if not done
 correctly). So to get a kind of forward and unified momentum behind
 further adjustments I'd just like to make sure we are all aligned and
 understood on the benefits and yes even the drawbacks that these
 refactorings bring.

 So here is my little list of benefits:

 - Objects that do just one thing (a common pattern I am seeing is
 determining what the one thing is, without making it so granular that
it's
 hard to read).
 - Combining these objects together in a well-defined way (once again it
 has to be carefully

Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-04 Thread Joshua Harlow
A follow up kind of question,

Why not just create an eventlet micro version that underneath uses asyncio? 
Then code isn't riddled with yield from or Return() or similar. It seems 
possible (maybe?) to replace the eventlet hub with one that uses asyncio? This 
is then a nice change for all those who are using eventlet.
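For a side-by-side of the two styles, here are two tiny, separate snippets
(the first is eventlet, Python 2 style; the second needs Python 3.4's asyncio;
under Trollius the `yield from`/`return` become `yield`/`raise Return(...)` as
Victor describes):

# eventlet: fetch() can silently switch to another greenthread wherever a
# library call yields; the caller can't tell from the code.
import eventlet

def fetch(name):
    eventlet.sleep(0.1)   # implicit switch point
    return name

print(eventlet.spawn(fetch, 'a').wait())

# asyncio (Python 3.4): every possible suspension point is spelled out.
import asyncio

@asyncio.coroutine
def fetch_async(name):
    yield from asyncio.sleep(0.1)   # explicit switch point
    return name

loop = asyncio.get_event_loop()
print(loop.run_until_complete(
    asyncio.gather(fetch_async('a'), fetch_async('b'))))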

Btw great work with Trollius!

Sent from my really tiny device...

 On Feb 4, 2014, at 5:45 AM, victor stinner victor.stin...@enovance.com 
 wrote:
 
 Hi,
 
 I would like to replace eventlet with asyncio in OpenStack for the 
 asynchronous programming. The new asyncio module has a better design and is 
 less magical. It is now part of python 3.4 arguably becoming the de-facto 
 standard for asynchronous programming in Python world.
 
 
 The asyncio module is a new module of Python 3.4 written by Guido van Rossum 
 as an abstraction of existing event loop (Twisted, Tornado, eventlet, etc.). 
 In fact, it's more than an abstraction: it has its own event loop and can be 
 used alone. For the background, read the PEP:
 
   http://www.python.org/dev/peps/pep-3156/
   Asynchronous IO Support Rebooted: the asyncio Module
 
 For more information on asyncio, see its documentation:
 
   http://docs.python.org/dev/library/asyncio.html
 
 The asyncio module is also known as Tulip which is third-party project 
 written for Python 3.3. Tulip was written first and then it was integrated in 
 Python 3.4.
 
   http://code.google.com/p/tulip/
 
 The main difference between eventlet and asyncio is that context switching 
 between two concurrent tasks is explicit. When a task blocks  (ex: wait for 
 an event), it should use the yield from syntax which will switch to the 
 next task: it's similar to greenlet.switch in eventlet. So it becomes 
 obvious which parts of the code do switch and which don't. With eventlet, you 
 have to read the source code of a function before calling it to check if it 
 may call greenlet.switch() or not, and so debugging is more difficult. Or 
 worse, the function may switch in a new version of a module, you won't notice 
 the change.
 
 The asyncio module handles various kind of events: sockets, pipes, 
 subprocesses, UNIX signals, etc. All these things are handled in a single 
 event loop. You can use an executor to run a blocking task in a pool of 
 threads. There is a low-level API using transports and protocols, similar to 
 Twisted transports and protocols. But there is also a high-level API using 
 streams, which gives a syntax close to eventlet (except that you have to add 
 yield from). See an example to send an HTTP request and print received HTTP 
 headers, the yield from reader.readline() instruction blocks until it 
 gets a full line:
 
   http://docs.python.org/dev/library/asyncio-stream.html#example
 
 
 The problem is that the asyncio module was written for Python 3.3, whereas 
 OpenStack is not fully Python 3 compatible (yet). To ease the transition I 
 have ported asyncio to Python 2; it's the new Trollius project which supports 
 Python 2.6-3.4:
 
   https://bitbucket.org/enovance/trollius
 
 The Trollius API is the same than asyncio, the main difference is the syntax 
 in coroutines: yield from task must be written yield task, and return 
 value must be written raise Return(value).
 
 
 
 The first step to move from eventlet to asyncio is a new executor using 
 Trollius in Olso Messaging:
 
  https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio
  https://review.openstack.org/#/c/70948
 
 
 The transition from eventlet to asyncio can be done step by step. Thanks to 
 the greenio project, asyncio can reuse the greenlet event loop (and so run in 
 the main thread). So asyncio and eventlet become compatible. While asyncio 
 can also run its own event loop in a separated thread.
 
 If eventlet is completely replaced with asyncio in a project, greenio can be 
 dropped, and asyncio event loop can be run its own event loop in the main 
 thread.
 
 When OpenStack is compatible with Python 3.3, it will be possible to use 
 the builtin asyncio module of Python 3.4 directly instead of Trollius. Since 
 yield from is incompatible with Python 2, some parts of the code may need 
 to have two versions (one for Python 2, one for Python 3) if we want to use 
 the Python 3 flavor of asyncio... or Python 2 support might be simply 
 dropped.
 
 Victor
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-04 Thread Joshua Harlow
I was just thinking that openstack would do better to slowly get off
eventlet (it avoids the large amount of changes this new mostly
untested/unproven tulip/trollius framework introduces). A first step could
be to replace the greenlet layer with asyncio (or similar). Then the
eventlet layer still exists (and all its funky monkey patching). Then as
asyncio matures and is proven as the way to go (it doesn't quite seem to
be this yet, even with Guido's sponsorship) we slowly cut out eventlet
and directly replace it with asyncio (the bigger change imho).

Just an idea...

I would sort of expect this to happen in the community anyway; seeing that
a lot of people are used to eventlet (warts and all) so the easiest path
for them to asyncio is to just have a new layer underneath eventlet
that works with asyncio. Isn't this what the twisted folks are doing (to
retain the twisted API while using asyncio underneath)?

-Original Message-
From: victor stinner victor.stin...@enovance.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Tuesday, February 4, 2014 at 10:07 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Asynchrounous programming: replace eventlet
with asyncio

Hi Joshua,

 Why not just create an eventlet micro version that underneath uses
asyncio?
 Then code isn't riddled with yield from or Return() or similar. It seems
 possible (maybe?) to replace the eventlet hub with one that uses
asyncio?
 This is then a nice change for all those who are using eventlet.

I don't understand what do you expect from replacing the eventlet event
loop with asyncio event loop. It's basically the same.

The purpose of replacing eventlet with asyncio is to get a well defined
control flow, no more surprising task switching at random points.

You may read the report of Guido van Rossum's keynote at Pycon 2013, he
explains probably better than me how asyncio is different from other
asynchronous frameworks:
http://lwn.net/Articles/544522/

For more information, see my notes about asyncio:
http://haypo-notes.readthedocs.org/asyncio.html#talks-about-asyncio

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for meeting (tommorow) at 2000 UTC

2014-02-05 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in
#openstack-meeting on Thursdays, 2000 UTC. The next meeting is tomorrow,
2014-02-06!!! 

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow

## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Taskflow 0.1.3 release (why, what, when).
- Taskflow 0.2.0 release (why, what, when).
- Documenting and planning and improving cinder integration processes.
- Discuss about any other potential new use-cases for said library.
- Discuss about any other ideas, reviews needing help, questions and
answers (and more!).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-05 Thread Joshua Harlow
Are there any MySQL DB drivers? (I think the majority of openstack deployments
use MySQL.)

How about sqlalchemy (what would possibly need to change there for it to
work)? The pain that I see is that to connect all these libraries into
asyncio they have to invert how they work (sqlalchemy would have to become
asyncio compatible (?), which probably means a big rewrite). This is where
it would be great to have an 'eventlet'-like thing built on top of asyncio
(letting existing libraries work without rewrites). Eventually, I guess,
in time (if tulip succeeds) this 'eventlet'-like thing could be
removed.
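For completeness, the stock answer asyncio itself offers for blocking
libraries (short of the 'eventlet'-like layer discussed above) is to push the
blocking call onto a thread pool; a minimal sketch, with a stand-in function
where a SQLAlchemy query would go:

import asyncio
import time


def blocking_query():
    # stand-in for e.g. session.query(...).all() or any other blocking
    # library call that can't be rewritten for asyncio
    time.sleep(0.5)
    return ['row-1', 'row-2']


@asyncio.coroutine
def handler(loop):
    # run the blocking call in the default thread pool so the event loop
    # stays responsive while it runs
    rows = yield from loop.run_in_executor(None, blocking_query)
    return rows


loop = asyncio.get_event_loop()
print(loop.run_until_complete(handler(loop)))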

Has there been commitment from library developers to start adjusting their
libraries to follow this new model (openstack has 100+ dependencies,
so each one would seem to have to change, especially if it has any sort of
I/O capabilities)?

-Original Message-
From: victor stinner victor.stin...@enovance.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, February 5, 2014 at 3:00 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Asynchrounous programming:
replace eventlet with asyncio

Hi,

Chris Behrens wrote:
 Interesting thread. I have been working on a side project that is a
 gevent/eventlet replacement [1] that focuses on thread-safety and
 performance. This came about because of an outstanding bug we have with
 eventlet not being Thread safe. (We cannot safely enable thread pooling
for
 DB calls so that they will not block.)

There are DB drivers compatible with asyncio: PostgreSQL, MongoDB, Redis
and memcached.

There is also a driver for ZeroMQ which can be used in Oslo Messaging to
have a more efficient (asynchronous) driver.

There also many event loops for: gevent (geventreactor, gevent3),
greenlet, libuv, GLib and Tornado.

See the full list:
http://code.google.com/p/tulip/wiki/ThirdParty

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread Joshua Harlow
Has there been any investigation into heat?

Heat has already used parts of the coroutine approach (for better or
worse).

An example: 
https://github.com/openstack/heat/blob/master/heat/engine/scheduler.py#L230


Decorator for a task that needs to drive a subtask.

This is essentially a replacement for the Python 3-only yield from
keyword (PEP 380), using the yield keyword that is supported in
Python 2. For example::



I bet trollius would somewhat easily replace a big piece of that code.
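In case the excerpt above is unclear, the docstring is describing PEP 380
delegation; a simplified illustration of what `yield from` gives you on
Python 3 and what Python 2 code (and, roughly, heat's decorator) must do by
hand -- the real decorator also forwards send()/throw(), which is omitted
here:

def subtask():
    yield 'step-1'
    yield 'step-2'


# Python 3 (PEP 380): delegation to the subtask is a single keyword.
def parent_py3():
    yield from subtask()


# Python 2: the parent pumps the subtask manually.
def parent_py2():
    for step in subtask():
        yield step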

-Josh

-Original Message-
From: victor stinner victor.stin...@enovance.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, February 6, 2014 at 1:55 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Asynchrounous programming: replace eventlet
with asyncio

Sean Dague wrote:
 First, very cool!

Thanks.

 This is very promising work. It might be really interesting to figure
 out if there was a smaller project inside of OpenStack that could be
 test ported over to this (even as a stackforge project), and something
 we could run in the gate.

Oslo Messaging is a small project, but it's more a library. For a full
daemon, my colleague Mehdi Abaakouk has a proof-of-concept for Ceilometer
replacing eventlet with asyncio. Mehdi told me that he doesn't like to
debug eventlet race conditions :-)

 Our experience is the OpenStack CI system catches bugs in libraries and
 underlying components that no one else catches, and definitely getting
 something running workloads hard on this might be helpful in maturing
 Trollius. Basically coevolve it with a piece of OpenStack to know that
 it can actually work on OpenStack and be a viable path forward.

Replacing eventlet with asyncio is a huge change. I don't want to force
users to use it right now, nor to do the change in one huge commit. The
change will be done step by step, and when possible, optional. For
example, in Olso Messaging, you can choose the executor: eventlet or
blocking (and I want to add asyncio).

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread Joshua Harlow
It's a good question; I see openstack as mostly the following 2 groups of 
applications.

Group 1:

API entrypoints using [apache/nginx]+wsgi (nova-api, glance-api…)

In this group we can just let the underlying framework/app deal with the 
scaling and just use native wsgi as it was intended. Scale more [apache/nginx] 
if u need more requests per second. For any kind of long term work these apps 
should be dropping all work to be done on a MQ and letting someone pick that 
work up to be finished in some future time.

Group 2:

Workers that pick things up off MQ. In this area we are allowed to be a little 
more different and change as we want, but it seems like the simple approach we 
have been doing is the daemon model (forking N child worker processes). We've 
also added eventlet in these children (so it becomes more like NxM where M is 
the number of greenthreads). For the usages where workers are used has it been 
beneficial to add those M greenthreads? If we just scaled out more N 
(processes) how bad would it be? (I don't have the answers here actually, but 
it does make you wonder why we couldn't just eliminate eventlet/asyncio 
altogether and just use more N processes).
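For the Group 1 shape, the "dead-simple threaded WSGI server" Yuriy mentions
below really is a few lines of stdlib; a sketch (the module is `SocketServer`
on Python 2, `socketserver` on Python 3, and real deployments would sit
behind Apache/mod_wsgi, gunicorn, uWSGI, etc.):

from socketserver import ThreadingMixIn   # 'SocketServer' on Python 2
from wsgiref.simple_server import make_server, WSGIServer


class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    """Handle each request in its own OS thread."""


def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'pong\n']


if __name__ == '__main__':
    server = make_server('0.0.0.0', 8080, app,
                         server_class=ThreadingWSGIServer)
    server.serve_forever()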

-Josh

From: Yuriy Taraday yorik@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, February 6, 2014 at 10:06 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Asynchrounous programming: replace eventlet with 
asyncio

Hello.


On Tue, Feb 4, 2014 at 5:38 PM, victor stinner 
victor.stin...@enovance.com wrote:
I would like to replace eventlet with asyncio in OpenStack for the asynchronous 
programming. The new asyncio module has a better design and is less magical. 
It is now part of python 3.4 arguably becoming the de-facto standard for 
asynchronous programming in Python world.

I think that before doing this big move to yet another asynchronous framework 
we should ask the main question: Do we need it? Why do we actually need async 
framework inside our code?
There most likely is some historical reason why (almost) every OpenStack 
project runs every one of its processes with the eventlet hub, but I think we should 
reconsider this now when it's clear that we can't go forward with eventlet 
(because of py3k mostly) and we're going to put considerable amount of 
resources into switching to another async framework.

Let's take Nova for example.

There are two kinds of processes there: nova-api and others.

- nova-api process forks to a number of workers listening on one socket and 
running a single greenthread for each incoming request;
- other services (workers) constantly poll some queue and spawn a greenthread 
for each incoming request.

Both kinds do basically the same job: receive a request, run a handler in a 
greenthread. Sounds very much like a job for some application server that does 
just that and does it good.
If we remove all dependencies from eventlet or any other async framework, we 
would not only be able to write Python code without need to keep in mind that 
we're running in some reactor (that's why eventlet was chosen over Twisted 
IIRC), but we can also forget about all these frameworks altogether.

I suggest approach like this:
- for API services use dead-simple threaded WSGI server (we have one in the 
stdlib by the way - in wsgiref);
- for workers use simple threading-based oslo.messaging loop (it's on its way).

Of course, it won't be production-ready. Dumb threaded approach won't scale but 
we don't have to write our own scaling here. There are other tools around to do 
this: Apache httpd, Gunicorn, uWSGI, etc. And they will work better in 
production environment than any code we write because they are proven with time 
and on huge scales.

So once we want to go to production, we can deploy things this way for example:
- API services can be deployed within Apache server or any other HTTP server 
with WSGI backend (Keystone already can be deployed within Apache);
- workers can be deployed in any non-HTTP application server, uWSGI is a great 
example of one that can work in this mode.

With this approach we can leave the burden of process management, load 
balancing, etc. to the services that are really good at it.

What do you think about this?

--

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] bp: glance-snapshot-tasks

2014-02-06 Thread Joshua Harlow
Hi alex,

I think u are referring to the following: 
https://blueprints.launchpad.net/nova/+spec/glance-snapshot-tasks

Can u describe the #2 part in more detail. Do some of the drivers already 
implement these new steps?

The goal I think u are having is to make the snapshot functionality resume 
better and clean up better, right? In the end this will even allow for resuming 
if nova-compute (the process that does the snapshot) crashes/is restarted… Just 
wanted to make sure people understand the larger goal here (without having to 
read the whole blueprint, which might be wordy, haha).
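For illustration only (the task names are invented, not actual nova/glance
code), the a-d steps Alexander lists below map naturally onto a small
TaskFlow flow, which is what makes resume and cleanup tractable:

from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow


class RegisterImage(task.Task):
    default_provides = 'image_id'

    def execute(self):
        return 'image-uuid'          # would create the image record in Glance

    def revert(self, *args, **kwargs):
        pass                          # delete the half-created image on failure


class SnapshotInstance(task.Task):
    default_provides = 'snapshot_path'

    def execute(self, image_id):
        return '/tmp/snap.qcow2'      # would ask the virt driver for a snapshot


class UploadImage(task.Task):
    def execute(self, image_id, snapshot_path):
        pass                          # would stream the snapshot file to Glance


flow = linear_flow.Flow('snapshot-instance').add(
    RegisterImage(), SnapshotInstance(), UploadImage())
engines.run(flow)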

-Josh

From: Alexander Gorodnev gorod...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, February 6, 2014 at 1:57 AM
To: 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova] bp: glance-snapshot-tasks

Hi,

A blueprint was created and Joshua even wrote quite a large text. Right now this 
BP is in the Drafting stage, so I want to bring this BP to life and continue working 
on the topic. I even tried to make some changes without approval (just 
as an experiment) and got negative feedback.
These steps I did when tried to implement this BP:

1) Moved snapshot functionality from Compute to Conductor (as I understood it's 
the best place for such things, need clarification);
Even this step should be done in two steps:
a) Add snapshot_instance() method to Conductor that just calls the same method 
from Compute;
b) After that move all error-handling / state transition / etc  logic from 
Compute to Conductor. Compute exposes API for drivers (see step 2);

2) The hardest part is a common, convenient, complete API for drivers. Most 
drivers do almost the same things in the snapshot method:
a) Goes to Glance and registers a new image there;
b) Makes the snapshot;
c) Uploads the image to Glance;
d) Cleans up temporary files;

I would really appreciate any thoughts and questions.

Thanks,
Alexander
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread Joshua Harlow
+1 lots of respect for zane in doing this :)

I'm still very much interested in seeing how we can connect taskflow in to
your model.

I think the features that you guys were wanting (remote workers) are
showing up and hopefully will be all they can be!

It helps (imho) that taskflow doesn't connect itself to one model
(threaded, asyncio, yielding) since its model is tasks, dependencies, and
the order in which those are defined which is separate from how it runs
(via engines). Engines are allowed to and encouraged to use threads
underneath, asyncio or greenthreads, or remote workers


-Original Message-
From: Clint Byrum cl...@fewbar.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, February 6, 2014 at 12:46 PM
To: openstack-dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Asynchrounous programming: replace
eventlet with asyncio

All due respect to Zane who created the scheduler. We simply could not
do what we do without it (and I think one of the first things I asked
for was parallel create ;).

IMO it is the single most confusing thing in Heat whenever one has to
deal with it. If we could stick to a threading model instead, I would
much prefer that.

Excerpts from Joshua Harlow's message of 2014-02-06 10:22:24 -0800:
 Has there been any investigation into heat.
 
 Heat has already used parts of the coroutine approach (for better or
 worse).
 
 An example: 
 
 https://github.com/openstack/heat/blob/master/heat/engine/scheduler.py#L230
 
 
 Decorator for a task that needs to drive a subtask.
 
 This is essentially a replacement for the Python 3-only yield from
 keyword (PEP 380), using the yield keyword that is supported in
 Python 2. For example::
 
 
 
 I bet trollius would somewhat easily replace a big piece of that code.
 
 -Josh
 
 -Original Message-
 From: victor stinner victor.stin...@enovance.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Thursday, February 6, 2014 at 1:55 AM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Asynchrounous programming: replace eventlet
 with asyncio
 
 Sean Dague wrote:
  First, very cool!
 
 Thanks.
 
  This is very promising work. It might be really interesting to figure
  out if there was a smaller project inside of OpenStack that could be
  test ported over to this (even as a stackforge project), and
something
  we could run in the gate.
 
 Oslo Messaging is a small project, but it's more a library. For a full
  daemon, my colleague Mehdi Abaakouk has a proof-of-concept for
Ceilometer
 replacing eventlet with asyncio. Mehdi told me that he doesn't like to
 debug eventlet race conditions :-)
 
  Our experience is the OpenStack CI system catches bugs in libraries
and
  underlying components that no one else catches, and definitely
getting
  something running workloads hard on this might be helpful in maturing
  Trollius. Basically coevolve it with a piece of OpenStack to know
that
  it can actually work on OpenStack and be a viable path forward.
 
 Replacing eventlet with asyncio is a huge change. I don't want to force
 users to use it right now, nor to do the change in one huge commit. The
 change will be done step by step, and when possible, optional. For
 example, in Olso Messaging, you can choose the executor: eventlet or
 blocking (and I want to add asyncio).
 
 Victor
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] several oslo library repositories have moved

2014-02-07 Thread Joshua Harlow
+1 thanks guys!

From: Doug Hellmann 
doug.hellm...@dreamhost.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, February 7, 2014 at 1:02 PM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] several oslo library repositories have moved

The Oslo program has adopted cliff, pycadf, stevedore, and taskflow, promoting 
them from stackforge, so we can establish symmetric gating with the rest of 
OpenStack. In order to do that, the git repositories were moved from 
stackforge/* to openstack/* 
(git://git.openstack.org/stackforge/taskflow becomes 
git://git.openstack.org/openstack/taskflow, etc.). This is only the first step, 
so the testing is not in place, yet.

We have also renamed oslo.sphinx to oslosphinx in order to move the 
documentation theme out of the namespace package oslo used by production 
code. For a history of that change, please refer to the mailing list thread 
linked in the bug https://bugs.launchpad.net/oslo/+bug/1277168. We have already 
released a library under the new name, and will be working with projects to 
update their documentation builds before removing the old library from PyPI.

If you have a working copy of one of these libraries, please update your git 
remotes for origin and gerrit (git remote set-url) or check out fresh copies.

Thanks to jeblair, fungi, and the rest of the infra team for orchestrating a 
smooth transition today!

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Can we migrate to oslo.messaging?

2014-02-11 Thread Joshua Harlow
Is murano python3.x compatible? From what I understand, oslo.messaging isn't 
(yet). If murano is supporting python3.x then bringing in oslo.messaging might 
make it hard for murano to stay 3.x compatible. Maybe not a problem (I'm not sure 
of murano's python version support).

From: Serg Melikyan smelik...@mirantis.commailto:smelik...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, February 11, 2014 at 5:05 AM
To: OpenStack Development Mailing List 
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
Subject: [openstack-dev] [Murano] Can we migrate to oslo.messaging?

oslo.messaging (http://github.com/openstack/oslo.messaging) is a library that 
provides RPC and Notifications APIs; they are part of the same library for 
mostly historical reasons. One of the major goals of oslo.messaging is to 
provide a clean RPC and Notification API without any trace of messaging queue 
concepts (but two of the most advanced drivers used by oslo.messaging are 
actually based on AMQP: RabbitMQ and QPID).

We were designing Murano on messaging queue concepts using some AMQP/RabbitMQ 
specific features, like queue TTL. Since we never considered communications 
between our components in terms of RPC or Notifications and always thought 
about them as message exchange through a broker, this has influenced our 
components' architecture. In Murano we use a simple wrapper 
(https://github.com/stackforge/murano-common/tree/master/muranocommon/messaging) 
around Puka (https://github.com/majek/puka), a RabbitMQ client with a most simple 
and thoughtful async model, that is used in all our components. We forked Puka 
(https://github.com/istalker2/puka) since we had specific requirements for SSL 
and could not yet merge our work (https://github.com/majek/puka/pull/43) back 
to master.

Can we abandon our own wrapper 
(https://github.com/stackforge/murano-common/tree/master/muranocommon/messaging) 
around our own fork of Puka (https://github.com/istalker2/puka) in favor of 
oslo.messaging? Yes, but this migration may be tricky. I believe we can migrate 
to oslo.messaging in a week or so.

I have played with oslo.messaging, emulating our current communication patterns, 
and I am certain that the current implementation can be migrated to 
oslo.messaging. But I am not sure that oslo.messaging can be easily suited to all 
future use-cases that we plan to cover in the next few releases without major 
contributions. Please respond with any questions related to the oslo.messaging 
implementation and how it can be fitted to a certain use-case.

Below, I tried to describe our current use-cases and what specific MQ features 
we are using, how they may be implemented with oslo.messaging and with what 
limitations we will face.

Use-Case
Murano has several components with communications between them based on 
messaging queue:
murano-api -> murano-conductor:

  1.  murano-api sends deployment tasks to murano-conductor

murano-conductor -> murano-api:

  1.  murano-conductor reports to murano-api task progress during processing
  2.  after processing, murano-conductor sends results to murano-api

murano-conductor -> murano-agent:

  1.  during task processing murano-conductor sends execution plans with 
commands to murano-agent.

Note: each of mentioned components above may have more than one instance.

One of the great messaging-queue-specific features that we heavily use is the 
idea of the queue itself: messages sent to a component will be handled as soon 
as at least one instance of it is started. For example, in the case of 
murano-agent, the message is sent even before murano-agent is started. Another 
one is queue life-time: we control the life-time of murano-agent queues to keep 
the MQ server from overflowing with queues that are not used anymore.

One thing is also worth mentioning: murano-conductor communicates with several 
components at the same time: it processes several tasks concurrently, and during 
task processing murano-conductor sends progress notifications to murano-api and 
execution plans to murano-agent.

Implementation
Please refer to the Concepts section 
(https://wiki.openstack.org/wiki/Oslo/Messaging#Concepts) of the oslo.messaging 
wiki before further reading to grasp the key concepts expressed in the 
oslo.messaging library. In short, using the RPC API we can 'call' a server 
synchronously and receive some result, or 'cast' asynchronously (no result is 
returned). Using the Notification API we can send a Notification to the specified 
Target about an event that happened, with a specified event_type, importance and 
payload.
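For readers less familiar with that API, a minimal sketch of those primitives 
(the topic and method names below are made up, and this assumes the documented 
RPCClient/Notifier interfaces; the fake transport just keeps it self-contained):

    from oslo.config import cfg
    from oslo import messaging

    transport = messaging.get_transport(cfg.CONF, url='fake://')
    target = messaging.Target(topic='murano-conductor')

    client = messaging.RPCClient(transport, target)
    # 'call' blocks until a server returns a result:
    #   result = client.call({}, 'deploy', task_id='42')
    # 'cast' returns immediately, no result comes back:
    client.cast({}, 'report_progress', task_id='42', progress=10)

    notifier = messaging.Notifier(transport, publisher_id='murano.api')
    notifier.info({}, 'murano.task.progress', {'task_id': '42', 'progress': 10})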

If we move to oslo.messaging we can primarily rely only on the features provided 
by the RPC/Notifications model:

  1.  We should not rely on messages being delivered when the other side is not 
properly up and running. It is not message delivery, it is a Remote Procedure Call;
  2.  To control queue life-time as we do now, we may be required to 'hack' 
oslo.messaging by writing own 

Re: [openstack-dev] [All]Optional dependencies and requirements.txt

2014-02-12 Thread Joshua Harlow
In taskflow we've done something like the following.

Its not perfect, but it is what currently works (willing to take suggestions on 
better).

We have three different requirements files.

1. Required to work in any manner @ 
https://github.com/openstack/taskflow/blob/master/requirements.txt
2. Optionally required to work (depending on features used) @ 
https://github.com/openstack/taskflow/blob/master/optional-requirements.txt
3. Test requirements (only for testing) @ 
https://github.com/openstack/taskflow/blob/master/test-requirements.txt

Most of the reason for #2 there is for plugins that taskflow can use (but 
which are not required for all users). Ideally there would be a more dynamic way 
to list the requirements of libraries and projects, one that actually changes 
depending on the features used/desired. In a way you could imagine a 
function taking in a list of desired features (qpid vs rabbit could be one such 
feature) and that function would return a list of requirements to work with 
this feature set. Splitting it up into these separate files is doing something 
similar (except using static files instead of just a function to determine 
this). I'm not sure if anything exists (or is planned) for making this 
situation better in python (it would be nice if there was a way).
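Something like the following, as a purely hypothetical sketch (no such helper 
exists today in taskflow or pip; the feature names and packages are only 
examples):

    # hypothetical: map desired features to the extra requirements they need
    FEATURE_REQUIREMENTS = {
        'core': ['six'],
        'rabbit': ['kombu'],
        'qpid': ['qpid-python'],
        'zookeeper': ['kazoo'],
    }

    def requirements_for(features):
        # return the pip requirement names for a desired feature set
        needed = set(FEATURE_REQUIREMENTS['core'])
        for feature in features:
            needed.update(FEATURE_REQUIREMENTS.get(feature, []))
        return sorted(needed)

    print(requirements_for(['rabbit', 'zookeeper']))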

-Josh

From: Doug Hellmann 
doug.hellm...@dreamhost.commailto:doug.hellm...@dreamhost.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Wednesday, February 12, 2014 at 1:42 PM
To: Ben Nemec openst...@nemebean.commailto:openst...@nemebean.com, 
OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [All]Optional dependencies and requirements.txt




On Wed, Feb 12, 2014 at 3:58 PM, Ben Nemec 
openst...@nemebean.commailto:openst...@nemebean.com wrote:
Hi all,

This is an issue that has come up recently in tripleo as we try to support more 
varied configurations.  Currently qpid-python is not listed in requirements.txt 
for many of the OpenStack projects, even though they support using Qpid as a 
messaging broker.  This means that when we install from source in tripleo we 
have to dynamically add a line to requirements.txt if we want to use Qpid (we 
pip install -r to handle deps).  There seems to be disagreement over the 
correct way to handle this, so Joe requested on my proposed Nova change that I 
raise the issue here.

There's already some discussion on the bug here: 
https://bugs.launchpad.net/heat/+bug/1225191 as well as a separate Neutron bug 
here: https://bugs.launchpad.net/neutron/+bug/1225232

If there's a better alternative to "require all the things", I'm certainly 
interested to hear it.  I expect we're going to hit this more in the future as 
we add support for other optional backends for services and such.

We could use a separate requirements file for each driver, following a naming 
convention to let installation tools pick up the right file. For example, 
oslo.messaging might include amqp-requirements.txt, qpid-requirements.txt, 
zmq-requirements.txt, etc.
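Hypothetical contents, just to make the convention concrete (the real names and 
version pins would come from the global requirements list):

    # qpid-requirements.txt
    qpid-python

    # zmq-requirements.txt
    pyzmq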

That would complicate the global requirements sync script, but not terribly so.

Thoughts?

Doug



Thanks.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All]Optional dependencies and requirements.txt

2014-02-12 Thread Joshua Harlow
Interesting, is that supported in pbr (since that is what's being used in
this situation, at least for requirements)?

-Original Message-
From: David Koo kpublicm...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, February 12, 2014 at 7:35 PM
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [All]Optional dependencies and
requirements.txt


 Most of the reason for the #2 there is for plugins that taskflow can
 use (but which are not required for all users). Ideally there would be
 more dynamic way to list requirements of libraries and projects, one
 that actually changes depends on the features used/desired to be used.

Disclaimer: Not a pip/setuptools developer - only a user.

From the pip & setuptools docs, the extras feature[1] of these tools
seems to do what you want:

SQLAlchemy>=0.7.99,<=0.9.99 [SQLAlchemyPersistence]
alembic>=0.4.1 [SQLAlchemyPersistence]
...

[1] http://www.pip-installer.org/en/1.1/requirements.html
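To make that concrete for taskflow, the setup.py side might look roughly like
this (a sketch only; the extra name simply mirrors the example above):

    from setuptools import setup

    setup(
        name='taskflow',
        # ... other arguments elided ...
        extras_require={
            'SQLAlchemyPersistence': [
                'SQLAlchemy>=0.7.99,<=0.9.99',
                'alembic>=0.4.1',
            ],
        },
    )

and consumers would then opt in with something like:

    pip install 'taskflow[SQLAlchemyPersistence]'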

--
Koo

On Thu, 13 Feb 2014 02:37:44 +
Joshua Harlow harlo...@yahoo-inc.com wrote:

 In taskflow we've done something like the following.
 
 Its not perfect, but it is what currently works (willing to take
 suggestions on better).
 
 We have three different requirements files.
 
 1. Required to work in any manner @
 https://github.com/openstack/taskflow/blob/master/requirements.txt 2.
 Optionally required to work (depending on features used) @
 
 https://github.com/openstack/taskflow/blob/master/optional-requirements.txt
 3. Test requirements (only for testing) @
 https://github.com/openstack/taskflow/blob/master/test-requirements.txt
 
 Most of the reason for the #2 there is for plugins that taskflow can
 use (but which are not required for all users). Ideally there would
 be more dynamic way to list requirements of libraries and projects,
 one that actually changes depends on the features used/desired to be
 used. In a way u could imagine a function taking in a list of desired
 features (qpid vs rabbit could be one such feature) and that function
 would return a list of requirements to work with this feature set.
 Splitting it up into these separate files is doing something similar
 (except using static files instead of just a function to determine
 this). I'm not sure if anything existing (or is planned) for making
 this situation better in python (it would be nice if there was a way).
 
 -Josh
 
 From: Doug Hellmann
 doug.hellm...@dreamhost.commailto:doug.hellm...@dreamhost.com
 Reply-To: OpenStack Development Mailing List (not for usage
 questions)
 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.o
rg
 Date: Wednesday, February 12, 2014 at 1:42 PM To: Ben Nemec
 openst...@nemebean.commailto:openst...@nemebean.com, OpenStack
 Development Mailing List (not for usage questions)
 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.o
rg
 Subject: Re: [openstack-dev] [All]Optional dependencies and
 requirements.txt
 
 
 
 
 On Wed, Feb 12, 2014 at 3:58 PM, Ben Nemec
 openst...@nemebean.commailto:openst...@nemebean.com wrote: Hi all,
 
 This is an issue that has come up recently in tripleo as we try to
 support more varied configurations.  Currently qpid-python is not
 listed in requirements.txt for many of the OpenStack projects, even
 though they support using Qpid as a messaging broker.  This means
 that when we install from source in tripleo we have to dynamically
 add a line to requirements.txt if we want to use Qpid (we pip install
 -r to handle deps).  There seems to be disagreement over the correct
 way to handle this, so Joe requested on my proposed Nova change that
 I raise the issue here.
 
 There's already some discussion on the bug here:
 https://bugs.launchpad.net/heat/+bug/1225191 as well as a separate
 Neutron bug here: https://bugs.launchpad.net/neutron/+bug/1225232
 
 If there's a better alternative to require all the things I'm
 certainly interested to hear it.  I expect we're going to hit this
 more in the future as we add support for other optional backends for
 services and such.
 
 We could use a separate requirements file for each driver, following
 a naming convention to let installation tools pick up the right file.
 For example, oslo.messaging might include amqp-requirements.txt,
 qpid-requirements.txt, zmq-requirements.txt, etc.
 
 That would complicate the global requirements sync script, but not
 terribly so.
 
 Thoughts?
 
 Doug
 
 
 
 Thanks.
 
 -Ben
 
 ___
 OpenStack-dev mailing list
 
OpenStack-dev@lists.openstack.orgmailto:openstack-...@lists.openstack.or
g
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] Gamification and on-boarding ...

2014-02-14 Thread Joshua Harlow
+1

Mentoring, devoted mentors, and not demotivating new folks (but instead 
growing and fostering them) are IMHO 10x more important than a badge program. 
Badges seem nice and all but I think it's not the biggest win for the buck.

Sent from my really tiny device...

 On Feb 13, 2014, at 6:06 AM, Sean Dague s...@dague.net wrote:
 
 On 02/13/2014 05:37 AM, Thierry Carrez wrote:
 Sandy Walsh wrote:
 The informal OpenStack motto is automate everything, so perhaps we should 
 consider some form of gamification [1] to help us? Can we offer badges, 
 quests and challenges to new users to lead them on the way to being strong 
 contributors?
 
 Fixed your first bug badge
 Updated the docs badge
 Got your blueprint approved badge
 Triaged a bug badge
 Reviewed a branch badge
 Contributed to 3 OpenStack projects badge
 Fixed a Cells bug badge
 Constructive in IRC badge
 Freed the gate badge
 Reverted branch from a core badge
 etc.
 
 I think that works if you only keep the ones you can automate.
 Constructive in IRC for example sounds a bit subjective to me, and you
 don't want to issue those badges one-by-one manually.
 
 Second thing, you don't want the game to start polluting your bug
 status, i.e. people randomly setting bugs to triaged to earn the
 Triaged a bug badge. So the badges we keep should be provably useful ;)
 
 A few other suggestions:
 Found a valid security issue (to encourage security reports)
 Fixed a bug submitted by someone else (to encourage attacking random bugs)
 Removed code (to encourage tech debt reduction)
 Backported a fix to a stable branch (to encourage backporting)
 Fixed a bug that was tagged nobody-wants-to-fix-this-one (to encourage
 people to attack critical / hard bugs)
 
 We might need protected tags to automate this: tags that only some
 people could set to bugs/tasks to designate gate-freeing or
 nobody-wants-to-fix-this-one bugs that will give you badges if you fix
 them.
 
 So overall it's a good idea, but it sounds a bit tricky to automate it
 properly to avoid bad side-effects.
 
 Gamification is a cool idea, if someone were to implement it, I'd be +1.
 
 Realistically, the biggest issue I see with on-boarding is mentoring
 time. Especially with folks completely new to our structure, there is a
 lot of confusing things going on. And OpenStack is a ton to absorb. I
 get pinged a lot on IRC, answer when I can, and sometimes just have to
 ignore things because there are only so many hours in the day.
 
 I think Anita has been doing a great job with the Neutron CI onboarding
 and new folks, and that's given me perspective on just how many
 dedicated mentors we'd need to bring new folks on. With 400 new people
 showing up each release, it's a lot of engagement time. It's also
 investment in our future, as some of these folks will become solid
 contributors and core reviewers.
 
 So it seems like the only way we'd make real progress here is to get a
 chunk of people to devote some dedicated time to mentoring in the next
 cycle. Gamification might be most useful, but honestly I expect a Start
 Here page with the consolidated list of low-hanging-fruit bugs, and a
 Review Here page with all reviews for low hanging fruit bugs (so they
 don't get lost by core review team) would be a great start.
 
 The delays on reviews for relatively trivial fixes I think is something
 that is probably more demotivating to new folks than the lack of badges.
 So some ability to keep on top of that I think would be really great.
 
-Sean
 
 -- 
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Need a new DSL for Murano

2014-02-14 Thread Joshua Harlow
An honest question,

You are mentioning what appears to be the basis for a full programming language 
(variables, calling other workflows - similar to functions) but then you mention 
this is being stuffed into yaml.

Why?

It appears like you might as well spend the effort and define a grammar and a 
simple language that is not stuffed inside yaml. Shoving one into yaml 
syntax seems like it gets you none of the benefits of syntax checking, parsing, 
and validation (highlighting...) and all the pain of yaml.

Something doesn't seem right about the approach of creating languages inside 
the yaml format (in a way it becomes like xsl, yet xsl at least has a spec and 
is well defined).

My 2 cents

Sent from my really tiny device...

On Feb 14, 2014, at 7:22 PM, Alexander Tivelkov 
ativel...@mirantis.commailto:ativel...@mirantis.com wrote:


Hi folks,


Murano matures, and we are getting more and more feedback from our early 
adopters. The overall reception is very positive, but at the same time there 
are some complaints as well. By now the most significant complaint is that it is 
hard to write workflows for application deployment and maintenance.

The current version of the workflow definition markup really has some design 
drawbacks which limit its potential adoption. They are caused by the fact that it 
was never intended for Application Catalog use-cases.


I'll briefly touch these drawbacks first:

  1.  Murano's workflow engine is actually a state machine, however the 
workflow markup does not explicitly define the states and transitions.
  2.  There is no data isolation within any environment, which causes both 
potential security vulnerabilities and unpredictable workflow behaviours.
  3.  There is no easy way to reuse the workflows and their related procedures 
between several applications.
  4.  The markup uses JSONPath, which relies on Python’s 'eval' function. This 
is insecure and has to be avoided.
  5.  The workflow markup is XML-based, which is not a common practice in 
the OpenStack community.

So, it turns out that we have to design and implement a new workflow definition 
notation, which will not have any of the issues mentioned above.

At the same time, it should still allow us to fully specify the configuration of 
any third-party Application and its dependencies on other Applications, and to 
define the specific actions which are required for Application deployment, 
configuration and life cycle management.

This new notation should allow us to do the following:


  *   List all the required configuration parameters and dependencies for a 
given application

  *   Validate user input and match it to the defined parameters

  *   Define specific deployment actions and their execution order

  *   Define behaviors to handle the events of changes in application’s 
environment


Also, it should satisfy the following requirements:


  *   Minimize the amount of configuration for common application parts, i.e. 
reuse existing configuration parts and add only the differences specific to the 
application.

  *   Allow different deployment tools to be used with the same markup 
constructs, i.e. provide a high-level abstraction over the underlying tools 
(heat, shell, chef, puppet, etc.)

  *   For security reasons it should NOT allow the execution of arbitrary 
operations - i.e. it should allow running only a predefined set of meaningful 
configuration actions.



So, I would suggest introducing a simple, domain-specific notation which would 
satisfy these needs:

  *   Application dependencies and configuration properties are defined 
declaratively, in a way similar to how it is done in Heat templates.

  *   Each property has special constraints and rules, allowing validation of 
the input and of the applications' relationships within the environment.

  *   The workflows are defined in an imperative way: as a sequence of actions 
or method calls. This may include assigning data variables or calling the 
workflows of other applications.

  *   All of these may be packaged in a YAML format. The example may look 
something like this [1]


The final version may become a bit more complicated, but as a starting point 
this should look fine. I suggest covering this in more detail at our next IRC 
meeting on Tuesday.


Any feedback or suggestions are appreciated.



[1] https://etherpad.openstack.org/p/murano-new-dsl-example

--
Regards,
Alexander Tivelkov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Storing license information in openstack/requirements

2014-02-17 Thread Joshua Harlow
 +1 for yaml instead of shoving all kinds of package metadata in comments.

Sent from my really tiny device...

 On Feb 17, 2014, at 4:12 PM, Sean Dague s...@dague.net wrote:
 
 Honestly, if we are going to track this, we should probably do the set
 of things that reviewers tend to do when running through these.
 
 License:
 Upstream Location:
 Ubuntu/Debian Package: Y/N? (url)
 Fedora Package: Y/N? (url)
 Suse Package: Y/N? (url)
 Last Release: Date (in case of abandonware)
 Python 3 support: Y/N? (informational only)
 
 I'd honestly stick that in a yaml file instead, and have something
 sanity check it on new requirements add.
 
-Sean
 
 On 02/17/2014 11:42 AM, Doug Hellmann wrote:
 
 
 
 On Mon, Feb 17, 2014 at 11:01 AM, Thierry Carrez thie...@openstack.org
 mailto:thie...@openstack.org wrote:
 
Hi everyone,
 
A year ago there was a discussion about doing a license inventory on
OpenStack dependencies, to check that they are compatible with our own
license and make sure any addition gets a proper license check.
 
Back then I proposed to leverage the openstack/requirements repository
to store that information, but that repository was still science-fiction
at that time. Now that it's complete and functional, I guess it's time
to revisit the idea.
 
Should we store licensing information as a comment in the *-requirements
files ? Can it be stored on the same line ? Something like:
 
oslo.messaging>=1.3.0a4  # Apache-2.0
 
 
 Tracking it in that file makes sense; I'm not sure about the proposed
 syntax. Do we need to track any other info? Do we need to process the
 license automatically to generate a report or something, where we would
 want to know that the value was a license string and not a note of some
 other sort?
 
 It does look like the requirements file parser will ignore trailing
 comments.
 
 Doug
 
 
 
--
Thierry Carrez (ttx)
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 -- 
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][glance] Question about evacuate with no shared storage..

2014-02-21 Thread Joshua Harlow
That requires ssh (or some tunnel/other RPC?) connections from-to all
hypervisors to work correctly??

Is that allowed in your organization (headless ssh keys from-to all
hypervisors)?

Isn't that a huge security problem if someone manages to break out of a VM
and get access to those keys?

If I was a hacker and I could initiate those calls, bitcoin mining +1 ;)

-Original Message-
From: Joe Gordon joe.gord...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Friday, February 21, 2014 at 9:38 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][glance] Question about evacuate with
no shared storage..

On Thu, Feb 20, 2014 at 9:01 PM, Sangeeta Singh sin...@yahoo-inc.com
wrote:
 Hi,

 At my organization we do not use shared storage for VM disks but need to
 evacuate VMs from a HV that is down or having problems to another HV. The
 evacuate command only allows the evacuated VM to have the base image. What I
 am interested in is to create a snapshot of the VM on the down HV and then
 be able to use the evacuate command by specifying the snapshot for the
 image.

libvirt supports live migration without any shared storage. TripleO
has been testing it out using this patch
https://review.openstack.org/#/c/74600/


 Has anyone had such a use case? Is there a command that uses snapshots in
 this way to recreate VM on a new HV.

 Thanks for the pointers.

 Sangeeta

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-26 Thread Joshua Harlow
So this design is starting to look pretty familiar to a what we have in 
taskflow.

Any reason why it can't just be used instead?

https://etherpad.openstack.org/p/TaskFlowWorkerBasedEngine

This code is in a functional state right now, using kombu (for the moment, 
until oslo.messaging becomes py3 compliant).

The concept of an engine which puts messages on a queue for a remote executor is 
in fact exactly what taskflow is doing (the remote executor/worker will 
then respond when it is done and the engine will then initiate the next piece 
of work to do) in the above listed etherpad (and which is implemented).

Is it the case that in mistral the engine will be maintaining the 
'orchestration' of the workflow during the lifetime of that workflow? In the 
case of mistral what is an engine server? Is this a server that has engines in 
it (where each engine is 'orchestrating' the remote/local workflows and 
monitoring and recording the state transitions and data flow that is 
occurring)? The details @ 
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process 
seem to be already what taskflow provides via its engine object; creating an 
application which runs engines, where those engines initiate workflows, is made 
to be dead simple.

From previous discussions with the mistral folks it seems like the overlap 
here is getting bigger and bigger, which seems to be bad (and means something is 
broken/wrong). In fact most of the concepts that you have blueprints for have 
already been completed in taskflow (data-flow, the engine being disconnected from 
the rest api…) as well as ones you don't have listed (resumption, reversion…).

What can we do to fix this situation?

-Josh

From: Nikolay Makhotkin 
nmakhot...@mirantis.commailto:nmakhot...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, February 25, 2014 at 11:30 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to 
oslo.messaging

Looks good. Thanks, Winson!

Renat, What do you think?


On Wed, Feb 26, 2014 at 10:00 AM, W Chan 
m4d.co...@gmail.commailto:m4d.co...@gmail.com wrote:
The following link is the google doc of the proposed engine/executor message 
flow architecture.  
https://drive.google.com/file/d/0B4TqA9lkW12PZ2dJVFRsS0pGdEU/edit?usp=sharing

The diagram on the right is the scalable engine where one or more engines send 
requests over a transport to one or more executors.  The executor client, 
transport, and executor server follow the RPC client/server design pattern 
(https://github.com/openstack/oslo.messaging/tree/master/oslo/messaging/rpc) 
in oslo.messaging.

The diagram represents the local engine.  In reality, it's following the same 
RPC client/server design pattern.  The only difference is that it'll be 
configured to use a fake RPC backend driver 
(https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_drivers/impl_fake.py). 
The fake driver uses in-process queues 
(http://docs.python.org/2/library/queue.html#module-Queue) shared between 
a pair of engine and executor.
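As a rough, self-contained sketch of that pattern (names are made up, and this 
assumes oslo.messaging's fake driver and blocking executor behave as documented):

    import threading
    import time

    from oslo.config import cfg
    from oslo import messaging

    transport = messaging.get_transport(cfg.CONF, url='fake://')
    target = messaging.Target(topic='mistral_executor', server='executor-1')

    class ExecutorEndpoint(object):
        def run_task(self, ctxt, task_id):
            # in Mistral this would run the task's action
            return 'task %s done' % task_id

    server = messaging.get_rpc_server(transport, target, [ExecutorEndpoint()],
                                      executor='blocking')
    # The blocking executor processes messages inside start(), so run it in a
    # background thread for this in-process demo.
    worker = threading.Thread(target=server.start)
    worker.daemon = True
    worker.start()
    time.sleep(0.5)  # give the listener a moment to come up (demo only)

    engine_client = messaging.RPCClient(transport, target)
    print(engine_client.call({}, 'run_task', task_id='42'))
    server.stop()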

The following are the stepwise changes I will make.
1) Keep the local and scalable engine structure intact.  Create the Executor 
Client at ./mistral/engine/scalable/executor/client.py.  Create the Executor 
Server at ./mistral/engine/scalable/executor/service.py and implement the task 
operations under ./mistral/engine/scalable/executor/executor.py.  Delete 
./mistral/engine/scalable/executor/executor.py.  Modify the launcher 
./mistral/cmd/task_executor.py.  Modify ./mistral/engine/scalable/engine.py to 
use the Executor Client instead of sending the message directly to rabbit via 
pika.  The sum of this is the atomic change that keeps existing structure and 
without breaking the code.
2) Remove the local engine. 
https://blueprints.launchpad.net/mistral/+spec/mistral-inproc-executor
3) Implement versioning for the engine.  
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-versioning
4) Port abstract engine to use oslo.messaging and implement the engine client, 
engine server, and modify the API layer to consume the engine client. 
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process.

Winson


On Mon, Feb 24, 2014 at 8:07 PM, Renat Akhmerov 
rakhme...@mirantis.commailto:rakhme...@mirantis.com wrote:

On 25 Feb 2014, at 02:21, W Chan 
m4d.co...@gmail.commailto:m4d.co...@gmail.com wrote:

Renat,

Regarding your comments on change https://review.openstack.org/#/c/75609/, I 
don't think the port to oslo.messaging is just a swap from pika to 
oslo.messaging.  OpenStack services, as I understand, are usually implemented as 
an RPC client/server over a messaging transport.  Sync vs async calls are done 
via the RPC client call and cast 

[openstack-dev] [State-Management] Agenda for meeting (tommorow) at 2000 UTC

2014-02-26 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in
#openstack-meeting on Thursdays, 2000 UTC. The next meeting is tomorrow,
2014-02-27!!! 

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow
Docs: http://docs.openstack.org/developer/taskflow


## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Discuss 0.2 progress, features, bugs, release date...
- Discuss attending oslo meetings (perhaps collapsing this meeting into
that)?
- Discuss what might be possible/desirable/useful for 0.3.
- Discuss about any other potential new use-cases for said library.
- Discuss about any other ideas, reviews needing help, questions and
answers (and more!).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-27 Thread Joshua Harlow
+1

I'd rather have good and supported clients that satisfy people's needs
than nit-pick or remove v3 or something else...

Forward progress is good so I'll throw my hat in for v3, but I doubt many
people that use the APIs care (as long as their friendly client library
still works).

Make it discoverable, let said person's library of choice find a matching
version (or not find one), and profit.

-Original Message-
From: Monty Taylor mord...@inaugust.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, February 27, 2014 at 3:30 PM
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Future of the Nova API

Sorry for the top post. I was asked to look at this thread and toss in
my €0.02 but I do not believe I could possibly read this whole thread -
I'm too jet lagged. So...

As a current large-scale production consumer of the API, I don't care at
all as long as python-novaclient keeps working at the Python Library API
level. I do  not care about the REST api specifics at all. I would
imagine that most Java folks only care that jclouds works. I would
imagine that most Ruby users would only care that fog works. Since
jclouds already supports a billion apis, I doubt supporting v2 and v3
would be hard for them.

AND

As long as the v2/v3 switch is discoverable by the library and I don't,
as a consumer of the library, need to know about it - so that
python-novaclient will continue to support client operations on both
REST versions - I'm fine with that - because I want to continue to be
able to operate the old Diablo-based HP cloud, the new trunk HP cloud,
the trunk Rackspace cloud and the TripleO cloud with the same scripts.

That's what I want. I'm sure other people want other things.

On 02/24/2014 07:50 AM, Christopher Yeoh wrote:
 Hi,

 There has recently been some speculation around the V3 API and whether
 we should go forward with it or instead backport many of the changes
 to the V2 API. I believe that the core of the concern is the extra
 maintenance and test burden that supporting two APIs means and the
 length of time before we are able to deprecate the V2 API and return
 to maintaining only one (well two including EC2) API again.

 This email is rather long so here's the TL;DR version:

 - We want to make backwards incompatible changes to the API
and whether we do it in-place with V2 or by releasing V3
we'll have some form of dual API support burden.
- Not making backwards incompatible changes means:
  - retaining an inconsistent API
  - not being able to fix numerous input validation issues
  - have to forever proxy for glance/cinder/neutron with all
the problems that entails.
- Backporting V3 infrastructure changes to V2 would be a
  considerable amount of programmer/review time

 - The V3 API as-is has:
- lower maintenance
- is easier to understand and use (consistent).
- Much better input validation which is baked-in (json-schema)
  rather than ad-hoc and incomplete.

 - Whilst we have existing users of the API we also have a lot more
users in the future. It would be much better to allow them to use
the API we want to get to as soon as possible, rather than trying
to evolve the V2 API and forcing them along the transition that they
could otherwise avoid.

 - We already have feature parity for the V3 API (nova-network being
the exception due to the very recent unfreezing of it), novaclient
support, and a reasonable transition path for V2 users.

 - Proposed way forward:
- Release the V3 API in Juno with nova-network and tasks support
- Feature freeze the V2 API when the V3 API is released
  - Set the timeline for deprecation of V2 so users have a lot
of warning
  - Fallback for those who really don't want to move after
deprecation is an API service which translates between V2 and V3
requests, but removes the dual API support burden from Nova.

 End TL;DR.


 Although its late in the development cycle I think its important to
 discuss this now rather than wait until the summit because if we go
 ahead with the V3 API there is exploratory work around nova-network
 that we would like to do before summit and it also affects how we look
 at significant effort applying changes to the V2 API now. I'd also
 prefer to hit the ground running with work we know we need to do in Juno
 as soon as it opens rather than wait until the summit has finished.

 Firstly I'd like to step back a bit and ask the question whether we
 ever want to fix up the various problems with the V2 API that involve
 backwards incompatible changes. These range from inconsistent naming
 through the urls and data expected and returned, to poor and
 inconsistent input validation and removal of all the proxying Nova
 does to cinder, glance and neutron. I believe the answer to this is
 yes - 

Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-28 Thread Joshua Harlow
Sounds good,

Let's connect; the value of central oslo-connected projects is that shared 
libraries == share the pain. Duplicating features and functionality is always 
more pain. In the end we are a community, not silos, so it seems like before 
mistral goes down the path of duplicating more and more features (I understand 
the desire to POC mistral and learn what mistral wants to become, and all that) 
we should start down the path of working together. I personally am worried that 
mistral will apply for incubation and then this question will come up 
(mistral was doing a POC, kept on doing the POC, never came back to using 
common libraries, and then gets asked why this happened).

I'd like to make us all successful, and as an old saying goes,

“A single twig breaks, but the bundle of twigs is strong”, openstack needs to 
be a cohesive bundle and not a single twig ;)

From: Renat Akhmerov rakhme...@mirantis.commailto:rakhme...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Friday, February 28, 2014 at 6:31 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to 
oslo.messaging

Hi Joshua,

Sorry, I’ve been very busy for the last couple of days and didn’t respond 
quickly enough.

Well, first of all, it’s my bad that I’ve not been following TaskFlow progress 
for a while and, honestly, I just need to get more info on the current TaskFlow 
status. So I’ll do that and get back to you soon. As you know, there were 
reasons why we decided to go this path (without using TaskFlow) but I’ve always 
thought we will be able to align our efforts as we move forward once 
requirements and design of Mistral become more clear. I really want to use 
TaskFlow for Mistral implementation. We just need to make sure that it will 
bring more value than pain (sorry if it sounds harsh).

Thanks for your feedback and this info. We’ll get in touch with you soon.

Renat Akhmerov
@ Mirantis Inc.



On 27 Feb 2014, at 03:22, Joshua Harlow 
harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com wrote:

So this design is starting to look pretty familiar to a what we have in 
taskflow.

Any reason why it can't just be used instead?

https://etherpad.openstack.org/p/TaskFlowWorkerBasedEngine

This code is in a functional state right now, using kombu (for the moment, 
until oslo.messaging becomes py3 compliant).

The concept of a engine which puts messages on a queue for a remote executor is 
in-fact exactly the case taskflow is doing (the remote exeuctor/worker will 
then respond when it is done and the engine will then initiate the next piece 
of work to do) in the above listed etherpad (and which is implemented).

Is it the case that in mistral the engine will be maintaining the 
'orchestration' of the workflow during the lifetime of that workflow? In the 
case of mistral what is an engine server? Is this a server that has engines in 
it (where each engine is 'orchestrating' the remote/local workflows and 
monitoring and recording the state transitions and data flow that is 
occurring)? The details @ 
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process
 seems to be already what taskflow provides via its engine object, creating a 
application which runs engines and those engines initiate workflows is made to 
be dead simple.

From previous discussions with the mistral folks it seems like the overlap 
here is getting more and more, which seems to be bad (and means something is 
broken/wrong). In fact most of the concepts that u have blueprints for have 
already been completed in taskflow (data-flow, engine being disconnected from 
the rest api…) and ones u don't have listed (resumption, reversion…).

What can we do to fix this situation?

-Josh

From: Nikolay Makhotkin 
nmakhot...@mirantis.commailto:nmakhot...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, February 25, 2014 at 11:30 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to 
oslo.messaging

Looks good. Thanks, Winson!

Renat, What do you think?


On Wed, Feb 26, 2014 at 10:00 AM, W Chan 
m4d.co...@gmail.commailto:m4d.co...@gmail.com wrote:
The following link is the google doc of the proposed engine/executor message 
flow architecture.  
https://drive.google.com/file/d/0B4TqA9lkW12PZ2dJVFRsS0pGdEU/edit?usp=sharing

The diagram on the right is the scalable engine where one or more engine sends 
requests over a transport to one or more executors.  The executor client, 
transport

Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-28 Thread Joshua Harlow
Convection? Afaik you guys are building convection (convection was just an idea, 
I see mistral as the POC/impl) ;)

https://wiki.openstack.org/wiki/Convection#NOTICE:_Similar_project_-.3E_Mistral

So questions around taskflow:

  1.  Correct, you put it in your task; there were previous ideas/work done by the 
team @ https://etherpad.openstack.org/p/BrainstormFlowConditions but from 
previous people that have built said systems it was determined that there 
actually wasn't much use for conditionals (yet). As for expression 
evaluation, I am not sure what that means here; being a library, any type of 
expression evaluation is just whatever you can imagine in python. Conditional 
tasks (and such) being managed by taskflow's engines is something we can 
reconsider and might even be possible, but this is imho dangerous territory that 
is being approached; expression evaluation plus conditional branching and loops 
is basically a language specification ;)
  2.  I don't see taskflow managing a catalog (currently); that seems out of 
scope for a library that provides the execution and resumption parts (any consumer 
of taskflow should be free to define and organize their catalog as they choose).
  3.  Negative, taskflow is an execution and state-management library (not a 
full framework imho) that helps build the upper layers that services like 
mistral can use (or nova, or glance, or…). I don't feel it's the right place to 
have taskflow force a DSL onto people, since the underlying primitives that can 
form an upper-level DSL are more service/app level choices (heat has their DSL, 
mistral has theirs, both are fine, and both likely can take advantage of the 
same taskflow execution and state-management primitives in their 
service).
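For a feel of what the code-based definition looks like, here is a tiny sketch 
using taskflow's linear_flow/engines API (the tasks themselves are made-up 
stand-ins a consumer would write):

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    class CallRestService(task.Task):
        def execute(self, url):
            # consumer-defined work; taskflow only runs and tracks it
            return 'GET %s -> 200' % url

    class SendEmail(task.Task):
        def execute(self, response):
            print('emailing result: %s' % response)

    flow = linear_flow.Flow('notify-on-result').add(
        CallRestService(provides='response'),
        SendEmail(),
    )
    engines.load(flow, store={'url': 'http://example.com'}).run()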

Hope that helps :)

-Josh

From: W Chan m4d.co...@gmail.commailto:m4d.co...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Friday, February 28, 2014 at 12:02 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to 
oslo.messaging

All,
This is a great start.  I think the sooner we have this discussion the better.  
Any uncertainty in the direction/architecture here is going to stall progress.  
How about Convection?  What's the status of the Convection project and where 
it's heading?  Should we have similar discussion with the contributors of that 
project?

Joshua,
I have a few questions about TaskFlow.
1) How does it handle conditional loop and expression evaluation for decision 
branching?  I've looked at the Taskflow wiki/code briefly and it's not obvious. 
 I assume it would be logic that user will embed within a task?
2) How about predefined catalog of standard tasks (i.e. REST call, SOAP call, 
Email task, etc.)?  Is that within the scope of Taskflow or up to TaskFlow 
consumers like Mistral?
3) Does TaskFlow have its own DSL?  The examples provided are mostly code based.

Thanks.
Winson




On Fri, Feb 28, 2014 at 10:54 AM, Joshua Harlow 
harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com wrote:
Sounds good,

Lets connect, the value of central oslo connected projects is that shared 
libraries == share the pain. Duplicating features and functionality is always 
more pain. In the end we are a community, not silos, so it seems like before 
mistral goes down the path of duplicating more and more features (I understand 
the desire to POC mistral and learn what mistral wants to become, and all that) 
that we should start the path to working together. I personally am worried that 
mistral will start to apply for incubation and then the question will come up 
as to this (mistral was doing POC, kept on doing POC, never came back to using 
common libraries, and then gets asked why this happened).

I'd like to make us all successful, and as a old saying goes,

“A single twig breaks, but the bundle of twigs is strong”, openstack needs to 
be a cohesive bundle and not a single twig ;)

From: Renat Akhmerov rakhme...@mirantis.commailto:rakhme...@mirantis.com

Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Friday, February 28, 2014 at 6:31 AM

To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to 
oslo.messaging

Hi Joshua,

Sorry, I’ve been very busy for the last couple of days and didn’t respond 
quickly enough.

Well, first of all, it’s my bad that I’ve not been following TaskFlow progress 
for a while and, honestly, I just need to get more info on the current TaskFlow 
status. So I’ll do that and get back to you soon. As you know, there were 
reasons why we decided to go this path

[openstack-dev] [State-Management] Agenda for meeting (tommorow) at 2000 UTC

2014-03-05 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in
#openstack-meeting on Thursdays, 2000 UTC. The next meeting is tomorrow,
2014-03-06!!! 

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow
Docs: http://docs.openstack.org/developer/taskflow

## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Any open reviews/questions/discussion needed for for 0.2
- Integration progress, help, furthering integration efforts.
- Possibly discuss about worker capability discovery.
- Discuss about any other potential new use-cases for said library.
- Discuss about any other ideas, reviews needing help, questions and
answers (and more!).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Crack at a Real life workflow

2014-03-06 Thread Joshua Harlow
That sounds a little similar to what taskflow is trying to do (I am of course 
biased).

I agree with letting the native language implement the basics (expressions, 
assignment...) and then building the domain on top of that. It just seems more 
natural IMHO, and is similar to what LINQ (in C#) has done.

My 3 cents.

Sent from my really tiny device...

 On Mar 6, 2014, at 5:33 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 
 DSL's are tricky beasts. On one hand I like giving a tool to
 non-developers so they can do their jobs, but I always cringe when the
 DSL reinvents the wheel for basic stuff (compound assignment
 expressions, conditionals, etc).
 
 YAML isn't really a DSL per se, in the sense that it has no language
 constructs. As compared to a Ruby-based DSL (for example) where you
 still have Ruby under the hood for the basic stuff and extensions to the
 language for the domain-specific stuff.
 
 Honestly, I'd like to see a killer object model for defining these
 workflows as a first step. What would a python-based equivalent of that
 real-world workflow look like? Then we can ask ourselves, does the DSL
 make this better or worse? Would we need to expose things like email
 handlers, or leave that to the general python libraries?
 
 $0.02
 
 -S
 
 
 
 On 03/05/2014 10:50 PM, Dmitri Zimine wrote:
 Folks, 
 
 I took a crack at using our DSL to build a real-world workflow. 
 Just to see how it feels to write it. And how it compares with
 alternative tools. 
 
 This one automates a page from OpenStack operation
 guide: 
 http://docs.openstack.org/trunk/openstack-ops/content/maintenance.html#planned_maintenance_compute_node
  
 
 Here it is https://gist.github.com/dzimine/9380941
 or here http://paste.openstack.org/show/72741/
 
 I have a bunch of comments, implicit assumptions, and questions which
 came to mind while writing it. Want your and other people's opinions on it. 
 
  But gist and paste don't let you annotate lines!!! :(
 
 May be we can put it on the review board, even with no intention to
 check in,  to use for discussion? 
 
 Any interest?
 
 DZ 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vmware] Locking: A can of worms

2014-03-07 Thread Joshua Harlow
The Tooz folks have been thinking about this problem (as have I) for a
little while.

I've started something like: https://review.openstack.org/#/c/71167/

Also: https://wiki.openstack.org/wiki/StructuredWorkflowLocks

Perhaps we can get more movement on that (sorry I haven't had tons of time
to move forward on that review).

Something generic (aka a lock provider that can use different locking
backends to satisfy different desired lock 'requirements') might be useful
for everyone to help avoid these problems? Or at least allow individual
requirements for locks to be managed by a well supported library.

-Original Message-
From: Matthew Booth mbo...@redhat.com
Organization: Red Hat
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Friday, March 7, 2014 at 8:53 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][vmware] Locking: A can of worms

We need locking in the VMware driver. There are 2 questions:

1. How much locking do we need?
2. Do we need single-node or multi-node locking?

I believe these are quite separate issues, so I'm going to try not to
confuse them. I'm going to deal with the first question first.

In reviewing the image cache ageing patch, I came across a race
condition between cache ageing and spawn(). One example of the race is:

Cache Ageing                        spawn()
* Check timestamps
* Delete timestamp
                                    * Check for image cache directory
* Delete directory
                                    * Use image cache directory

This will cause spawn() to explode. There are various permutations of
this. For example, the following are all possible:

* A simple failure of spawn() with no additional consequences.

* Calling volumeops.attach_disk_to_vm() with a vmdk_path that doesn't
exist. It's not 100% clear to me that ReconfigVM_Task will throw an
error in this case, btw, which would probably be bad.

* Leaving a partially deleted image cache directory which doesn't
contain the base image. This would be really bad.

The last comes about because recursive directory delete isn't atomic,
and may partially succeed, which is a tricky problem. However, in
discussion, Gary also pointed out that directory moves are not atomic
(see MoveDatastoreFile_Task). This is positively nasty. We already knew
that spawn() races with itself to create an image cache directory, and
we've hit this problem in practice. We haven't fixed the race, but we do
manage it. The management relies on the atomicity of a directory move.
Unfortunately it isn't atomic, which presents the potential problem of
spawn() attempting to use an incomplete image cache directory. We also
have the problem of 2 spawns using a linked clone image racing to create
the same resized copy.

We could go through all of the above very carefully to assure ourselves
that we've found all the possible failure paths, and that in every case
the problems are manageable and documented. However, I would place a
good bet that the above is far from a complete list, and we would have
to revisit it in its entirety every time we touched any affected code.
And that would be a lot of code.

We need something to manage concurrent access to resources. In all of
the above cases, if we make the rule that everything which uses an image
cache directory must hold a lock on it whilst using it, all of the above
problems go away. Reasoning about their correctness becomes the
comparatively simple matter of ensuring that the lock is used correctly.
Note that we need locking in both the single and multi node cases,
because even single node is multi-threaded.
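
As a rough illustration of that rule (a sketch only; the names here are made
up, and this covers the single-node case only), a per-directory lock registry
might look like:

import contextlib
import threading

# Sketch: one lock per image cache directory, single node only.
_cache_locks = {}
_registry_lock = threading.Lock()

@contextlib.contextmanager
def image_cache_lock(dirname):
    # Hold a per-directory lock while touching an image cache directory.
    with _registry_lock:
        lock = _cache_locks.setdefault(dirname, threading.Lock())
    with lock:
        yield

Both spawn() and the cache ageing job would then wrap their directory access
in "with image_cache_lock(cache_dir): ...", which reduces the reasoning to
whether the lock is used correctly.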

The next question is whether that locking needs to be single node or
multi node. Specifically, do we currently, or do we plan to, allow an
architecture where multiple Nova nodes access the same datastore
concurrently? If we do, then we need to find a distributed locking
solution. Ideally this would use the datastore itself for lock
mediation. Failing that, apparently this tool is used elsewhere within
the project:

http://zookeeper.apache.org/doc/trunk/zookeeperOver.html

That would be an added layer of architecture and deployment complexity,
but if we need it, it's there.
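
For the multi-node case, a distributed lock via ZooKeeper can be quite small;
here is a sketch using the kazoo client library (the host, path and identifier
are made up):

from kazoo.client import KazooClient

# Sketch: serialize access to one image cache directory across nodes.
zk = KazooClient(hosts='zk1.example.com:2181')
zk.start()

lock = zk.Lock('/nova/image-cache/some-image-id', 'compute-1')
with lock:      # blocks until the lock is acquired
    pass        # ...use or age the image cache directory...

zk.stop()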

If we can confidently say that 2 Nova instances should never be
accessing the same datastore (how about hot/warm/cold failover?), we can
use Nova's internal synchronisation tools. This would simplify matters
greatly!

I think this is one of those areas which is going to improve both the
quality of the driver, and the confidence of reviewers to merge changes.
Right now it takes a lot of brain cycles to work through all the various
paths of a race to work out if any of them are really bad, and it has to
be repeated every time you touch the code. A little up-front effort will
make a whole class of problems go away.

Matt
-- 
Matthew Booth, RHCA, RHCSS
Red Hat 

Re: [openstack-dev] [Mistral] Crack at a Real life workflow

2014-03-07 Thread Joshua Harlow
@ Dmitri ;)

Sure, let's catch up; I'm almost always on IRC, feel free to ping me.

Or jump into #openstack-state-management (for more good times).

From: Dmitri Zimine d...@stackstorm.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, March 6, 2014 at 10:36 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Crack at a Real life workflow

Folks, thanks for the input!

@Joe:

Hopefully Renat covered the differences. Yet I am interested in how the same
workflow can be expressed as Salt state(s) or Ansible playbooks. Can you (or
someone else who knows them well) take a stab at it?


@Joshua
I am still new to Mistral and learning, but I think it _is_ relevant to
TaskFlow. Should we meet so you can help me catch up? Thanks!

@Sandy:
Aaahr, I used the D word?! :) I keep on arguing that a YAML workflow
representation doesn't make a DSL.

And YES to the object model first to define the workflow, with
YAML/JSON/Python DSL/what-else as a syntax to build it. We are having these
discussions in another thread and in reviews.

Basically, in order to make a grammar expressive enough to work across a
web interface, we essentially end up writing a crappy language. Instead,
we should focus on the callback hooks to something higher level to deal
with these issues. Mistral should just say "I'm done this task, what
should I do next?" and the callback service can make decisions on where
in the graph to go next.

There must be some misunderstanding. Mistral _does_ follow the AWS / BPEL
engines' approach: it does both the "I'm done this task, what should I do
next?" part (executor) and the callback service part (an engine that
coordinates the flow and keeps the state), like the decider and activity
workers in AWS Simple Workflow.

The engine maintains the state. Executors run tasks. The object model describes
the workflow as a graph of tasks with transitions, conditions, etc. YAML is one
way to define a workflow. Nothing controversial :)
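
To make that concrete, here is a rough (purely illustrative) Python shape of
such an object model - not Mistral's actual classes - where the YAML is just
one serialization that builds these objects:

# Illustrative sketch only - not Mistral's actual object model.
class Task(object):
    def __init__(self, name, action, requires=(), on_success=None):
        self.name = name              # unique task name
        self.action = action          # what the executor runs
        self.requires = requires      # inputs this task needs
        self.on_success = on_success  # transition: next task's name


class Workflow(object):
    def __init__(self, name):
        self.name = name
        self.tasks = {}

    def add(self, task):
        self.tasks[task.name] = task
        return self


# The same graph could equally be built from YAML, JSON or plain Python:
wf = Workflow("evacuate-host")
wf.add(Task("disable-service", action="nova.disable-service",
            on_success="migrate-vms"))
wf.add(Task("migrate-vms", action="nova.live-migrate", requires=("host",)))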

@all:

Whether one writes Python code or uses YAML? Depends on the user. There are good
arguments for YAML. But if it's crappy, it loses. We want to see how it feels
to write it. To me, mixed feelings so far, but promising. What do you guys
think?

Comments welcome here:
https://github.com/dzimine/mistral-workflows/commit/d8c4a8c845e9ca49f6ea94362cef60489f2a46a3


DZ


On Mar 6, 2014, at 10:41 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:



On 03/06/2014 02:16 PM, Renat Akhmerov wrote:
IMO, it doesn't look bad (sorry, I'm biased too) even now. Keep in mind this is
not the final version; we keep making it more expressive and concise.

As for the killer object model, it's not 100% clear what you mean. As always,
the devil is in the details. This is a web service, with all the consequences.
I assume what you call "object model" here is nothing else but a Python binding
for the web service, which we're also working on. The custom Python logic you
mentioned will also be easy to integrate. Like I said, it's still a pilot stage
of the project.

Yeah, the REST aspect is where the tricky part comes in :)

Basically, in order to make a grammar expressive enough to work across a
web interface, we essentially end up writing a crappy language. Instead,
we should focus on the callback hooks to something higher level to deal
with these issues. Mistral should just say "I'm done this task, what
should I do next?" and the callback service can make decisions on where
in the graph to go next.

Likewise with things like sending emails from the backend. Mistral
should just call a webhook and let the receiver deal with active
states as they choose.

Which is why modelling this stuff in code is almost always better and
why I'd lean towards the TaskFlow approach to the problem. They're
tackling this from a library perspective first and then (possibly)
turning it into a service. Just seems like a better fit. It's also the
approach taken by Amazon Simple Workflow and many BPEL engines.

-S


Renat Akhmerov
@ Mirantis Inc.



On 06 Mar 2014, at 22:26, Joshua Harlow harlo...@yahoo-inc.com wrote:

That sounds a little similar to what taskflow is trying to do (I am of course 
biased).

I agree with letting the native language implement the basics (expressions,
assignment...) and then building the domain on top of that. Just seems more
natural IMHO, and is similar to what LINQ (in C#) has done.

My 3 cents.

Sent from my really tiny device...

On Mar 6, 2014, at 5:33 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:

DSLs are tricky beasts. On one hand I like giving a tool to
non-developers so they can do their jobs, but I always cringe when the
DSL reinvents the wheel for basic stuff (compound assignment

Re: [openstack-dev] [Mistral] Crack at a Real life workflow

2014-03-07 Thread Joshua Harlow
 wrote:



On 03/06/2014 02:16 PM, Renat Akhmerov wrote:
IMO, it doesn't look bad (sorry, I'm biased too) even now. Keep in mind this is
not the final version; we keep making it more expressive and concise.

As for the killer object model, it's not 100% clear what you mean. As always,
the devil is in the details. This is a web service, with all the consequences.
I assume what you call "object model" here is nothing else but a Python binding
for the web service, which we're also working on. The custom Python logic you
mentioned will also be easy to integrate. Like I said, it's still a pilot stage
of the project.

Yeah, the REST aspect is where the tricky part comes in :)

Basically, in order to make a grammar expressive enough to work across a
web interface, we essentially end up writing a crappy language. Instead,
we should focus on the callback hooks to something higher level to deal
with these issues. Mistral should just say "I'm done this task, what
should I do next?" and the callback service can make decisions on where
in the graph to go next.

Likewise with things like sending emails from the backend. Mistral
should just call a webhook and let the receiver deal with active
states as they choose.

Which is why modelling this stuff in code is almost always better and
why I'd lean towards the TaskFlow approach to the problem. They're
tackling this from a library perspective first and then (possibly)
turning it into a service. Just seems like a better fit. It's also the
approach taken by Amazon Simple Workflow and many BPEL engines.

-S


Renat Akhmerov
@ Mirantis Inc.



On 06 Mar 2014, at 22:26, Joshua Harlow harlo...@yahoo-inc.com wrote:

That sounds a little similar to what taskflow is trying to do (I am of course 
biased).

I agree with letting the native language implement the basics (expressions,
assignment...) and then building the domain on top of that. Just seems more
natural IMHO, and is similar to what LINQ (in C#) has done.

My 3 cents.

Sent from my really tiny device...

On Mar 6, 2014, at 5:33 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:

DSLs are tricky beasts. On one hand I like giving a tool to
non-developers so they can do their jobs, but I always cringe when the
DSL reinvents the wheel for basic stuff (compound assignment
expressions, conditionals, etc.).

YAML isn't really a DSL per se, in the sense that it has no language
constructs, as compared to a Ruby-based DSL (for example) where you
still have Ruby under the hood for the basic stuff and extensions to the
language for the domain-specific stuff.

Honestly, I'd like to see a killer object model for defining these
workflows as a first step. What would a Python-based equivalent of that
real-world workflow look like? Then we can ask ourselves, does the DSL
make this better or worse? Would we need to expose things like email
handlers, or leave that to the general Python libraries?

$0.02

-S



On 03/05/2014 10:50 PM, Dmitri Zimine wrote:
Folks,

I took a crack at using our DSL to build a real-world workflow.
Just to see how it feels to write it. And how it compares with
alternative tools.

This one automates a page from the OpenStack operations
guide:
http://docs.openstack.org/trunk/openstack-ops/content/maintenance.html#planned_maintenance_compute_node

Here it is https://gist.github.com/dzimine/9380941
or here http://paste.openstack.org/show/72741/

I have a bunch of comments, implicit assumptions, and questions which
came to mind while writing it. Want your and other people's opinions on it.

But gist and paste don't let you annotate lines!!! :(

Maybe we can put it on the review board, even with no intention to
check it in, to use for discussion?

Any interest?

DZ


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Mistral] Crack at a Real life workflow

2014-03-08 Thread Joshua Harlow
 that place. You can have your own block constructs. You would not have
such a level of control with any other language.

 4. Domain-specific is usually better than general-purpose


Q: Why YAML?

A:
 1. There are 2 possible alternatives: use some existing data format (XML, 
JSON, YAML) or create your own. And your own format for your own language means 
your own parser. And writing a parser for language of that complexity is a 
difficult task that would take months of work. From standard formats YAML is 
clearly most readable one

 2. There are a lot of XML languages. There is the BPEL XML language for
workflows. So there is nothing unusual in having a programming language encoded
in a data serialization format. So why not use YAML for the same purpose,
considering it is more readable than XML?

 3. MuranoPL is not only code but also declarations - classes, properties,
inheritance, methods, contracts, etc. YAML is really good for declarations.

 4. As for the code, there is not much difference from having your own parser.
Sure, you need to prepend a dash (-) before each code line, but in Java/C you
append a semicolon after each line and everyone is okay with that. In Python you
write var = value, in MuranoPL $var: value. You use a colon for block constructs
both in Python and in MuranoPL. As you can see, there are fewer differences from
normal Python code than one might expect from not using a custom parser. As soon
as you start writing in MuranoPL you forget that this is YAML and treat it like
a normal language.

5. Because of YAML, all JSON/YAML data structures that look so nice and
readable in YAML become first-class citizens in MuranoPL. You can easily embed
a Heat template or a REST API JSON body in DSL code. This is even better than
in Python.

6. Everyone can take a MuranoPL class and see what properties it exposes and
what methods it has, as all you need is a standard YAML parser.

7. YAML is still a data serialization format, so you can work with DSL code as
with regular data - store it in a database, insert code lines and methods,
convert it to other formats, etc.

8. You can customize many things at the YAML parser level before it reaches the
DSL. The DSL does not deal directly with YAML, only with the deserialized
version of it. Thus you can control where those YAMLs are located. You can
store them in Glance, a database, or the file system, or generate them on the
fly without the DSL even noticing.

9. With YAML you have less to explain (people already know it) and more
tooling and IDE support.


Q: Isn't a DSL of that power subject to DoS attacks, resource starvation
attacks, etc.?

A: This is a really good question, because nobody has asked it yet :) And the
answer is NO. Here is why:

With MuranoPL you have complete control over everything. You can have a time
quota for DSL code (remember the 30-second quota for PHP scripts in a typical
web setup?). Or you can limit DSL scripts to some reasonable number of executed
instructions. Or count raw CPU time for the script. You can make all loop
constructs have an upper limit on iterations. You can do the same with YAQL
functions. You can enforce timeouts on all I/O operations. MuranoPL uses green
threads, and you can have a lot of them (unlike regular threads), and it is
possible to limit the total number of green threads used for DSL execution. It
is even possible to limit memory usage. DSL code cannot allocate
operating-system-level resources like file handles or TCP sockets. And there is
a garbage collector, as the DSL is interpreted on top of Python.
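
A toy illustration of what such an interpreter-side quota can look like
(generic Python, nothing MuranoPL-specific):

import time

class QuotaExceeded(Exception):
    pass

def bounded(iterable, max_iterations=10000, max_seconds=30.0):
    # Wrap whatever the interpreter hands to DSL loop constructs so a
    # runaway loop hits a hard limit instead of starving the service.
    start = time.time()
    for count, item in enumerate(iterable):
        if count >= max_iterations:
            raise QuotaExceeded("loop exceeded %d iterations" % max_iterations)
        if time.time() - start > max_seconds:
            raise QuotaExceeded("loop exceeded %.0f seconds" % max_seconds)
        yield item

The engine would then evaluate a DSL loop over bounded(items) rather than over
the raw collection.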

How easy would it be to do the same with JavaScript/Lua/whatever?







On Sat, Mar 8, 2014 at 7:05 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:
What are the benefits of MuranoPL over an already established language?

I noted the following and don't quite understand why they are needed 
(reinventing a language):

- https://wiki.openstack.org/wiki/Murano/DSL/Blueprint#Extends
- https://wiki.openstack.org/wiki/Murano/DSL/Blueprint#Block_constructs
Q: Where are those looping constructs executed? How hard is it to DoS Murano
servers (by submitting jobs that loop forever)? What execution limits are
imposed? I noted that the parallel construct actually exposes the number of
green threads (isn't this an avenue for resource starvation?).
- https://wiki.openstack.org/wiki/Murano/DSL/Blueprint#Object_model

IMHO, something just doesn't seem right when the above is created; fitting a
language into YAML seems about as awkward as creating a language in XML
(XQuery [1], for example). Why was this really preferred over just Python, or
something simpler, for example [Lua, JavaScript…], that already has
language/object constructs built in and has runtimes whose security domain you
can control (Python is not a good choice to run arbitrary code in; look at how
much energy Google put into Python + App Engine and you'll see what it takes).

[1] http://en.wikipedia.org/wiki/XQuery#Examples

From: Stan Lagun sla...@mirantis.com

[openstack-dev] MuranoPL questions?

2014-03-08 Thread Joshua Harlow
To continue from other thread


Personally I believe that YAQL-based DSLs similar to (but probably simpler
than) MuranoPL can be of great value for many OpenStack projects that have DSLs
of different kinds: Murano for the App Catalog, Mistral for workflows, Heat for
HOT templates, Ceilometer for alarm & counter aggregation rules, Nova for
customizable resource scheduling, and probably many more.


Isn't Python a better host language for said DSLs than MuranoPL? I am still
pretty wary of a DSL that starts to grow so many features to encompass other
DSLs, since then it's not really a DSL but a non-DSL, and we already have one
that everyone is familiar with (Python).

Are there any good examples of MuranoPL that I can look at that take advantage
of MuranoPL classes, functions, and namespaces, which can be compared to their
Python equivalents? Has such an analysis/evaluation been done?

Sent from my really tiny device...
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MuranoPL questions?

2014-03-09 Thread Joshua Harlow
I'd be very interested in knowing the resource controls you plan to add:
memory, CPU...

I'm still trying to figure out where something like
https://github.com/istalker2/MuranoDsl/blob/master/meta/com.mirantis.murano.demoApp.DemoInstance/manifest.yaml
would be beneficial. Why not just spend effort sandboxing Lua, Python...
instead of spending effort on creating a new language and then having to
sandbox it as well... especially if you picked languages that are made to be
sandboxed from the start (not Python)...

Who is going to train people on MuranoPL and write language books and tutorials,
when the same work has already been done for 10+ years for other
languages?

I fail to see where MuranoPL is a DSL when it already contains a full language
with functions, objects, namespaces, conditionals... (what is its domain?).
Maybe I'm just confused though (quite possible, haha).

Does this not seem awkward to anyone else??

Sent from my really tiny device...

On Mar 8, 2014, at 10:44 PM, Stan Lagun sla...@mirantis.com wrote:

First of all, MuranoPL is not a host but a hosted language. It was never
intended to replace Python, and if Python can do the job it is probably better
than MuranoPL for that job.
The problem with Python is that you cannot have Python code as part of your
DSL if you need to evaluate that DSL on the server side. Using Python's eval()
is not secure, and you don't have enough control over what the evaled code is
allowed to do. MuranoPL, on the contrary, is fully sandboxed. You have absolute
control over what functions/methods/APIs are exposed to the DSL, and DSL code
can do nothing except what it is allowed to do. Besides, you typically do want
your DSL to be domain-specific, so a general-purpose language like Python can
be suboptimal.
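
To illustrate the difference (a generic sketch, not how MuranoPL or YAQL
actually work): with eval() everything in the process is reachable, while an
evaluator that walks its own parsed tree exposes only an explicit whitelist:

import ast
import operator

# Whitelist-based expression evaluator: only the operators and names we
# explicitly expose are reachable, unlike eval().
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr, names):
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Num):       # numeric literals
            return node.n
        if isinstance(node, ast.Name):      # only whitelisted names
            return names[node.id]
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("construct not allowed: %r" % node)
    return walk(ast.parse(expr, mode='eval'))

print(safe_eval("price * quantity + 5", {"price": 2, "quantity": 10}))  # 25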

I don't say MuranoPL is good for all projects; it has many Murano-specific
things, after all. In most cases you don't need all those OOP features that
MuranoPL has. But the code organization, how it uses YAML, the block structures,
and especially the YAQL expressions can be of great value to many projects.

For examples of MuranoPL classes you can browse the
https://github.com/istalker2/MuranoDsl/tree/master/meta folder. This is my
private repository that I was using to develop a PoC for the MuranoPL engine.
We are on the way to creating a production-quality implementation with unit
tests etc. in the regular Murano repositories on stackforge.


On Sun, Mar 9, 2014 at 7:33 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:
To continue from other thread


Personally I believe that YAQL-based DSLs similar to (but probably simpler
than) MuranoPL can be of great value for many OpenStack projects that have DSLs
of different kinds: Murano for the App Catalog, Mistral for workflows, Heat for
HOT templates, Ceilometer for alarm & counter aggregation rules, Nova for
customizable resource scheduling, and probably many more.


Isn't Python a better host language for said DSLs than MuranoPL? I am still
pretty wary of a DSL that starts to grow so many features to encompass other
DSLs, since then it's not really a DSL but a non-DSL, and we already have one
that everyone is familiar with (Python).

Are there any good examples of MuranoPL that I can look at that take advantage
of MuranoPL classes, functions, and namespaces, which can be compared to their
Python equivalents? Has such an analysis/evaluation been done?

Sent from my really tiny device...
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Taskflow remote worker model ftw!

2014-03-10 Thread Joshua Harlow
Hi all,


I'd just like to let everyone know about a new feature in taskflow that I think
will be beneficial to various projects (reducing the duplication of similar
code in various projects that accomplish the same feature set). The new feature
is the ability to run tasks in remote workers (the task transitions and state
persistence are still done in an 'orchestrating' engine). This means that the
engine no longer has to run tasks locally or in threads (or greenthreads) but
can run tasks on remote machines (anything that can be connected to an MQ via
kombu; TBD when this becomes oslo.messaging).


Here is a simple example that might show how this works, for folks that have
some time to try it out.


-

Pre-setup: git clone the taskflow repo and install it (in a venv or elsewhere),
and install an MQ server (RabbitMQ, for example).

-


Let's now create two basic tasks (one that says hello and one that says goodbye).


class HelloWorldTask(task.Task):
    default_provides = "hi_happened"

    def execute(self):
        LOG.info('hello world')
        return time.time()


class GoodbyeWorldTask(task.Task):
    default_provides = "goodbye_happened"

    def execute(self, hi_happened):
        LOG.info('goodbye world (hi said on %s)', hi_happened)
        return time.time()


* Notice how the GoodbyeWorldTask requires an input 'hi_happened' (which is 
produced by the HelloWorldTask).


Now let's create a workflow that combines these two.


f = linear_flow.Flow("hi-bye")
f.add(HelloWorldTask())
f.add(GoodbyeWorldTask())


Notice here that we have specified a linear runtime order (that is, hello will
be said before goodbye; this is also inherent in the dependency ordering, since
the goodbye task requires 'hi_happened' to run, and the only way to satisfy
that dependency is to run the hello world task before the goodbye task).


*  If you are wondering what the heck this is (or why it is useful to have 
these little task and flow classes) check out 
https://wiki.openstack.org/wiki/TaskFlow#Structure


Now the fun begins!


We need a worker to accept requests to run tasks, so let's create a small
function that does just that.


def run_worker():
    worker_conf = dict(MQ_CONF)
    worker_conf.update({
        # These are the available tasks that this worker has access to execute.
        'tasks': [
            HelloWorldTask,
            GoodbyeWorldTask,
        ],
    })

    # Start this up and stop it on ctrl-c
    worker = remote_worker.Worker(**worker_conf)
    runner = threading.Thread(target=worker.run)
    runner.start()
    worker.wait()

    while True:
        try:
            time.sleep(1)
        except KeyboardInterrupt:
            LOG.info("Dying...")
            worker.stop()
            runner.join()
            break


And of course we need a function that will perform the orchestration of the
remote (or local) tasks. This function starts the whole execution flow by
taking the workflow defined above and combining it with an engine that will run
the individual tasks (and transfer data between those tasks as needed).


* For those still wondering what an engine is (or what it offers) check out 
https://wiki.openstack.org/wiki/TaskFlow#Engines and 
https://wiki.openstack.org/wiki/TaskFlow/Patterns_and_Engines/Persistence#Big_Picture
 (which hopefully will make it easier to understand why the concept exists in 
the first place).


def run_engine():
    # Make some remote tasks happen
    f = lf.Flow("test")
    f.add(HelloWorldTask())
    f.add(GoodbyeWorldTask())

    # Create an in-memory storage area where intermediate results will be
    # saved (you can change this to a persistent backend if desired).
    backend = impl_memory.MemoryBackend({})
    _logbook, flowdetail = pu.temporary_flow_detail(backend)

    engine_conf = dict(MQ_CONF)
    engine_conf.update({
        # This identifies what workers are accessible via what queues, this
        # will be made better soon with reviews
        # https://review.openstack.org/#/c/75094/ or similar.
        'workers_info': {
            'work': [
                HelloWorldTask().name,
                GoodbyeWorldTask().name,
            ],
        }
    })

    LOG.info("Running workflow %s", f)

    # Now run the engine.
    e = engine.WorkerBasedActionEngine(f, flowdetail, backend, engine_conf)
    with logging_listener.LoggingListener(e, level=logging.INFO):
        e.run()

    # See the results received.
    print("Final results: %s" % (e.storage.fetch_all()))


Now once we have these two methods created, we can actually start the worker
and the engine and watch the action happen. To save you having to add a little
more boilerplate (imports and such), the code above can be found at
http://paste.openstack.org/show/73071/.


To start a worker just do the following. Download the above paste to a file 
named 'test.py' (and then modify the MQ_SERVER global to point 

Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-10 Thread Joshua Harlow
Sounds like a good idea to me.

I've never understood why we treat the DB as a LOG (keeping soft-deleted
records around) when we should just use a LOG (or similar system) to begin with
instead.

Does anyone use the feature of switching deleted == 1 back to deleted == 0? Has
this worked out for you?

Seems like some of the feedback on
https://etherpad.openstack.org/p/operators-feedback-mar14 also suggests that
this has been an operational pain point for folks ("Tool to delete things
properly" suggestions and such…).

From: Boris Pavlovic bpavlo...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Monday, March 10, 2014 at 1:29 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,
Victor Sergeyev vserge...@mirantis.com
Subject: [openstack-dev] [all][db][performance] Proposal: Get rid of soft 
deletion (step by step)

Hi stackers,

(It's a proposal for Juno.)

Intro:

Soft deletion means that records in the DB are not actually deleted, they are
just marked as deleted. To mark a record as deleted, we put the record's ID
value into the table's special deleted column.

Issue 1: Indexes & Queries
We have to add AND deleted == 0 to every query to get non-deleted records.
This produces a performance issue, because we have to add one extra column to
every index.
It also produces extra complexity in DB migrations and in building queries.

Issue 2: Unique constraints
Why do we store the ID in deleted and not True/False?
The reason is that we would like to be able to create real DB unique
constraints and avoid race conditions on insert operations.

Sample: we have a table (id, name, password, deleted) and we would like the
name column to contain only unique values.

Approach without a UC: if count(`select ... where name = name`) == 0: insert(...)
(a race, because we are able to add a new record in between)

Approach with a UC: try: insert(...) except Duplicate: ...

So to add a UC we have to add it on (name, deleted) (to be able to do
insert/delete/insert with the same name).

This also produces performance issues, because we have to use complex unique
constraints on 2 or more columns, plus extra code & complexity in DB migrations.
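
For readers who have not touched these models, a minimal SQLAlchemy sketch of
what issues 1 and 2 look like in practice (table and column names are
illustrative, not Nova's):

from sqlalchemy import (Column, Integer, String, UniqueConstraint,
                        create_engine)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Thing(Base):
    __tablename__ = 'things'
    # Issue 2: the constraint has to span (name, deleted) so that
    # insert/delete/insert with the same name keeps working.
    __table_args__ = (UniqueConstraint('name', 'deleted',
                                       name='uniq_things0name0deleted'),)
    id = Column(Integer, primary_key=True)
    name = Column(String(255))
    deleted = Column(Integer, default=0)  # holds the row's id once soft-deleted

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Issue 1: every read has to remember the extra filter (and every useful
# index has to carry the deleted column).
active = session.query(Thing).filter(Thing.name == 'demo',
                                     Thing.deleted == 0).all()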

Issue 3: Garbage collector

It is really hard to make a garbage collector that has good performance and is
generic enough to work in every case for every project.
Without a garbage collector, DevOps have to clean up records by hand (with the
risk of breaking something). If they don't clean up the DB, they will very soon
run into performance issues.

To put the most important issues in a nutshell:
1) Extra complexity in each select query & an extra column in each index
2) An extra column in each unique constraint (worse performance)
3) 2 extra columns in each table: (deleted, deleted_at)
4) A common garbage collector is required


To resolve all these issues we should just remove soft deletion.

One approach that I see is removing the deleted column from every table step by
step, probably with some code refactoring. Actually we have 3 different cases:

1) We don't use soft deleted records:
1.1) Do .delete() instead of .soft_delete()
1.2) Change queries to avoid adding the extra deleted == 0 to each query
1.3) Drop the deleted and deleted_at columns (see the migration sketch after
this list)

2) We use soft deleted records for internal stuff, e.g. periodic tasks
2.1) Refactor the code somehow, e.g. store all data required by the periodic
task in some special table that has (id, type, json_data) columns
2.2) On delete, add a record to this table
2.3-5) similar to 1.1, 1.2, 1.3

3) We use soft deleted records in the API
3.1) Deprecate the API call if possible
3.2) Make a proxy call to Ceilometer from the API
3.3) On .delete(), store info about the records in Ceilometer (or somewhere
else)
3.4-6) similar to 1.1, 1.2, 1.3
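
For case 1, the mechanical part of step 1.3 is small; an Alembic-style sketch
(the table name is illustrative, and Nova's actual migrations used
sqlalchemy-migrate at the time):

import sqlalchemy as sa
from alembic import op

def upgrade():
    # Step 1.3: drop the soft-deletion bookkeeping columns outright for a
    # table whose soft-deleted rows nobody reads any more.
    op.drop_column('things', 'deleted')
    op.drop_column('things', 'deleted_at')

def downgrade():
    # The columns can come back, but the soft-deleted rows are gone for good.
    op.add_column('things', sa.Column('deleted', sa.Integer, default=0))
    op.add_column('things', sa.Column('deleted_at', sa.DateTime))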

This is not a ready roadmap, just base thoughts to start a constructive
discussion on the mailing list, so %stacker%, your opinion is very important!


Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Taskflow remote worker model ftw!

2014-03-10 Thread Joshua Harlow
No dependency on Celery.

Currently it's not using oslo.messaging yet, since oslo.messaging is still
dependent on py2.6/py2.7 (it also wasn't released to PyPI when this work
started). Taskflow is trying to retain py3.3 compatibility (as all new
libraries should), so bringing oslo.messaging [1] in at the current time
would reduce what taskflow can run with to py2.6/py2.7.

Of course this is getting better as we speak (and afaik a fix is a
work-in-progress). As oslo.messaging matures I'm all for using
it (and deprecating the work here done with kombu). Until then, we can
think about making the backend more pluggable than it already is (having a
kombu-supported one and an oslo.messaging-supported one) if we need to
(hopefully the oslo.messaging py3.3 work should finish quickly?).

[1] https://wiki.openstack.org/wiki/Python3#Dependencies

-Josh

-Original Message-
From: Zane Bitter zbit...@redhat.com
Organization: Red Hat
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Monday, March 10, 2014 at 2:10 PM
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Taskflow remote worker model ftw!

Cool, nice work Josh & team!

On 10/03/14 17:23, Joshua Harlow wrote:
 This means that the engine no longer has to run tasks locally or in
 threads (or greenthreads) but can run tasks on remote machines (anything
 that can be connected to a MQ via kombu; TBD when this becomes
 oslo.messaging).

Does that reflect a dependency on Celery? Not using oslo.messaging makes
it a non-starter for OpenStack IMO, so this would be a very high
priority for adoption.

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

