Re: [openstack-dev] No PROTOCOL_SSLv3 in Python 2.7 in Sid since 3 days

2014-11-23 Thread Doug Hellmann

On Nov 22, 2014, at 5:01 PM, Jeremy Stanley  wrote:

> On 2014-11-22 19:45:09 +1300 (+1300), Robert Collins wrote:
>> Given the persistent risks of downgrade attacks, I think this does
>> actually qualify as a security issue: not that its breaking, but
>> that SSLv3 is advertised and accepted anywhere.
> 
> Which downgrade attacks? Outside of Web browser authors deciding it
> was a good idea to bypass the normal TLS negotiation mechanism, as
> long as both ends _support_ TLS then causing a downgrade within TLS
> version negotiation to SSLv3 or earlier should not be possible. If
> you're suggesting we strengthen against unknown future attacks,
> that's a fine idea and is something we call "security hardening"
> (not a vulnerability fix).
> 
>> The lines two lower:
>> 
>>try:
>>_SSL_PROTOCOLS["sslv2"] = ssl.PROTOCOL_SSLv2
>>except AttributeError:
>>pass
>> 
>> Are even more concerning!
> 
> _SSL_PROTOCOLS is only used in sslutils.validate_ssl_version() which
> is in turn used by a method in rpc.impl_kombu. Checking *all*
> current branches of *all* official OpenStack projects in our Gerrit,
> the only way it's called is when the kombu RPC backend is in use and
> kombu_ssl_version is set in a configuration file. It will *allow*
> explicit selection of insecure SSL versions (SSLv3 and SSLv2) by the
> administrator--this isn't a "magically uses insecure protocols
> without telling you" situation--merely providing the option to
> configure use of a specific insecure protocol. (You can also
> configure it not to use any encryption at all for that matter.)
> 
> I'm all for dropping this nonsense completely in master and also
> backporting a patch to make this not spontaneously vomit when run on
> platforms where SSLv3 is no longer available (perhaps something
> similar to the SSLv2 try/except example above), but we shouldn't
> backport a patch which suddenly breaks someone's cloud because they
> made a conscious decision to configure it to use SSLv3 for RPC
> communication. Visibly documenting (in the Security Guide or an
> OSSN) that you should configure your RPC communication to use TLS
> instead of SSLv2/3 is of course a great idea too.

It appears the option isn’t required, and that the default does what we would 
want as far as negotiating the best possible protocol. The only place things 
will break is on the version of Python shipped by Debian, where the constant 
used to set up the validation logic is no longer present in the SSL library. 
Let’s start by making the smallest change we can to fix that problem, and then 
move on.

As has been proposed, we can fix the Debian Python 2.7 issue by treating sslv3 
the same way sslv2 is handled in the code snippet above. That’s an easy patch 
for us to land, and I hope Thomas will update the patch he has already 
submitted based on the feedback on that review.

After the short-term fix is merged, we can investigate deprecating the 
configuration option entirely. That change will require more work because we 
will be removing a configuration option. Someone will have to do the 
archaeology needed to understand why the option was added in the first place, 
so we don’t unwittingly risk breaking existing deployments. Was it added for 
completeness (because kombu supports it)? Was it added because some combination 
of kombu and rabbit needed the client to specify the setting because 
negotiation wasn’t implemented properly? Was there some other reason?

Doug

> 
> My point is that suggesting there's a vulnerability here without
> looking at how the code is used is sort of like shouting "fire" in a
> crowded theater.
> 
>> That said, code like:
>> https://github.com/mpaladin/python-amqpclt/blob/master/amqpclt/kombu.py#L101
>> is truely egregious!
> 
> Yikes... glad I'm not on _their_ VMT instead!
> -- 
> Jeremy Stanley
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Admin and public VersionV3 endpoints

2014-11-23 Thread hashmap
Hello,

while working on a bug 'Keystone API GET 5000/v3 returns wrong endpoint
URL in response body'
https://bugs.launchpad.net/keystone/+bug/1381961

I found a design decision which I need to understand better in order to fix
this bug. I'd appreciate the community's help.

In service.py
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/service.py#n114
we create 2 identical apps which are both deployed on admin and public
ports (interfaces).

# Add in the v3 version api
sub_routers.append(routers.VersionV3('admin', _routers))
sub_routers.append(routers.VersionV3('public', _routers))

From my experience, the first one always handles all
requests to the VersionV3 app. This is why the admin endpoint URL is
returned regardless of the request URL.

We can see this only if 'admin_endpoint' is set in keystone.conf:
base_url from wsgi.py returns context['host_url'] by default, and returns
the URL from keystone.conf only if public_endpoint or admin_endpoint is set
(in our case the admin app always handles requests, so public_endpoint is
irrelevant).

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/wsgi.py#n355
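
For readers following along, the selection logic described above can be sketched roughly like this. This is a simplification with illustrative names, not the actual keystone code (which lives in keystone.common.wsgi):

```python
# Hypothetical sketch of the endpoint-selection behaviour described
# above: if an endpoint URL is configured for the interface handling
# the request, it wins; otherwise the URL is rebuilt from the
# request's host_url. Because the 'admin' VersionV3 app handles
# every request, only admin_endpoint is ever consulted in practice.

def base_url(context, interface, conf):
    configured = conf.get("%s_endpoint" % interface)
    if configured:
        # e.g. admin_endpoint = "http://keystone.example.org:35357/"
        return configured.rstrip("/")
    # Fall back to the URL the client actually used.
    return context["host_url"].rstrip("/")
```

This makes the reported symptom easy to see: a request arriving on the public port is still served by the app registered with interface 'admin', so the configured admin_endpoint (if any) is returned in the version document.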

So I have 2 questions:

* Why do we need to have this setup? I saw a remark in the docs about
'historical reasons' which I don't understand. In any case, the second
VersionV3 app never gets a chance to handle any requests (perhaps I missed
something?).

* Why do we need admin_endpoint, public_endpoint settings in
keystone.conf? There is a comment "You should only need to set this
value if the base URL contains a path (e.g. /prefix/v2.0) or the
endpoint should be found on a different server." The first point might
be addressed by fully reconstructing the request URL using
context['environment']. Could somebody explain the second one?

Thanks!
Alexey.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] No PROTOCOL_SSLv3 in Python 2.7 in Sid since 3 days

2014-11-23 Thread Robert Collins
On 23 November 2014 at 11:01, Jeremy Stanley  wrote:
> On 2014-11-22 19:45:09 +1300 (+1300), Robert Collins wrote:
>> Given the persistent risks of downgrade attacks, I think this does
>> actually qualify as a security issue: not that its breaking, but
>> that SSLv3 is advertised and accepted anywhere.
>
> Which downgrade attacks? Outside of Web browser authors deciding it
> was a good idea to bypass the normal TLS negotiation mechanism, as
> long as both ends _support_ TLS then causing a downgrade within TLS
> version negotiation to SSLv3 or earlier should not be possible. If

That's my understanding too; while this code is targeted at kombu use,
I remain paranoid.

> you're suggesting we strengthen against unknown future attacks,
> that's a fine idea and is something we call "security hardening"
> (not a vulnerability fix).

Fair enough.

> My point is that suggesting there's a vulnerability here without
> looking at how the code is used is sort of like shouting "fire" in a
> crowded theater.

Point taken. Sorry :)

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread victor stinner
Hi,

I'm happy to announce that I just finished the last piece of the puzzle to 
add support for trollius coroutines in Oslo Messaging! See my two changes:

* Add a new aiogreen executor:
  https://review.openstack.org/#/c/136653/
* Add an optional executor callback to dispatcher:
  https://review.openstack.org/#/c/136652/

Related projects:

* asyncio is an event loop which is now part of Python 3.4:
  http://docs.python.org/dev/library/asyncio.html
* trollius is the port of the new asyncio module to Python 2:
  http://trollius.readthedocs.org/
* aiogreen implements the asyncio API on top of eventlet:
  http://aiogreen.readthedocs.org/

For the long story and the full history of my work on asyncio in OpenStack 
over the past year, read:
http://aiogreen.readthedocs.org/openstack.html

The last piece of the puzzle is the new aiogreen project that I released a few 
days ago. aiogreen is well integrated and fully compatible with eventlet; it 
can be used in OpenStack without having to modify code. It is almost fully 
based on trollius, with just a small glue layer to reuse the eventlet event 
loop (to get read/write notifications for file descriptors).

In the past, I tried to use the greenio project, which also implements the 
asyncio API, but it didn't fit well with eventlet. That's why I wrote a new 
project.

Supporting trollius coroutines in Oslo Messaging is just the first part of the 
global project. Here is my full plan to replace eventlet with asyncio.


First part (in progress): add support for trollius coroutines
-

Prepare OpenStack (Oslo Messaging) to support trollius coroutines using
``yield``: explicit asynchronous programming. Eventlet is still supported,
used by default, and applications and libraries don't need to be modified at
this point.
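
As a self-contained illustration of the "explicit" model — this is not the trollius API (trollius coroutines use ``yield From(...)`` with a real event loop), just a toy scheduler showing that every possible context switch is visible in the code:

```python
import collections

def worker(name, steps, log):
    # An explicit coroutine: control can only switch where ``yield``
    # appears, unlike eventlet where any call may switch greenthreads.
    for i in range(steps):
        log.append((name, i))
        yield  # explicit suspension point: control returns to the scheduler

def run(coroutines):
    # Round-robin scheduler: advance each coroutine to its next
    # ``yield``, dropping it once it finishes.
    queue = collections.deque(coroutines)
    while queue:
        coro = queue.popleft()
        try:
            next(coro)
            queue.append(coro)
        except StopIteration:
            pass

log = []
run([worker("a", 2, log), worker("b", 2, log)])
# The two workers interleave only at their explicit yield points:
# [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

The appeal for debugging is exactly this property: reading the code tells you where concurrency can happen, so critical sections are easy to spot.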

Already done:

* Write the trollius project: port asyncio to Python 2
* Stabilize trollius API
* Add trollius dependency to OpenStack
* Write the aiogreen project to provide the asyncio API on top of eventlet

To do:

* Stabilize aiogreen API
* Add aiogreen dependency to OpenStack
* Write an aiogreen executor for Oslo Messaging: rewrite greenio executor
  to replace greenio with aiogreen


Second part (to do): rewrite code as trollius coroutines
--------------------------------------------------------

Switch from implicit asynchronous programming (eventlet using greenthreads) to
explicit asynchronous programming (trollius coroutines using ``yield``). This
requires modifying the OpenStack common libraries and applications.
Modifications can be done step by step; the switch will take more than six
months.

The first application candidate is Ceilometer. The Ceilometer project is young,
its developers are aware of eventlet's issues and like Python 3, and Ceilometer
doesn't rely heavily on asynchronous programming: most time is spent waiting on
the database anyway.

The goal is to port Ceilometer to explicit asynchronous programming during the
cycle of OpenStack Kilo.

Some applications may continue to use implicit asynchronous programming. For
example, nova is probably the most complex case because it is an old project
with a lot of legacy code, it has many drivers, and the code base is large.

To do:

* Ceilometer: add trollius dependency and set the trollius event loop policy to
  aiogreen
* Ceilometer: change Oslo Messaging executor from "eventlet" to "aiogreen"
* Redesign the service class of Oslo Incubator to support aiogreen and/or
  trollius.  Currently, the class is designed for eventlet. The service class
  is instantiated before forking, which requires hacks on eventlet to update
  file descriptors.
* In Ceilometer and its OpenStack dependencies: add new functions which
  are written with explicit asynchronous programming in mind (e.g. trollius
  coroutines written with ``yield``).
* Rewrite Ceilometer endpoints (RPC methods) as trollius coroutines.

Questions:

* What about WSGI? aiohttp is not compatible with trollius yet.
* The quantity of code which needs to be ported to asynchronous programming is
  unknown right now.
* We should be prepared to see deadlocks. OpenStack was designed for eventlet,
  which implicitly switches greenthreads on blocking operations. Critical
  sections may not be protected with locks, or not with the right kind of lock.
* For performance, blocking operations can be executed in threads. OpenStack
  code is probably not thread-safe, which means new kinds of race conditions.
  But code run in a thread is explicitly scheduled there (with
  ``loop.run_in_executor()``), so regressions can be identified easily.
* This part will take a lot of time. We may need to split it into subparts
  to have milestones, which is more attractive for developers.
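
The ``loop.run_in_executor()`` point above can be sketched with modern asyncio syntax (``async``/``await``, the successor of trollius-style ``yield``; the function names here are illustrative):

```python
import asyncio

def blocking_query(x):
    # Stand-in for a blocking operation (e.g. a database call).
    # Any thread-safety issues are localized to this function,
    # because it is the only code explicitly handed to a thread.
    return x * 2

async def main():
    loop = asyncio.get_running_loop()
    # Run the blocking call in the default thread pool without
    # blocking the event loop; the coroutine suspends until done.
    return await loop.run_in_executor(None, blocking_query, 21)

result = asyncio.run(main())  # → 42
```

Because the hand-off to a thread is explicit, any new race condition can be traced back to the small set of functions scheduled this way.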


Last part (to do): drop eventlet
--------------------------------

Replace the aiogreen event loop with the trollius event loop, drop aiogreen,
and drop eventlet at the end.

This change will be done on applications one by one. This is no need t

Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Robert Collins
On 24 November 2014 at 11:01, victor stinner
 wrote:
> Hi,
>
> I'm happy to announce you that I just finished the last piece of the puzzle 
> to add support for trollius coroutines in Oslo Messaging! See my two changes:
>
> * Add a new aiogreen executor:
>   https://review.openstack.org/#/c/136653/
> * Add an optional executor callback to dispatcher:
>   https://review.openstack.org/#/c/136652/
>
> Related projects:
>
> * asyncio is an event loop which is now part of Python 3.4:
>   http://docs.python.org/dev/library/asyncio.html
> * trollius is the port of the new asyncio module to Python 2:
>   http://trollius.readthedocs.org/
> * aiogreen implements the asyncio API on top of eventlet:
>   http://aiogreen.readthedocs.org/
>
> For the long story and the full history of my work on asyncio in OpenStack 
> since one year, read:
> http://aiogreen.readthedocs.org/openstack.html
>
> The last piece of the puzzle is the new aiogreen project that I released a 
> few days ago. aiogreen is well integrated and fully compatible with eventlet, 
> it can be used in OpenStack without having to modify code. It is almost fully 
> based on trollius, it just has a small glue to reuse eventlet event loop (get 
> read/write notifications of file descriptors).
>
> In the past, I tried to use the greenio project, which also implements the 
> asyncio API, but it didn't fit well with eventlet. That's why I wrote a new 
> project.
>
> Supporting trollius coroutines in Oslo Messaging is just the first part of 
> the global project. Here is my full plan to replace eventlet with asyncio.

...

So - the technical bits of the plan sound fine.

On WSGI - if we're in an asyncio world, I don't think WSGI has any
relevance today - it has no async programming model. While it has
incremental APIs and supports generators, that's not close enough to
the same thing: so we're going to have to port our glue code to
whatever container we end up with. As you know I'm pushing on a revamp
of WSGI right now, and I'd be delighted to help put together a
WSGI-for-asyncio PEP, but I think it's best thought of as a separate
thing to WSGI per se. It might be a profile of WSGI2 though, since
there is quite some interest in truly async models.
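
A toy example of the WSGI limitation described above (plain PEP 3333 interface, no real server): a generator body lets the response be produced incrementally, but the server still blocks synchronously on each chunk, so there is nowhere to suspend while waiting for I/O.

```python
def app(environ, start_response):
    # A minimal WSGI application whose body is a generator.
    start_response("200 OK", [("Content-Type", "text/plain")])

    def body():
        yield b"hello "
        # A real async model could suspend here while awaiting I/O;
        # in WSGI the server thread simply blocks until we yield.
        yield b"world"

    return body()

# Drive the app the way a synchronous WSGI server would: the loop
# over the iterable is a plain, blocking pull of each chunk.
status_headers = []
chunks = b"".join(app({}, lambda s, h: status_headers.append((s, h))))
# chunks == b"hello world"
```

The generator gives the server incremental output, but between ``yield`` points the calling thread is stuck — which is exactly why the interface can't express an async programming model.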

However I've a bigger picture concern. OpenStack only relatively
recently switched away from an explicit async model (Twisted) to
eventlet.

I'm worried that this is switching back to something we switched away
from (in that Twisted and asyncio have much more in common than either
Twisted and eventlet w/magic, or asyncio and eventlet w/magic).

If Twisted was unacceptable to the community, what makes asyncio
acceptable? [Note: I don't really understand why Twisted was moved
away from, since our problem domain is such a great fit for reactor-style
programming - lots of networking, lots of calling processes
that may take some time to complete their work, and occasional DB
calls (which are equally problematic in eventlet and in
asyncio/Twisted).] So I'm not arguing against the move; I'm just
concerned that doing it without addressing whatever the underlying
problem was will fail - and I'm also concerned that it will surprise
folk, since there doesn't seem to be a cross-project blueprint
talking about this fairly fundamental shift in programming model.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Monty Taylor
On 11/23/2014 06:13 PM, Robert Collins wrote:
> On 24 November 2014 at 11:01, victor stinner
>  wrote:
>> Hi,
>>
>> I'm happy to announce you that I just finished the last piece of the puzzle 
>> to add support for trollius coroutines in Oslo Messaging! See my two changes:
>>
>> * Add a new aiogreen executor:
>>   https://review.openstack.org/#/c/136653/
>> * Add an optional executor callback to dispatcher:
>>   https://review.openstack.org/#/c/136652/
>>
>> Related projects:
>>
>> * asyncio is an event loop which is now part of Python 3.4:
>>   http://docs.python.org/dev/library/asyncio.html
>> * trollius is the port of the new asyncio module to Python 2:
>>   http://trollius.readthedocs.org/
>> * aiogreen implements the asyncio API on top of eventlet:
>>   http://aiogreen.readthedocs.org/
>>
>> For the long story and the full history of my work on asyncio in OpenStack 
>> since one year, read:
>> http://aiogreen.readthedocs.org/openstack.html
>>
>> The last piece of the puzzle is the new aiogreen project that I released a 
>> few days ago. aiogreen is well integrated and fully compatible with 
>> eventlet, it can be used in OpenStack without having to modify code. It is 
>> almost fully based on trollius, it just has a small glue to reuse eventlet 
>> event loop (get read/write notifications of file descriptors).
>>
>> In the past, I tried to use the greenio project, which also implements the 
>> asyncio API, but it didn't fit well with eventlet. That's why I wrote a new 
>> project.
>>
>> Supporting trollius coroutines in Oslo Messaging is just the first part of 
>> the global project. Here is my full plan to replace eventlet with asyncio.
> 
> ...
> 
> So - the technical bits of the plan sound fine.
> 
> On WSGI - if we're in an asyncio world, I don't think WSGI has any
> relevance today - it has no async programming model. While is has
> incremental apis and supports generators, thats not close enough to
> the same thing: so we're going to have to port our glue code to
> whatever container we end up with. As you know I'm pushing on a revamp
> of WSGI right now, and I'd be delighted to help put together a
> WSGI-for-asyncio PEP, but I think its best thought of as a separate
> thing to WSGI per se. It might be a profile of WSGI2 though, since
> there is quite some interest in truely async models.
> 
> However I've a bigger picture concern. OpenStack only relatively
> recently switched away from an explicit async model (Twisted) to
> eventlet.
> 
> I'm worried that this is switching back to something we switched away
> from (in that Twisted and asyncio have much more in common than either
> Twisted and eventlet w/magic, or asyncio and eventlet w/magic).
> 
> If Twisted was unacceptable to the community, what makes asyncio
> acceptable? [Note, I don't really understand why Twisted was moved
> away from, since our problem domain is such a great fit for reactor
> style programming - lots of networking, lots of calling of processes
> that may take some time to complete their work, and occasional DB
> calls [which are equally problematic in eventlet and in
> asyncio/Twisted]. So I'm not arguing against the move, I'm just
> concerned that doing it without addressing whatever the underlying
> thing was, will fail - and I'm also concerned that it will surprise
> folk - since there doesn't seem to be a cross project blueprint
> talking about this fairly fundamental shift in programming model.

I'm not going to comment on the pros and cons - I think we all know I'm
a fan of threads. But I have been around a while, so - for those who
haven't been:

When we started the project, nova used twisted and swift used eventlet.
As we've consistently endeavored not to have multiple frameworks, we
entered into the project's first big flame war:

"twisted vs. eventlet"

It was _real_ fun, I promise. But at the heart was the question of whether
we were going to rewrite swift in twisted or rewrite nova in eventlet.

The main 'winning' answer came down to twisted being very opaque for new
devs - while it's very powerful for experienced devs, we decided to opt
for eventlet, which does not scare new devs with a completely different
programming model (reactors and deferreds and whatnot).

Now, I wouldn't say we _just_ ported from Twisted, I think we finished
that work about 4 years ago. :)

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Donald Stufft

> On Nov 23, 2014, at 6:30 PM, Monty Taylor  wrote:
> 
> On 11/23/2014 06:13 PM, Robert Collins wrote:
>> On 24 November 2014 at 11:01, victor stinner
>>  wrote:
>>> Hi,
>>> 
>>> I'm happy to announce you that I just finished the last piece of the puzzle 
>>> to add support for trollius coroutines in Oslo Messaging! See my two 
>>> changes:
>>> 
>>> * Add a new aiogreen executor:
>>>  https://review.openstack.org/#/c/136653/
>>> * Add an optional executor callback to dispatcher:
>>>  https://review.openstack.org/#/c/136652/
>>> 
>>> Related projects:
>>> 
>>> * asyncio is an event loop which is now part of Python 3.4:
>>>  http://docs.python.org/dev/library/asyncio.html
>>> * trollius is the port of the new asyncio module to Python 2:
>>>  http://trollius.readthedocs.org/
>>> * aiogreen implements the asyncio API on top of eventlet:
>>>  http://aiogreen.readthedocs.org/
>>> 
>>> For the long story and the full history of my work on asyncio in OpenStack 
>>> since one year, read:
>>> http://aiogreen.readthedocs.org/openstack.html
>>> 
>>> The last piece of the puzzle is the new aiogreen project that I released a 
>>> few days ago. aiogreen is well integrated and fully compatible with 
>>> eventlet, it can be used in OpenStack without having to modify code. It is 
>>> almost fully based on trollius, it just has a small glue to reuse eventlet 
>>> event loop (get read/write notifications of file descriptors).
>>> 
>>> In the past, I tried to use the greenio project, which also implements the 
>>> asyncio API, but it didn't fit well with eventlet. That's why I wrote a new 
>>> project.
>>> 
>>> Supporting trollius coroutines in Oslo Messaging is just the first part of 
>>> the global project. Here is my full plan to replace eventlet with asyncio.
>> 
>> ...
>> 
>> So - the technical bits of the plan sound fine.
>> 
>> On WSGI - if we're in an asyncio world, I don't think WSGI has any
>> relevance today - it has no async programming model. While is has
>> incremental apis and supports generators, thats not close enough to
>> the same thing: so we're going to have to port our glue code to
>> whatever container we end up with. As you know I'm pushing on a revamp
>> of WSGI right now, and I'd be delighted to help put together a
>> WSGI-for-asyncio PEP, but I think its best thought of as a separate
>> thing to WSGI per se. It might be a profile of WSGI2 though, since
>> there is quite some interest in truely async models.
>> 
>> However I've a bigger picture concern. OpenStack only relatively
>> recently switched away from an explicit async model (Twisted) to
>> eventlet.
>> 
>> I'm worried that this is switching back to something we switched away
>> from (in that Twisted and asyncio have much more in common than either
>> Twisted and eventlet w/magic, or asyncio and eventlet w/magic).
>> 
>> If Twisted was unacceptable to the community, what makes asyncio
>> acceptable? [Note, I don't really understand why Twisted was moved
>> away from, since our problem domain is such a great fit for reactor
>> style programming - lots of networking, lots of calling of processes
>> that may take some time to complete their work, and occasional DB
>> calls [which are equally problematic in eventlet and in
>> asyncio/Twisted]. So I'm not arguing against the move, I'm just
>> concerned that doing it without addressing whatever the underlying
>> thing was, will fail - and I'm also concerned that it will surprise
>> folk - since there doesn't seem to be a cross project blueprint
>> talking about this fairly fundamental shift in programming model.
> 
> I'm not going to comment on the pros and cons - I think we all know I'm
> a fan of threads. But I have been around a while, so - for those who
> haven't been:
> 
> When we started the project, nova used twisted and swift used eventlet.
> As we've consistently endeavored to not have multiple frameworks, we
> entered in to the project's first big flame war:
> 
> "twisted vs. eventlet"
> 
> It was _real_ fun, I promise. But a the heart was a question of whether
> we were going to rewrite swift in twisted or rewrite nova in eventlet.
> 
> The main 'winning' answer came down to twisted being very opaque for new
> devs - while it's very powerful for experienced devs, we decided to opt
> for eventlet which does not scare new devs with a completely different
> programming model. (reactors and deferreds and whatnot)
> 
> Now, I wouldn't say we _just_ ported from Twisted, I think we finished
> that work about 4 years ago. :)
> 

For whatever it’s worth, I find explicit async I/O to be _way_ easier to
understand, for the same reason I find threaded code to be a rat’s nest.

The co-routine style of asyncio (or Twisted’s inlineCallbacks) solves
almost all of the problems that I think most people have with explicit
asyncio (namely the callback hell) while still getting the benefits.

Glyph wrote a good post that mirrors my opinions on implicit vs explicit
here: https://glyph.twistedmatrix.com/2014/02/unyielding.html

Re: [openstack-dev] [Fuel] Separate code freeze for repos

2014-11-23 Thread Dmitry Borodaenko
1. We discussed splitting fuel-web; I think we should do that before
relaxing the code freeze rules for it.

2. If there are high or critical priority bugs in a component during soft
code freeze, all developers of that component should be writing, reviewing,
or testing fixes for these bugs.

3. If we do separate code freeze for current components, we should always
start with fuel-main, so that we can switch repos from master to stable one
at a time.
On Nov 17, 2014 4:08 AM, "Mike Scherbakov"  wrote:

> I believe that we need to do this, and agree with Vitaly.
>
> Basically, when we are getting low amount of review requests, it's easy
> enough to do backports to stable branch. So criteria should be based on
> this, and I believe it can be even more soft, than what Vitaly suggests.
>
> I suggest the following:
> ___
> If no more than 3 new High / Critical priority bugs appeared in the passed
> day, and no more than 10 High/Critical over the past 3 days appeared - then
> stable branch can be created. ___
>
> HCF criteria remain the same. We will just have stable branch earlier. It
> might be a bit of headache for our DevOps team: it means that
>
>- 6.1 ISO should appear immediately after first stable branch created
>(we need ISO and all set of tests working on master)
>- 6.0 ISO has to be build on master branches from some repos, but
>stable/6.0 from other. Likely it means whether switching to stable/6.0 in
>fuel-main and hacking config.mk, or something else.
>
> DevOps team, what do you think?
>
>
> On Fri, Nov 14, 2014 at 5:24 PM, Vitaly Kramskikh  > wrote:
>
>> There is a proposal to consider a repo as stable if there are no
>> high/critical bugs and there were no such new bugs with this priority for
>> the last 3 days. I'm ok with it.
>>
>> 2014-11-14 17:16 GMT+03:00 Igor Kalnitsky :
>>
>>> Guys,
>>>
>>> The idea of separate unfreezing is cool itself, but we have to define
>>> some rules how to define that fuel-web is stable. I mean, in fuel-web
>>> we have different projects, so when Fuel UI is stable, the
>>> fuel_upgrade or Nailgun may be not.
>>>
>>> - Igor
>>>
>>> On Fri, Nov 14, 2014 at 3:52 PM, Vitaly Kramskikh
>>>  wrote:
>>> > Evgeniy,
>>> >
>>> > That means that the stable branch can be created for some repos
>>> earlier. For
>>> > example, fuel-web repo seems not to have critical issues for now and
>>> I'd
>>> > like master branch of that repo to be opened for merging various stuff
>>> which
>>> > shouldn't go to 6.0 and do not wait until all other repos stabilize.
>>> >
>>> > 2014-11-14 16:42 GMT+03:00 Evgeniy L :
>>> >>
>>> >> Hi,
>>> >>
>>> >> >> There was an idea to make a separate code freeze for repos
>>> >>
>>> >> Could you please clarify what do you mean?
>>> >>
>>> >> I think we should have a way to merge patches for the next
>>> >> release event if it's code freeze for the current.
>>> >>
>>> >> Thanks,
>>> >>
>>> >> On Tue, Nov 11, 2014 at 2:16 PM, Vitaly Kramskikh
>>> >>  wrote:
>>> >>>
>>> >>> Folks,
>>> >>>
>>> >>> There was an idea to make a separate code freeze for repos, but we
>>> >>> decided not to do it. Do we plan to try it this time? It is really
>>> painful
>>> >>> to maintain multi-level tree of dependent review requests and wait
>>> for a few
>>> >>> weeks until we can merge new stuff in master.
>>> >>>
>>> >>> --
>>> >>> Vitaly Kramskikh,
>>> >>> Software Engineer,
>>> >>> Mirantis, Inc.
>>> >>>
>>> >>> ___
>>> >>> OpenStack-dev mailing list
>>> >>> OpenStack-dev@lists.openstack.org
>>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>>
>>> >>
>>> >>
>>> >> ___
>>> >> OpenStack-dev mailing list
>>> >> OpenStack-dev@lists.openstack.org
>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > Vitaly Kramskikh,
>>> > Software Engineer,
>>> > Mirantis, Inc.
>>> >
>>> > ___
>>> > OpenStack-dev mailing list
>>> > OpenStack-dev@lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Vitaly Kramskikh,
>> Software Engineer,
>> Mirantis, Inc.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Mike Scherbakov
> #mihgen
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Robert Collins
On 24 November 2014 at 12:35, Donald Stufft  wrote:
>
> For whatever it’s worth, I find explicit async io to be _way_ easier to
> understand for the same reason I find threaded code to be a rats nest.
>
> The co-routine style of asyncio (or Twisted’s inlineCallbacks) solves
> almost all of the problems that I think most people have with explicit
> asyncio (namely the callback hell) while still getting the benefits.

Sure. Note that OpenStack *was* using inlineCallbacks.

> Glyph wrote a good post that mirrors my opinions on implicit vs explicit
> here: https://glyph.twistedmatrix.com/2014/02/unyielding.html.

That is, we chose
"
4. and finally, implicit coroutines: Java’s “green threads”, Twisted’s
Corotwine, eventlet, gevent, where any function may switch the entire
stack of the current thread of control by calling a function which
suspends it.
"

- the option that Glyph (and I too) would say to never ever choose.

My concern isn't that asyncio is bad - it's not. It's that we spent an
awful lot of time and effort rewriting nova etc. to be 'option 4', and
we've no reason to believe that whatever it was that made that not
work /for us/ has been fixed.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Robert Collins
On 24 November 2014 at 12:30, Monty Taylor  wrote:

> I'm not going to comment on the pros and cons - I think we all know I'm
> a fan of threads. But I have been around a while, so - for those who
> haven't been:

FWIW we have *threads* today as a programming model. The
implementation is green, but the concepts we work with in the code are
threads, threadpools and so forth.

eventlet is an optimisation around some [minor] inefficiencies in
Python, but it doesn't change the programming model - see dstufft's
excellent link for details on that.

I too will hold off from commenting on the pros and cons today; this
isn't about good or bad, it's about making sure this revisiting of a
huge discussion and effort gets the right visibility.



> The main 'winning' answer came down to twisted being very opaque for new
> devs - while it's very powerful for experienced devs, we decided to opt
> for eventlet which does not scare new devs with a completely different
> programming model. (reactors and deferreds and whatnot)
>
> Now, I wouldn't say we _just_ ported from Twisted, I think we finished
> that work about 4 years ago. :)

Nova managed it in Jan 2011, so 3.5 mumblemumble. Near enough to 'just' :)

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Mike Bayer

> On Nov 23, 2014, at 6:13 PM, Robert Collins  wrote:
> 
> 
> So - the technical bits of the plan sound fine.

> 
> On WSGI - if we're in an asyncio world,

*looks around*, we are?   when did that happen?   Assuming we’re talking 
explicit async: rewriting all our code as verbose, “inside out” code, vast 
library incompatibility, and…some notion of “correctness” that somehow is 
supposed to be appropriate for a high level scripting language and can’t be 
achieved through simple, automated means such as gevent.

> I don't think WSGI has any
> relevance today -

if you want async + wsgi, use gevent.wsgi.   It is of course not explicit 
async but if the whole world decides that we all have to explicitly turn all of 
our code inside out to appease the concept of “oh no, IO IS ABOUT TO HAPPEN! 
ARE WE READY! ”,  I am definitely quitting programming to become a cheese 
maker.   If you’re writing some high performance TCP server thing, fine 
(…but... why are you writing a high performance server in Python and not 
something more appropriate like Go?).  If we’re dealing with message queues as 
I know this thread is about, fine.

But if you’re writing “receive a request, load some data, change some of it 
around, store it again, and return a result”, I don’t see why this has to be 
intentionally complicated.   Use implicit async that can interact with the 
explicit async messaging stuff appropriately.   That’s purportedly one of the 
goals of asyncio (which Nick Coghlan had to lobby pretty hard for; source: 
http://python-notes.curiousefficiency.org/en/latest/pep_ideas/async_programming.html#gevent-and-pep-3156
  ).
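One such interaction point already exists in the stdlib: an asyncio handler can push a blocking call onto a thread pool and await the result. A minimal sketch, with `load_blocking` as a hypothetical stand-in for blocking ORM/database code:

```python
import asyncio

def load_blocking(key):
    # hypothetical stand-in for a blocking call (e.g. an ORM query)
    return {"key": key, "value": 42}

async def handler(key):
    loop = asyncio.get_running_loop()
    # Hand the blocking work to a thread pool so the event loop keeps
    # running: one stdlib-supported way the implicit (blocking-style)
    # and explicit (asyncio) worlds can interoperate.
    row = await loop.run_in_executor(None, load_blocking, key)
    return row["value"]

print(asyncio.run(handler("x")))  # 42
```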

> it has no async programming model.

neither do a *lot* of things, including all traditional ORMs.I’m fine with 
Ceilometer dropping SQLAlchemy support as they prefer MongoDB and their 
relational database code is fairly wanting.   Per 
http://aiogreen.readthedocs.org/openstack.html, I’m not sure how else they will 
drop eventlet support throughout the entire app.   


> While it has
> incremental APIs and supports generators, that's not close enough to
> the same thing: so we're going to have to port our glue code to
> whatever container we end up with. As you know I'm pushing on a revamp
> of WSGI right now, and I'd be delighted to help put together a
> WSGI-for-asyncio PEP, but I think it's best thought of as a separate
> thing to WSGI per se.

given the push for explicit async, seems like lots of effort will need to be 
spent on this. 

> It might be a profile of WSGI2 though, since
> there is quite some interest in truly async models.
> 
> However I've a bigger picture concern. OpenStack only relatively
> recently switched away from an explicit async model (Twisted) to
> eventlet.

hooray.   efficient database access for explicit async code would be impossible 
otherwise, as there are no explicit async APIs to MySQL, and only one for 
PostgreSQL, which is extremely difficult to support.

> 
> I'm worried that this is switching back to something we switched away
> from (in that Twisted and asyncio have much more in common than either
> Twisted and eventlet w/magic, or asyncio and eventlet w/magic).

In the C programming world, when you want to do something as simple as create a 
list of records, it’s not so simple: you have to explicitly declare memory 
using malloc(), and organize your program skillfully and carefully such that 
this memory is ultimately freed using free().   It’s tedious and error prone.   
So in the scripting language world, these tedious, low level and entirely 
predictable steps are automated away for us; memory is declared automatically, 
and freed automatically.  Even reference cycles are cleaned out for us without 
us even being aware.  This is why we use “scripting languages” - they are 
intentionally automated to speed the pace of development and produce code that 
is far less verbose than low-level C code and much less prone to low-level 
errors, albeit considerably less efficient.   It’s the payoff we make; 
predictable bookkeeping of the system’s resources are automated away.
There’s a price: the Python interpreter uses a ton of memory and tends not to 
free memory once large chunks of it have been used by the application.  Yet 
this tradeoff, Python’s clearly inefficient use of memory in exchange for 
automating its management away for us, is one which nobody seems to mind at 
all.

But when it comes to IO, the implicit allocation of IO and deferment of 
execution done by gevent has no side effect anywhere near as harmful as the 
Python interpreter’s huge memory consumption.  Yet we are so afraid of it, so 
frightened that our code…written in a *high level scripting language*, might 
not be “correct”.  We might not know that IO is about to happen!   How is this 
different from the much more tangible and day-

Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Mike Bayer

> On Nov 23, 2014, at 6:35 PM, Donald Stufft  wrote:
> 
> 
> For whatever it’s worth, I find explicit async io to be _way_ easier to
> understand for the same reason I find threaded code to be a rats nest.

web applications aren’t explicitly “threaded”.   You get a request, load some 
data, manipulate it, and return a response.   There are no threads to reason 
about, nothing is explicitly shared in any way.

> 
> The co-routine style of asyncio (or Twisted’s inlineCallbacks) solves
> almost all of the problems that I think most people have with explicit
> asyncio (namely the callback hell) while still getting the benefits.

coroutines are still “inside out” and still have all the issues discussed in 
http://python-notes.curiousefficiency.org/en/latest/pep_ideas/async_programming.html
 which I also refer to in 
http://stackoverflow.com/questions/16491564/how-to-make-sqlalchemy-in-tornado-to-be-async/16503103#16503103.

> 
> Glyph wrote a good post that mirrors my opinions on implicit vs explicit
> here: https://glyph.twistedmatrix.com/2014/02/unyielding.html.

this is the post that most makes me think about the garbage collector analogy, 
re: “gevent works perfectly fine, but sorry, it just isn’t “correct”.  It 
should be feared! ”.   Unfortunately Glyph has orders of magnitude more 
intellectual capabilities than I do, so I am ultimately not an effective 
advocate for my position; hence I have my fallback career as a cheese maker 
lined up for when the async agenda finally takes over all computer programming.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Donald Stufft

> On Nov 23, 2014, at 7:21 PM, Mike Bayer  wrote:
> 
> Given that, I’ve yet to understand why a system that implicitly defers CPU 
> use when a routine encounters IO, deferring to other routines, is relegated 
> to the realm of “magic”.   Is Python reference counting and garbage 
> collection “magic”?How can I be sure that my program is only declaring 
> memory, only as much as I expect, and then freeing it only when I absolutely 
> say so, the way async advocates seem to be about IO?   Why would a high level 
> scripting language enforce this level of low-level bookkeeping of IO calls as 
> explicit, when it is 100% predictable and automatable ?

The difference is that in the many years of Python programming I’ve had to 
think about garbage collection all of once. I’ve yet to write a non-trivial 
implicit IO application where the implicit context switch didn’t break 
something and I had to think about adding explicit locks around things.

Really that’s what it comes down to. Either you need to enable explicit context 
switches (via callbacks or yielding, or whatever) or you need to add explicit 
locks. Neither solution allows you to pretend that context switching isn’t 
going to happen nor prevents you from having to deal with it. The reason I 
prefer explicit async is because the failure mode is better (if I forget to 
yield I don’t get the actual value so my thing blows up in development) and it 
ironically works more like blocking programming because I won’t get an implicit 
context switch in the middle of a function. Compare that to the implicit async 
where the failure mode is that at runtime something weird happens.
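That failure mode is easy to demonstrate with a toy coroutine (hypothetical names, modern async/await spelling):

```python
import asyncio

async def load():
    return 42

async def correct():
    return await load()      # switch point marked; yields the value

async def forgot():
    return load()            # missing "await": returns a coroutine
                             # object, not 42; the bug is loud and
                             # shows up immediately in development
```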

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Donald Stufft

> On Nov 23, 2014, at 7:29 PM, Mike Bayer  wrote:
> 
>> 
>> Glyph wrote a good post that mirrors my opinions on implicit vs explicit
>> here: https://glyph.twistedmatrix.com/2014/02/unyielding.html.
> 
> this is the post that most makes me think about the garbage collector 
> analogy, re: “gevent works perfectly fine, but sorry, it just isn’t 
> “correct”.  It should be feared! ”.   Unfortunately Glyph has orders of 
> magnitude more intellectual capabilities than I do, so I am ultimately not an 
> effective advocate for my position; hence I have my fallback career as a 
> cheese maker lined up for when the async agenda finally takes over all 
> computer programming.

Like I said, I’ve had to think about garbage collecting all of once in my 
entire Python career. Implicit might be theoretically nicer but until it can 
actually live up to the “gets out of my way-ness” of the abstractions you’re 
citing I’d personally much rather pass on it.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Integrating Barbican with KMIP

2014-11-23 Thread marcelod
Hello

I am new to OpenStack and I would like to use a KMIP server for key
storage and key retrieval. I have looked at various documentation but I
have had difficulty changing the default behavior of Barbican.

Right now I am modifying the barbican-api.conf file.
I am replacing enabled_secretstore_plugins = store_crypto with
enabled_secretstore_plugins = kmip_secret_store.
I also modified secret_store.py to change DEFAULT_PLUGINS =
['store_crypto'] to DEFAULT_PLUGINS = ['kmip_secret_store'].

However it looks like Barbican is not calling the KMIP server and is
doing the work locally.
If I want to use the kmip_secret_store implementation, what could be
missing in my approach?

Thanks in advance for any help

Kind regards
Marcelo


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Mike Bayer

> On Nov 23, 2014, at 7:30 PM, Donald Stufft  wrote:
> 
> 
>> On Nov 23, 2014, at 7:21 PM, Mike Bayer  wrote:
>> 
>> Given that, I’ve yet to understand why a system that implicitly defers CPU 
>> use when a routine encounters IO, deferring to other routines, is relegated 
>> to the realm of “magic”.   Is Python reference counting and garbage 
>> collection “magic”?How can I be sure that my program is only declaring 
>> memory, only as much as I expect, and then freeing it only when I absolutely 
>> say so, the way async advocates seem to be about IO?   Why would a high 
>> level scripting language enforce this level of low-level bookkeeping of IO 
>> calls as explicit, when it is 100% predictable and automatable ?
> 
> The difference is that in the many years of Python programming I’ve had to 
> think about garbage collection all of once. I’ve yet to write a non trivial 
> implicit IO application where the implicit context switch didn’t break 
> something and I had to think about adding explicit locks around things.

that’s your personal experience, how is that an argument?  I deal with the 
Python garbage collector, memory management, etc. *all the time*.   I have a 
whole test suite dedicated to ensuring that SQLAlchemy constructs tear 
themselves down appropriately in the face of gc and such: 
https://github.com/zzzeek/sqlalchemy/blob/master/test/aaa_profiling/test_memusage.py
 .   This is the product of tons of different observed and reported issues 
about this operation or that operation forming constructs that would take up 
too much memory, wouldn’t be garbage collected when expected, etc.  

Yet somehow I still value very much the work that implicit GC does for me, and I 
understand well when it is going to happen.  I don’t decide that the whole 
world should be forced to never have GC again.  I’m sure you wouldn’t be happy 
if I got Guido to drop garbage collection from Python because I showed how 
sometimes it makes my life more difficult, therefore we should all be managing 
memory explicitly.
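A stripped-down illustration of the kind of check such a teardown test makes, using only the stdlib (this is not SQLAlchemy's actual test code):

```python
import gc
import weakref

class Construct:
    # stand-in for a library object that should tear itself down
    pass

obj = Construct()
ref = weakref.ref(obj)
del obj         # drop the only strong reference
gc.collect()    # force a collection pass
assert ref() is None   # freed implicitly; no explicit free() anywhere
```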

I’m sure my agenda here is pretty transparent.  If explicit async becomes the 
only way to go, SQLAlchemy basically closes down.   I’d have to rewrite it 
completely (after waiting for all the DBAPIs that don’t exist to be written, 
why doesn’t anyone ever seem to be concerned about that?) , and it would run 
much less efficiently due to the massive amount of additional function call 
overhead incurred by the explicit coroutines.   It’s a pointless amount of 
verbosity within a scripting language.  

> 
> Really that’s what it comes down to. Either you need to enable explicit 
> context switches (via callbacks or yielding, or whatever) or you need to add 
> explicit locks. Neither solution allows you to pretend that context switching 
> isn’t going to happen nor prevents you from having to deal with it. The 
> reason I prefer explicit async is because the failure mode is better (if I 
> forget to yield I don’t get the actual value so my thing blows up in 
> development) and it ironically works more like blocking programming because I 
> won’t get an implicit context switch in the middle of a function. Compare 
> that to the implicit async where the failure mode is that at runtime 
> something weird happens.
> ---
> Donald Stufft
> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Donald Stufft

> On Nov 23, 2014, at 7:55 PM, Mike Bayer  wrote:
> 
>> 
>> On Nov 23, 2014, at 7:30 PM, Donald Stufft  wrote:
>> 
>> 
>>> On Nov 23, 2014, at 7:21 PM, Mike Bayer  wrote:
>>> 
>>> Given that, I’ve yet to understand why a system that implicitly defers CPU 
>>> use when a routine encounters IO, deferring to other routines, is relegated 
>>> to the realm of “magic”.   Is Python reference counting and garbage 
>>> collection “magic”?How can I be sure that my program is only declaring 
>>> memory, only as much as I expect, and then freeing it only when I 
>>> absolutely say so, the way async advocates seem to be about IO?   Why would 
>>> a high level scripting language enforce this level of low-level bookkeeping 
>>> of IO calls as explicit, when it is 100% predictable and automatable ?
>> 
>> The difference is that in the many years of Python programming I’ve had to 
>> think about garbage collection all of once. I’ve yet to write a non trivial 
>> implicit IO application where the implicit context switch didn’t break 
>> something and I had to think about adding explicit locks around things.
> 
> that’s your personal experience, how is that an argument?  I deal with the 
> Python garbage collector, memory management, etc. *all the time*.   I have a 
> whole test suite dedicated to ensuring that SQLAlchemy constructs tear 
> themselves down appropriately in the face of gc and such: 
> https://github.com/zzzeek/sqlalchemy/blob/master/test/aaa_profiling/test_memusage.py
>  .   This is the product of tons of different observed and reported issues 
> about this operation or that operation forming constructs that would take up 
> too much memory, wouldn’t be garbage collected when expected, etc.  
> 
> Yet somehow I still value very much the work that implicit GC does for me and 
> I understand well when it is going to happen.  I don’t decide that that whole 
> world should be forced to never have GC again.  I’m sure you wouldn’t be 
> happy if I got Guido to drop garbage collection from Python because I showed 
> how sometimes it makes my life more difficult, therefore we should all be 
> managing memory explicitly.

Eh, maybe you need to do that, that’s fine I suppose. Though the choice isn’t 
between something with a very clear failure condition and something with a 
“weird things start happening” failure condition. It’s between “weird things 
start happening” and “weird things start happening, just less often”. 
Implicit context switches introduce a new, harder-to-debug failure mode over 
blocking code that explicit context switches do not.

> 
> I’m sure my agenda here is pretty transparent.  If explicit async becomes the 
> only way to go, SQLAlchemy basically closes down.   I’d have to rewrite it 
> completely (after waiting for all the DBAPIs that don’t exist to be written, 
> why doesn’t anyone ever seem to be concerned about that?) , and it would run 
> much less efficiently due to the massive amount of additional function call 
> overhead incurred by the explicit coroutines.   It’s a pointless amount of 
> verbosity within a scripting language.  

I don’t really take performance issues that seriously for CPython. If you care 
about performance you should be using PyPy. I like that argument though because 
the same argument is used against the GCs which you like to use as an example 
too.

The verbosity isn’t really pointless: you have to be verbose in either 
situation, either explicit locks or explicit context switches. If you don’t 
have explicit locks you just have buggy software instead.
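A toy sketch of the two options, written with real threads since the programming model is the same (hypothetical counter, not OpenStack code):

```python
import threading

counter = {"n": 0}
lock = threading.Lock()

def bump_unsafe():
    # read-modify-write with a window in the middle: a context switch
    # here (implicit with green threads, preemptive with real ones)
    # can lose updates
    n = counter["n"]
    counter["n"] = n + 1

def bump_safe():
    # the explicit lock makes the critical section visible
    with lock:
        counter["n"] += 1
```

With explicit coroutines the same guarantee comes for free inside a function body, since no switch can occur between the read and the write unless an `await` is written there.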

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Mike Bayer

> On Nov 23, 2014, at 8:23 PM, Donald Stufft  wrote:
> 
> I don’t really take performance issues that seriously for CPython. If you 
> care about performance you should be using PyPy. I like that argument though 
> because the same argument is used against the GCs which you like to use as an 
> example too.
> 
> The verbosity isn’t really pointless, you have to be verbose in either 
> situation, either explicit locks or explicit context switches. If you don’t 
> have explicit locks you just have buggy software instead.

Funny thing is that relational databases will lock on things whether or not the 
calling code is using an async system.  Locks are a necessary thing in many 
cases.  That lock-based concurrency code can’t be mathematically proven bug 
free doesn’t detract from its vast usefulness in situations that are not 
aeronautics or medical devices.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Donald Stufft

> On Nov 23, 2014, at 9:09 PM, Mike Bayer  wrote:
> 
> 
>> On Nov 23, 2014, at 8:23 PM, Donald Stufft  wrote:
>> 
>> I don’t really take performance issues that seriously for CPython. If you 
>> care about performance you should be using PyPy. I like that argument though 
>> because the same argument is used against the GCs which you like to use as 
>> an example too.
>> 
>> The verbosity isn’t really pointless, you have to be verbose in either 
>> situation, either explicit locks or explicit context switches. If you don’t 
>> have explicit locks you just have buggy software instead.
> 
> Funny thing is that relational databases will lock on things whether or not 
> the calling code is using an async system.  Locks are a necessary thing in 
> many cases.  That lock-based concurrency code can’t be mathematically proven 
> bug free doesn’t detract from its vast usefulness in situations that are not 
> aeronautics or medical devices.

Sure, databases will do it regardless, so they aren’t a very useful topic of 
discussion here: their operation is external to the system being developed 
and they will behave the same either way.

There’s a long history of implicit context switches causing buggy software that 
breaks. As far as I can tell the only downsides to explicit context switches 
that don’t stem from an inferior interpreter seem to be “some particular API in 
my head isn’t as easy with it” and “I have to type more letters”. The first one 
I’d just say that constraints make the system and that there are lots of APIs 
which aren’t really possible or easy in Python because of one design decision 
or another. For the second one I’d say that Python isn’t a language which 
attempts to make code shorter, just easier to understand what is going to 
happen when.

Throwing out hyperboles like “mathematically proven” isn’t a particularly 
valuable statement. It is *easier* to reason about what’s going to happen with 
explicit context switches. Maybe you’re a better programmer than I am and 
you’re able to keep in your head every place that might do an implicit context 
switch in an implicit setup and you can look at a function and go “ah yup, 
things are going to switch here and here”. I certainly can’t. I like my 
software to maximize the ability to locally reason about a particular chunk of 
code.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] proposal: alternating weekly meeting time [doodle poll created]

2014-11-23 Thread Richard Jones
Thanks everyone, I've closed the poll. I'm sorry to say that there's no
combination of two timeslots which allows everyone to attend a meeting. Of
the 25 respondents, the best we can do is cater for 24 of you.

Optimising for the maximum number of attendees, the potential meeting times
are 2000 UTC Tuesday and 1000 UTC on one of Monday, Wednesday or Friday. In
all three cases the only person who has indicated they cannot attend is
Lifeless.

Unfortunately, David has indicated that he can't be present at the Tuesday
2000 UTC slot. Optimising for him as a required attendee for both meetings
means we lose an additional attendee, and gives us the Wednesday 2000 UTC
slot and a few options:

- Monday, Wednesday and Thursday at 1200 UTC (Lifeless and ygbo miss)
- Friday at 1200 UTC (Lifeless and Jaromir Coufal miss)

If anyone else would like to play with the timeslots and numbers, I can
pass on the excel sheet and my code :)

According to the meetings wiki page, we should be able to get an IRC room
at any of the above times.


  Richard

On Wed Nov 19 2014 at 9:15:40 AM Richard Jones 
wrote:

> Thanks everyone who has responded. I'm going to leave the poll open until
> the weekend to allow for stragglers to get their times in, and then close
> it and we can see what the results are.
>
> The poll is at https://doodle.com/47h3f35nad62ncnf scroll to the far
> right to set your timezone.
>
>
>  Richard
> On 12 November 2014 12:45, Richard Jones  wrote:
>
>> I have set up a doodle poll to let folk enter their preferred times. It's
>> in UTC/GMT (/London time, because doodle) so use something like
>> http://everytimezone.com/ to figure that out :)
>>
>> https://doodle.com/47h3f35nad62ncnf
>>
>>
>>  Richard
>>
>> On 11 November 2014 18:46, Matthias Runge  wrote:
>>
>>> On 11/11/14 08:09, Richard Jones wrote:
>>> > Hi all,
>>> >
>>> > At the summit meetup last week I proposed that the Horizon weekly
>>> > meeting time alternate between the current time and something more
>>> > suitable for those of us closer to UTC+10. I'd like to get an
>>> indication
>>> > of the interest in this, and I'll look into getting a second meeting
>>> > time booked for alternating weeks based on your feedback.
>>> >
>>> > As a starting point, I'd like to suggest the times alternate between
>>> UTC
>>> >  and 1600 (the current time).
>>>
>>> Sadly, both times don't work for me. I would propose something like 8
>>> UTC, which should work for most folks located in EU and east, or 18 UTC.
>>>
>>> Matthias
>>>
>>>
>>
>>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Anyone Using the Open Solaris ZFS Driver?

2014-11-23 Thread Mike Perez
On 07:27 Tue 18 Nov , Duncan Thomas wrote:
> Is the new driver drop-in compatible with the old one? IF not, can existing
> systems be upgraded to the new driver via some manual steps, or is it
> basically a completely new driver with similar functionality?
> 
> On 17 November 2014 07:08, Drew Fisher  wrote:
> > We (here at Oracle) have a replacement for this driver which includes
> > local ZFS, iSCSI and FC drivers all with ZFS as the underlying driver.
> > We're in the process of getting CI set up so we can contribute the
> > driver upstream along with our ZFSSA driver (which is already in the
> tree).
> >
> > If anybody has more questions about this, please let me know. The
> > driver is in the open for folks to look at and if anybody wants us to
> > start upstream integration for it, we'll be happy to do so.
> >
> > -Drew
> >
> >
> > On 11/16/14, 8:45 PM, Mike Perez wrote:
> >> The Open Solaris ZFS driver [1] is currently missing a lot of the minimum
> >> features [2] that the Cinder team requires with all drivers. As a
> result, it's
> >> really broken.
> >>
> >> I wanted to gauge who is using it, and if anyone was interested in
> fixing the
> >> driver. If there is not any activity with this driver, I would like to
> propose
> >> it to be deprecated for removal.
> >>
> >> [1] -
> https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/san/solaris.py
> >> [2] -
> http://docs.openstack.org/developer/cinder/devref/drivers.html#minimum-features
> >>
> >
> 
> -- 
> Duncan Thomas

Drew, can you answer Duncan's question? I would like to get a head start on
deprecating the driver, or else confirm that your replacement this release will
be compatible with the existing one.

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-23 Thread Solly Ross
Whoops, that should say "assertions" not "exceptions".

- Original Message -
> From: "Solly Ross" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Monday, November 24, 2014 12:00:44 AM
> Subject: Re: [openstack-dev] [nova] Proposal new hacking rules
> 
> Well, at least the message for exceptions in Nova says "expected" and
> "observed".
> I suspect that it's part of our custom test case classes.
> 
> Best Regards,
> Solly Ross
> 
> 
> - Original Message -
> > From: "Matthew Treinish" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Sent: Friday, November 21, 2014 5:23:28 PM
> > Subject: Re: [openstack-dev] [nova] Proposal new hacking rules
> > 
> > On Fri, Nov 21, 2014 at 04:15:07PM -0500, Sean Dague wrote:
> > > On 11/21/2014 01:52 PM, Matthew Treinish wrote:
> > > > On Fri, Nov 21, 2014 at 07:15:49PM +0100, jordan pittier wrote:
> > > >> Hey,

Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-23 Thread Solly Ross
Well, at least the message for exceptions in Nova says "expected" and 
"observed".
I suspect that it's part of our custom test case classes.

Best Regards,
Solly Ross


- Original Message -
> From: "Matthew Treinish" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Friday, November 21, 2014 5:23:28 PM
> Subject: Re: [openstack-dev] [nova] Proposal new hacking rules
> 
> On Fri, Nov 21, 2014 at 04:15:07PM -0500, Sean Dague wrote:
> > On 11/21/2014 01:52 PM, Matthew Treinish wrote:
> > > On Fri, Nov 21, 2014 at 07:15:49PM +0100, jordan pittier wrote:
> > >> Hey,
> > >> I am not a Nova developer but I still have an opinion.
> > >>
> > >>> Using boolean assertions
> > >> I like what you propose. We should use and enforce the assert* that best
> > >> matches the intention. It's about semantics, and the more precise we are,
> > >> the better.
> > >>
> > >>> Using same order of arguments in equality assertions
> > >> Why not. But I don't know how we can write a Hacking rule for this. So
> > >> you may fix all the occurrences now, but they might creep back in
> > >> the future.
> > > 
> > > Ok, I'll bite. Besides the enforceability issue which you pointed out, it
> > > just doesn't make any sense: you're asserting two things are equal, and
> > > (A == B) == (B == A). I honestly feel that it goes beyond nitpicking
> > > because of that.
> > > 
> > > It's also a fallacy that there will always be an observed value and an
> > > expected value. For example:
> > > 
> > >   self.assertEqual(method_a(), method_b())
> > > 
> > > Which one is observed and which one is expected? I think this proposal
> > > is just reading into the parameter names a bit too much.
> > 
> > If you are using assertEqual with 2 variable values... you are doing
> > your test wrong.
> > 
> > I was originally in your camp. But honestly, the error message provided
> > to the user does say expected and observed, and teaching everyone that
> > you have to ignore the error message is a much harder thing to do than
> > flip the code to conform to it, creating less confusion.
> > 
> 
> Uhm, no it doesn't; the default error message is "A != B" [1][2][3] (with
> both unittest and testtools). If the error message were like that, then
> sure, saying one way was right over the other would be fine (assuming you
> didn't specify a different error message), but that's not what it does.
> 
> 
> [1]
> https://github.com/testing-cabal/testtools/blob/master/testtools/testcase.py#L340
> [2]
> https://github.com/testing-cabal/testtools/blob/master/testtools/matchers/_basic.py#L85
> [3] https://hg.python.org/cpython/file/301d62ef5c0b/Lib/unittest/case.py#l508
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Online midcycle meetup

2014-11-23 Thread Angus Salkeld
On Fri, Nov 21, 2014 at 1:31 AM, Brad Topol  wrote:

> Angus,
>
> This may sound crazy but what if in addition to having the online meetup
> you denoted two different locations as an optional physical meetup?
>
> That way you would get some of the benefits of having folks meet together
> in person while not forcing everyone to have to travel across the globe. So
> for example, if you had one location in Raleigh and one wherever else folks
> are co-located, you could still get the benefits of having some group of
> folks collaborating face to face.
>

Hi Brad

Yeah, that might help.
I'll leave it to people in these locations to chip in (I am in Brisbane, AU,
and there are not too many Heat hackers close by).

-Angus


>
> Just a thought.
>
> --Brad
>
> Brad Topol, Ph.D.
> IBM Distinguished Engineer
> OpenStack
> (919) 543-0646
> Internet:  bto...@us.ibm.com
> Assistant: Kendra Witherspoon (919) 254-0680
>
>
>
> From: Angus Salkeld 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 11/19/2014 06:56 PM
> Subject: [openstack-dev] [Heat] Online midcycle meetup
> --
>
>
>
> Hi all
>
> As agreed from our weekly meeting we are going to try an online meetup.
>
> Why?
>
> We did a poll (https://doodle.com/b9m4bf8hvm3mna97#table) and it is
> split quite evenly by location. The story I am getting from the community
> is:
>
> "We want a midcycle meetup if it is nearby, but are having trouble getting
> finance
> to travel far."
>
> Given that the Heat community is evenly spread across the globe, this
> becomes
> impossible to hold without excluding a significant group.
>
> So let's try and figure out how to do an online meetup!
> (but let's not spend 99% of the time arguing about the software to use
> please)
>
> I think more interesting is:
>
> 1) How do we minimize the time zone pain?
> 2) Can we make each session really focused so we are productive?
> 3) If we do this right it does not have to be "midcycle" but whenever we
> want.
>
> I'd be interested in feedback from others that have tried this too.
>
> -Angus
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev