[Openstack] problem with uploading virtual image

2012-03-02 Thread Mahdi Njim
Hi
I want to upload an image to openstack, but even after I set the proper
credentials, when I use uec-publish-tarball I get this error message:
cert must be specified.
private key must be specified.
user must be specified.
ec2cert must be specified.
When I type euca-describe-images or nova list it works just fine.
Any Ideas??
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] problem with uploading virtual image

2012-03-02 Thread Razique Mahroua
Hi Mahdi, how did you set them? Via env variables or in your bashrc (or any sourceable file)?
Thanks
Nuage & Co - Razique Mahroua razique.mahr...@gmail.com
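For what it's worth, those four errors come from the bundling step, which reads its credentials from environment variables, rather than from the access/secret keys that euca-describe-images and nova use -- which would explain why those commands work while uec-publish-tarball fails. A sketch of the variables to check (the paths below are placeholders, not your actual credential locations):

```shell
# Placeholder paths -- point these at wherever your credentials were unpacked.
export EC2_USER_ID=42
export EC2_PRIVATE_KEY="${HOME}/creds/pk.pem"
export EC2_CERT="${HOME}/creds/cert.pem"
export EUCALYPTUS_CERT="${HOME}/creds/cacert.pem"   # reported as "ec2cert"
```

If these are only exported in one shell, remember they must also be set (e.g. via a sourced rc file) in the shell that actually runs uec-publish-tarball.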

On 2 March 2012 at 09:39, Mahdi Njim wrote:
> Hi
> I want to upload an image to openstack, but even after I set the proper
> credentials, when I use uec-publish-tarball I get this error message:
> cert must be specified.
> private key must be specified.
> user must be specified.
> ec2cert must be specified.
> When I type euca-describe-images or nova list it works just fine.
> Any Ideas??


Re: [Openstack] Essex-4 milestone available for Keystone, Glance, Nova and Horizon

2012-03-02 Thread Thierry Carrez
Alexey Eromenko wrote:
>> https://launchpad.net/keystone/essex/essex-4
>> https://launchpad.net/glance/essex/essex-4
>> https://launchpad.net/nova/essex/essex-4
>> https://launchpad.net/horizon/essex/essex-4
> 
> Great news, indeed !
> 
> I would like to ask about "openstack-manuals-M4" and "swift-M4".
> What about those projects ?
> (Since they are core projects, having a M4 release for them is important)

Swift does not follow the common milestones. The latest release is
1.4.6, and the last release available will be included in the OpenStack
2012.1 common release.

Openstack Manuals is not a core project, it's the official OpenStack
documentation effort. So it's not formally "released" at milestones.

> Also, when is RC1 scheduled ?
> About March 15?

The release candidate (for each project) will be built once we get rid
of all targeted bugs, in all cases before the end of the month. I hope
we'll reach that point in two weeks, yes.

Regards,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Essex-4 milestone available for Keystone, Glance, Nova and Horizon

2012-03-02 Thread Christian Berendt
Hi ttx.

On 03/02/2012 11:18 AM, Thierry Carrez wrote:
> Swift does not follow the common milestones. The latest release is
> 1.4.6, and the last release available will be included in the OpenStack
> 2012.1 common release.

Will this be fixed in the future?

Bye, Christian.

-- 
Christian Berendt
Linux/Unix Consultant & Developer
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537



Re: [Openstack] eventlet weirdness

2012-03-02 Thread Day, Phil
In our experience (running clusters of several hundred nodes) the DB 
performance is not generally the significant factor, so making its calls 
non-blocking  gives only a very small increase in processing capacity and 
creates other side effects in terms of slowing all eventlets down as they wait 
for their turn to run.

That shouldn't really be surprising given that the Nova DB is pretty small and 
MySQL is a pretty good DB - throw reasonable hardware at the DB server and give 
it a bit of TLC from a DBA (remove deleted entries from the DB, add indexes 
where the slow query log tells you to, etc) and it shouldn't be the bottleneck 
in the system for performance or scalability.

We use the python driver and have experimented with allowing the eventlet code 
to make the db calls non-blocking (it's not the default setting), and it works, 
but didn't give us any significant advantage.

For example in the API server (before we made it properly multi-threaded) with 
blocking db calls the server was essentially a serial processing queue - each 
request was fully processed before the next.  With non-blocking db calls we got 
a lot more apparent concurrency, but only at the expense of making all of the 
requests equally bad.

Consider a request takes 10 seconds, where after 5 seconds there is a call to 
the DB which takes 1 second, and three are started at the same time:

Blocking:
0 - Request 1 starts
10 - Request 1 completes, request 2 starts
20 - Request 2 completes, request 3 starts
30 - Request 3 completes
Request 1 completes in 10 seconds
Request 2 completes in 20 seconds
Request 3 completes in 30 seconds
Ave time: 20 sec


Non-blocking
0 - Request 1 Starts
5 - Request 1 gets to db call, request 2 starts
10 - Request 2 gets to db call, request 3 starts
15 - Request 3 gets to db call, request 1 resumes
19 - Request 1 completes, request 2 resumes
23 - Request 2 completes,  request 3 resumes
27 - Request 3 completes

Request 1 completes in 19 seconds (+ 9 seconds)
Request 2 completes in 23 seconds (+ 3 seconds)
Request 3 completes in 27 seconds (- 3 seconds)
Ave time: 23 sec
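Phil's numbers can be checked with a small single-threaded simulation of the model (a sketch of the arithmetic only, nothing to do with actual nova code): each request needs 5 seconds of CPU, then a 1-second DB call, then 4 more seconds of CPU, and in the non-blocking case the CPU is handed to the next request while a DB call is in flight.

```python
def blocking(n):
    # The DB call blocks the whole process, so requests run strictly serially.
    t, done = 0, []
    for _ in range(n):
        t += 5 + 1 + 4
        done.append(t)
    return done

def non_blocking(n):
    # Phase 1: each request runs its first 5s of CPU, then yields on its DB call.
    t, resume = 0, []
    for _ in range(n):
        t += 5
        resume.append(t + 1)      # DB result ready 1s after the call is issued
    # Phase 2: requests resume in order and run their final 4s of CPU.
    done = []
    for r in resume:
        t = max(t, r) + 4
        done.append(t)
    return done

print(blocking(3))       # [10, 20, 30]
print(non_blocking(3))   # [19, 23, 27]
```

So total wall-clock time barely improves (27s vs 30s); the early requests mostly just get worse so the later ones can get better.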
 
So instead of worrying about making db calls non-blocking we've been working to 
make certain eventlets non-blocking - i.e. add sleep(0) calls to long-running 
iteration loops - which IMO has a much bigger impact on the apparent latency 
of the system.
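The effect of those sleep(0) calls can be sketched with plain generators standing in for greenthreads (a toy round-robin scheduler, not eventlet's actual hub; the task names are made up): without the yield, the short request could not run until the long loop finished.

```python
def long_loop(n, trace):
    # e.g. a periodic task iterating over many instances
    for i in range(n):
        trace.append("sync instance %d" % i)   # one unit of work
        yield                                  # the sleep(0): hand control back

def api_request(trace):
    trace.append("api request handled")
    yield

def hub(tasks):
    # Toy round-robin scheduler standing in for eventlet's hub.
    while tasks:
        task = tasks.pop(0)
        try:
            next(task)
            tasks.append(task)     # task yielded; let it run again later
        except StopIteration:
            pass                   # task finished; drop it

trace = []
hub([long_loop(3, trace), api_request(trace)])
print(trace)
# ['sync instance 0', 'api request handled', 'sync instance 1', 'sync instance 2']
```

The API request completes after one unit of the long loop instead of waiting for all of it -- exactly the latency improvement Phil describes.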

Phil



-Original Message-
From: openstack-bounces+philip.day=hp@lists.launchpad.net 
[mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of 
Brian Lamar
Sent: 01 March 2012 21:31
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] eventlet weirdness

>> How is MySQL access handled in eventlet? Presumably it's external C 
>> library so it's not going to be monkey patched. Does that make every 
>> db access call a blocking call? Thanks,

> Nope, it goes through a thread pool.

I feel like this might be an over-simplification. If the question is:

"How is MySQL access handled in nova?"

The answer would be that we use SQLAlchemy which can load any number of 
SQL-drivers. These drivers can be either pure Python or C-based drivers. In the 
case of pure Python drivers, monkey patching can occur and db calls are 
non-blocking. In the case of drivers which contain C code (or perhaps other 
blocking calls), db calls will most likely be blocking.

If the question is "How is MySQL access handled in eventlet?" the answer would 
be to use the eventlet.db_pool module to allow db access using thread pools.
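The monkey-patching distinction Brian draws can be sketched in plain Python (toy stand-ins, not eventlet's actual internals): patching rebinds a module-level Python name, which is why a pure-Python driver doing its I/O through Python's socket layer picks up the green version, while a C driver calling the OS directly never sees the patch.

```python
import types

# Toy stand-in for the blocking socket layer a pure-Python driver calls into.
net = types.SimpleNamespace(recv=lambda: "blocked in real recv()")

def green_recv():
    # stand-in for eventlet's cooperative version, which yields to the hub
    # instead of blocking the OS thread while waiting for data
    return "yielded to hub, resumed with data"

# Monkey patching is just rebinding the name the driver will look up at call
# time -- which is why it only works for code that goes through Python names.
net.recv = green_recv

print(net.recv())   # prints: yielded to hub, resumed with data
# A C driver issues the recv() syscall directly, bypassing this rebinding,
# so its calls remain blocking regardless of any monkey patching.
```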

B

-Original Message-
From: "Adam Young" 
Sent: Thursday, March 1, 2012 3:27pm
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] eventlet weirdness

On 03/01/2012 02:45 PM, Yun Mao wrote:
> There has been plenty of eventlet discussion recently, but I'll stick my 
> question in this thread, although it's pretty much a separate 
> question. :)
>
> How is MySQL access handled in eventlet? Presumably it's external C 
> library so it's not going to be monkey patched. Does that make every 
> db access call a blocking call? Thanks,

Nope, it goes through a thread pool.
>
> Yun
>
> On Wed, Feb 29, 2012 at 9:18 PM, Johannes Erdfelt  
> wrote:
>> On Wed, Feb 29, 2012, Yun Mao  wrote:
>>> Thanks for the explanation. Let me see if I understand this.
>>>
>>> 1. Eventlet will never have this problem if there is only 1 OS 
>>> thread
>>> -- let's call it main thread.
>> In fact, that's exactly what Python calls it :)
>>
>>> 2. In Nova, there is only 1 OS thread unless you use xenapi and/or 
>>> the virt/firewall driver.
>>> 3. The python logging module uses locks. Because of the monkey 
>>> patch, those locks are actually eventlet or "green" locks and may 
>>> trigger a green thread context switch.
>>>
>>> Based on 1-3, does it make sense to say that in the other OS threads 
>>> (i.e. not main thread), if logging (plus other pure python library 
>>> code involving locking) is never used, and we do not run a eventlet 
>>> hub at all, we should never s

Re: [Openstack] Essex-4 milestone available for Keystone, Glance, Nova and Horizon

2012-03-02 Thread Alexey Eromenko
> On 03/02/2012 11:18 AM, Thierry Carrez wrote:
>> Swift does not follow the common milestones. The latest release is
>> 1.4.6, and the last release available will be included in the OpenStack
>> 2012.1 common release.
>
> Will this be fixed in the future?

You mean syncing swift with O-S milestones ?
Well, I'm not sure it needs to be "fixed".

Reason: Swift is useful without other components.
(Assuming stable API and conservative release cycle.)

Wikipedia chooses OpenStack Swift:
http://blog.wikimedia.org/2012/02/09/scaling-media-storage-at-wikimedia-with-swift/

... but I definitely believe that O-S-manuals need milestone
releases, to be tested together with the rest.
(Currently parts of the docs do not match the software. At least the Nova docs.)
-- 
-Alexey Eromenko "Technologov"



Re: [Openstack] Essex-4 milestone available for Keystone, Glance, Nova and Horizon

2012-03-02 Thread Thierry Carrez
Christian Berendt wrote:
> Hi ttx.
> 
> On 03/02/2012 11:18 AM, Thierry Carrez wrote:
>> Swift does not follow the common milestones. The latest release is
>> 1.4.6, and the last release available will be included in the OpenStack
>> 2012.1 common release.
> 
> Will this be fixed in the future?

I would definitely prefer that all core projects follow the same
(time-based) milestones, as it makes the release management work (and
the communication around what's being worked on, which helps building a
community) a lot simpler.

That said, I respect that Swift is in a more mature,
slower-rate-of-change development state that makes frequent releases
more appropriate. So before pushing for convergence, I would encourage a
model that also does frequent releases for the other core projects.

Expect a discussion on that at the design summit.

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Essex-4 milestone available for Keystone, Glance, Nova and Horizon

2012-03-02 Thread Chmouel Boudjnah
On Fri, Mar 2, 2012 at 10:23 AM, Christian Berendt
 wrote:
>> Swift does not follow the common milestones. The latest release is
>> 1.4.6, and the last release available will be included in the OpenStack
>> 2012.1 common release.
> Will this be fixed in the future?

IMO I don't see it as a bug, as swift has a need for more frequent releases.

Chmouel.



Re: [Openstack] Essex-4 milestone available for Keystone, Glance, Nova and Horizon

2012-03-02 Thread Christian Berendt
> You mean syncing swift with O-S milestones ?
> Well, I'm not sure it needs to be "fixed".

Yes.

> Reason: Swift is useful without other components.
> (Assuming stable API and conservative release cycle.)

I can also use Glance without the other components, and it's also included
in the release cycles.

I would find it more intuitive to have one release cycle for all core
and incubator projects. At the moment only swift has a different release cycle.

That having been said, there is no real problem...

Bye, Christian.

-- 
Christian Berendt
Linux/Unix Consultant & Developer
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537



Re: [Openstack] Essex-4 milestone available for Keystone, Glance, Nova and Horizon

2012-03-02 Thread Chuck Short
On Fri, 2 Mar 2012 10:45:36 +0800
Shake Chen  wrote:

> Hi
> 
> In Ubuntu 12.04, the pool does not seem to be updated.
> 
> The packages are still old, like keystone:
> http://archive.ubuntu.com/ubuntu/pool/universe/k/keystone/
> 
> # apt-cache policy keystone
> keystone:
>   Installed: (none)
>   Candidate: 2012.1~e4~20120203.1574-0ubuntu2
>   Version table:
>  2012.1~e4~20120203.1574-0ubuntu2 0
> 500 http://cn.archive.ubuntu.com/ubuntu/ precise/universe
> amd64 Packages
> 
> 
> On Fri, Mar 2, 2012 at 3:27 AM, Thierry Carrez
> wrote:
> 
> > Hi everyone,
> >
> > The last milestone of the Essex cycle is now available for Keystone,
> > Glance, Nova and Horizon. It provides a feature-complete beta of the
> > upcoming 2012.1 release, scheduled for final release on April 5.
> >
> > You can see the full list of new features and fixed bugs, as well as
> > tarball downloads, at:
> >
> > https://launchpad.net/keystone/essex/essex-4
> > https://launchpad.net/glance/essex/essex-4
> > https://launchpad.net/nova/essex/essex-4
> > https://launchpad.net/horizon/essex/essex-4
> >
> > The work now shifts to testing and preparing our final release
> > candidates. You should target release-critical bugs for each
> > project to their essex-rc1 milestone. We have one month left to
> > make OpenStack 2012.1 rock!
> >
> > Regards,
> >
> > --
> > Thierry Carrez (ttx)
> > Release Manager, OpenStack
> >
> >
> 
> 
> 

Hi,

It will be done today; developers need sleep as well.

Regards
chuck



[Openstack] The road to Essex release (a.k.a. what is RC1?)

2012-03-02 Thread Thierry Carrez
Hello everyone,

With E4 out of the door, here is some explanation on how we'll
collectively handle the last 5 weeks before release. The idea for each
project is to come up, in the weeks before final release, with a release
candidate that they are happy with. On April 5 we'll release all those
candidates under the common OpenStack 2012.1 "Essex" umbrella. The
sooner we get the final RC, the easier it is for everyone.

To that effect, we created essex-rc1 milestones for each of
{Glance,Nova,Keystone,Horizon}, to which bugs can be targeted. When the
bug list is empty, that means we think we've come up with a potential
release candidate, which we will publish. At the same time, we'll open
Folsom for development.

So if you think a given bug absolutely needs to be fixed before Essex
can come out, make sure it is targeted to essex-rc1. If you can't do
that yourself, ping me or one of the project's drivers to do it for you.

In case release-critical bugs are found after the RC1 is published,
we'll start another RC cycle (essex-rc2), etc. As we get closer to final
release date, we get more picky as to whether a bug is indeed
release-critical, or just a known issue we'll document and have to live
with. Expect bugs to be removed from the RC list.

Starting now, core reviewers should apply extra caution in their
reviews. New features are forbidden, and bugfixes that are likely to
introduce a regression should be avoided (a known bug is often better
than an unknown regression). In order to minimize the impact for
testers, technical writers, translators and packagers, bugfixes that
change the DB schema, add files, add configuration options, add
dependencies or add new messages should also be avoided if possible.

In a couple of weeks, we'll switch to extreme caution, and only targeted
bugs will be allowed in the release branch.

PS: Swift will use 1.4.7 as their Essex release candidate, so expect
conservative changes in that version.

Happy testing !

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Essex-4 horizon error

2012-03-02 Thread Thierry Carrez
darkfower wrote:
> I got the horizon (essex-4) package from
> https://launchpad.net/horizon/essex/essex-4/+download/horizon-2012.1%7Ee4.tar.gz,
> but after I untar the package, when I run python setup.py
> install I get an error, as follows:
> 
> Traceback (most recent call last):
>   File "setup.py", line 35, in 
> long_description=read('README.rst'),
>   File "setup.py", line 27, in read
> return open(os.path.join(os.path.dirname(__file__), fname)).read()
> IOError: [Errno 2] No such file or directory: 'README.rst'

Yes, apparently the Horizon E4 tarball is very incomplete.

Unfortunately, nobody seems to actually test the milestone-proposed
tarballs, so this kind of issue is only detected after publication. The
fact that the directory structure was silently changed just before E4
doesn't really help either.

Follow https://bugs.launchpad.net/horizon/+bug/944763 for progress on
this issue.

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] eventlet weirdness

2012-03-02 Thread Jay Pipes

On 03/02/2012 05:34 AM, Day, Phil wrote:

In our experience (running clusters of several hundred nodes) the DB 
performance is not generally the significant factor, so making its calls 
non-blocking  gives only a very small increase in processing capacity and 
creates other side effects in terms of slowing all eventlets down as they wait 
for their turn to run.


Yes, I believe I said that this was the case at the last design summit 
-- or rather, I believe I said "is there any evidence that the database 
is a performance or scalability problem at all"?



That shouldn't really be surprising given that the Nova DB is pretty small and 
MySQL is a pretty good DB - throw reasonable hardware at the DB server and give 
it a bit of TLC from a DBA (remove deleted entries from the DB, add indexes 
where the slow query log tells you to, etc) and it shouldn't be the bottleneck 
in the system for performance or scalability.


++


We use the python driver and have experimented with allowing the eventlet code 
to make the db calls non-blocking (it's not the default setting), and it works, 
but didn't give us any significant advantage.


Yep, identical results to the work that Mark Washenberger did on the 
same subject.



For example in the API server (before we made it properly multi-threaded)


By "properly multi-threaded" are you instead referring to making the 
nova-api server multi-*processed* with eventlet greenthread pools in 
each process? i.e. The way Swift (and now Glance) works? Or are you 
referring to a different approach entirely?


> with blocking db calls the server was essentially a serial processing 
queue - each request was fully processed before the next.  With 
non-blocking db calls we got a lot more apparent concurrency, but only at 
the expense of making all of the requests equally bad.


Yep, not surprising.


Consider a request takes 10 seconds, where after 5 seconds there is a call to 
the DB which takes 1 second, and three are started at the same time:

Blocking:
0 - Request 1 starts
10 - Request 1 completes, request 2 starts
20 - Request 2 completes, request 3 starts
30 - Request 3 completes
Request 1 completes in 10 seconds
Request 2 completes in 20 seconds
Request 3 completes in 30 seconds
Ave time: 20 sec

Non-blocking
0 - Request 1 Starts
5 - Request 1 gets to db call, request 2 starts
10 - Request 2 gets to db call, request 3 starts
15 - Request 3 gets to db call, request 1 resumes
19 - Request 1 completes, request 2 resumes
23 - Request 2 completes,  request 3 resumes
27 - Request 3 completes

Request 1 completes in 19 seconds (+ 9 seconds)
Request 2 completes in 23 seconds (+ 3 seconds)
Request 3 completes in 27 seconds (- 3 seconds)
Ave time: 23 sec

So instead of worrying about making db calls non-blocking we've been working to 
make certain eventlets non-blocking - i.e. add sleep(0) calls to long-running 
iteration loops - which IMO has a much bigger impact on the apparent latency 
of the system.


Yep, and I think adding a few sleep(0) calls in various places in the 
Nova codebase (as was recently added in the _sync_power_states() 
periodic task) is an easy and simple win with pretty much no ill 
side-effects. :)


Curious... do you have a list of all the places where sleep(0) calls 
were inserted in the HP Nova code? I can turn that into a bug report and 
get to work on adding them...


All the best,
-jay



Re: [Openstack] eventlet weirdness

2012-03-02 Thread Armando Migliaccio


> -Original Message-
> From: openstack-bounces+armando.migliaccio=eu.citrix@lists.launchpad.net
> [mailto:openstack-
> bounces+armando.migliaccio=eu.citrix@lists.launchpad.net] On Behalf Of Jay
> Pipes
> Sent: 02 March 2012 15:17
> To: openstack@lists.launchpad.net
> Subject: Re: [Openstack] eventlet weirdness
> 
> On 03/02/2012 05:34 AM, Day, Phil wrote:
> > In our experience (running clusters of several hundred nodes) the DB
> performance is not generally the significant factor, so making its calls non-
> blocking  gives only a very small increase in processing capacity and creates
> other side effects in terms of slowing all eventlets down as they wait for
> their turn to run.
> 
> Yes, I believe I said that this was the case at the last design summit
> -- or rather, I believe I said "is there any evidence that the database is a
> performance or scalability problem at all"?
> 
> > That shouldn't really be surprising given that the Nova DB is pretty small
> and MySQL is a pretty good DB - throw reasonable hardware at the DB server and
> give it a bit of TLC from a DBA (remove deleted entries from the DB, add
> indexes where the slow query log tells you to, etc) and it shouldn't be the
> bottleneck in the system for performance or scalability.
> 
> ++
> 
> > We use the python driver and have experimented with allowing the eventlet
> code to make the db calls non-blocking (its not the default setting), and it
> works, but didn't give us any significant advantage.
> 
> Yep, identical results to the work that Mark Washenberger did on the same
> subject.
> 
> > For example in the API server (before we made it properly
> > multi-threaded)
> 
> By "properly multi-threaded" are you instead referring to making the nova-api
> server multi-*processed* with eventlet greenthread pools in each process? i.e.
> The way Swift (and now Glance) works? Or are you referring to a different
> approach entirely?
> 
>  > with blocking db calls the server was essentially a serial processing queue
> - each request was fully processed before the next.  With non-blocking db
> calls we got a lot more apparent concurrency, but only at the expense of
> making all of the requests equally bad.
> 
> Yep, not surprising.
> 
> > Consider a request takes 10 seconds, where after 5 seconds there is a call
> to the DB which takes 1 second, and three are started at the same time:
> >
> > Blocking:
> > 0 - Request 1 starts
> > 10 - Request 1 completes, request 2 starts
> > 20 - Request 2 completes, request 3 starts
> > 30 - Request 3 completes
> > Request 1 completes in 10 seconds
> > Request 2 completes in 20 seconds
> > Request 3 completes in 30 seconds
> > Ave time: 20 sec
> >
> > Non-blocking
> > 0 - Request 1 Starts
> > 5 - Request 1 gets to db call, request 2 starts
> > 10 - Request 2 gets to db call, request 3 starts
> > 15 - Request 3 gets to db call, request 1 resumes
> > 19 - Request 1 completes, request 2 resumes
> > 23 - Request 2 completes,  request 3 resumes
> > 27 - Request 3 completes
> >
> > Request 1 completes in 19 seconds (+ 9 seconds) Request 2 completes
> > in 23 seconds (+ 3 seconds) Request 3 completes in 27 seconds (- 3
> > seconds) Ave time: 23 sec
> >
> > So instead of worrying about making db calls non-blocking we've been working
> to make certain eventlets non-blocking - i.e. add sleep(0) calls to long
> running iteration loops - which IMO has a much bigger impact on the
> performance of the apparent latency of the system.
> 
> Yep, and I think adding a few sleep(0) calls in various places in the Nova
> codebase (as was recently added in the _sync_power_states() periodic task) is
> an easy and simple win with pretty much no ill side-effects. :)

I'd be cautious about saying that no ill side-effects were introduced. I found a 
race condition right in the middle of sync_power_states, which I assume was 
exposed by "breaking" the task deliberately.


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Day, Phil
> By "properly multi-threaded" are you instead referring to making the nova-api 
> server multi-*processed* with eventlet greenthread pools in each process? 
> i.e. The way Swift (and now Glance) works? Or are you referring to a 
> different approach entirely?

Yep - following your posting in here pointing to the glance changes we 
back-ported that into the Diablo API server.   We're now running each API 
server with 20 OS processes and 20 EC2 processes, and the world looks a lot 
happier.  I thought the same changes were being made in parallel in Essex by 
someone in the community?

> Curious... do you have a list of all the places where sleep(0) calls were 
> inserted in the HP Nova code? I can turn that into a bug report and get to 
> work on adding them... 

So far the only two cases we've done this are in the _sync_power_state and  in 
the security group refresh handling 
(libvirt/firewall/do_refresh_security_group_rules) - which we modified to only 
refresh for instances in the group and added a sleep in the loop (I need to 
finish writing the bug report for this one).

I have contemplated doing something similar in the image code when reading 
chunks from glance - but am slightly worried that in this case the only thing 
that currently stops two creates for the same image from making separate 
requests to glance might be that one gets queued behind the other.  It would be 
nice to do the same thing on snapshot (as this can also be a real hog), but 
there the transfer is handled completely within the glance client.   A more 
radical approach would be to split out the image handling code from compute 
manager into a separate (co-hosted) image_manager so at least only commands 
which need interaction with glance will block each other.

Phil




-Original Message-
From: openstack-bounces+philip.day=hp@lists.launchpad.net 
[mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of 
Jay Pipes
Sent: 02 March 2012 15:17
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] eventlet weirdness

On 03/02/2012 05:34 AM, Day, Phil wrote:
> In our experience (running clusters of several hundred nodes) the DB 
> performance is not generally the significant factor, so making its calls 
> non-blocking  gives only a very small increase in processing capacity and 
> creates other side effects in terms of slowing all eventlets down as they 
> wait for their turn to run.

Yes, I believe I said that this was the case at the last design summit
-- or rather, I believe I said "is there any evidence that the database is a 
performance or scalability problem at all"?

> That shouldn't really be surprising given that the Nova DB is pretty small 
> and MySQL is a pretty good DB - throw reasonable hardware at the DB server 
> and give it a bit of TLC from a DBA (remove deleted entries from the DB, add 
> indexes where the slow query log tells you to, etc) and it shouldn't be the 
> bottleneck in the system for performance or scalability.

++

> We use the python driver and have experimented with allowing the eventlet 
> code to make the db calls non-blocking (its not the default setting), and it 
> works, but didn't give us any significant advantage.

Yep, identical results to the work that Mark Washenberger did on the same 
subject.

> For example in the API server (before we made it properly 
> multi-threaded)

By "properly multi-threaded" are you instead referring to making the nova-api 
server multi-*processed* with eventlet greenthread pools in each process? i.e. 
The way Swift (and now Glance) works? Or are you referring to a different 
approach entirely?

 > with blocking db calls the server was essentially a serial processing queue 
 > - each request was fully processed before the next.  With non-blocking db 
> calls we got a lot more apparent concurrency, but only at the expense of 
 > making all of the requests equally bad.

Yep, not surprising.

> Consider a request takes 10 seconds, where after 5 seconds there is a call to 
> the DB which takes 1 second, and three are started at the same time:
>
> Blocking:
> 0 - Request 1 starts
> 10 - Request 1 completes, request 2 starts
> 20 - Request 2 completes, request 3 starts
> 30 - Request 3 completes
> Request 1 completes in 10 seconds
> Request 2 completes in 20 seconds
> Request 3 completes in 30 seconds
> Ave time: 20 sec
>
> Non-blocking
> 0 - Request 1 Starts
> 5 - Request 1 gets to db call, request 2 starts
> 10 - Request 2 gets to db call, request 3 starts
> 15 - Request 3 gets to db call, request 1 resumes
> 19 - Request 1 completes, request 2 resumes
> 23 - Request 2 completes,  request 3 resumes
> 27 - Request 3 completes
>
> Request 1 completes in 19 seconds (+ 9 seconds)
> Request 2 completes in 23 seconds (+ 3 seconds)
> Request 3 completes in 27 seconds (- 3 seconds)
> Ave time: 23 sec
>
> So instead of worrying about making db calls non-blocking we've been working 
> to make certain eventlets non-blocking - i.e. add sleep(0) calls at strategic points.
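The timing arithmetic in the quoted example can be checked with a short editorial sketch (the 5 s / 1 s / 4 s split comes from the example above; note that, per the timeline, request 2 actually finishes at t=23 and the non-blocking average works out to 23 s, slightly worse than blocking):

```python
# The timing model from the example above: each request is 5 s of CPU
# work, then a 1 s DB call, then 4 s more CPU work (10 s total).
CPU1, DB, CPU2 = 5, 1, 4

def blocking(n):
    # Serial: each request runs start-to-finish before the next begins.
    t, done = 0, []
    for _ in range(n):
        t += CPU1 + DB + CPU2
        done.append(t)
    return done

def non_blocking(n):
    # Cooperative: each request yields at its DB call; the CPU runs the
    # next request's first half, then requests resume in order.
    done, t, db_ready = [], 0, []
    for _ in range(n):
        t += CPU1                # first half of request i
        db_ready.append(t + DB)  # its DB result is ready 1 s later
    for i in range(n):
        t = max(t, db_ready[i]) + CPU2
        done.append(t)
    return done

print(blocking(3))      # [10, 20, 30] -> average 20 s
print(non_blocking(3))  # [19, 23, 27] -> average 23 s
```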

Re: [Openstack] eventlet weirdness

2012-03-02 Thread Jay Pipes

On 03/02/2012 10:52 AM, Armando Migliaccio wrote:

I'd be cautious to say that no ill side-effects were introduced. I found a race condition 
right in the middle of sync_power_states, which I assume was exposed by 
"breaking" the task deliberately.


Such a party-pooper! ;)

Got a link to the bug report for me?

Thanks!
-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] RHEL 5 & 6 OpenStack image archive...

2012-03-02 Thread J. Marc Edwards
Can someone tell me where these base images are located for use on my
OpenStack deployment?
Kind regards, Marc
-- 

J. Marc Edwards
Lead Architect - Semiconductor Design Portals
Nimbis Services, Inc.
Skype: (919) 747-3775
Cell:  (919) 345-1021
Fax:   (919) 882-8602
marc.edwa...@nimbisservices.com
www.nimbisservices.com

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Jay Pipes

On 03/02/2012 10:54 AM, Day, Phil wrote:

By "properly multi-threaded" are you instead referring to making the nova-api 
server multi-*processed* with eventlet greenthread pools in each process? i.e. The way 
Swift (and now Glance) works? Or are you referring to a different approach entirely?


Yep - following your posting in here pointing to the glance changes we 
back-ported that into the Diablo API server.   We're now running each API 
server with 20 OS processes and 20 EC2 processes, and the world looks a lot 
happier.


Gotcha, OK, that makes a lot of sense.

> The same changes were being done in parallel into Essex by someone in 
the community I thought ?


Hmmm, for Nova? I'm not aware of that effort, but I would certainly 
support it. It's a very big impact performance issue...



Curious... do you have a list of all the places where sleep(0) calls were 
inserted in the HP Nova code? I can turn that into a bug report and get to work 
on adding them...


So far the only two cases we've done this are in _sync_power_states and in 
the security group refresh handling 
(libvirt/firewall/do_refresh_security_group_rules) - which we modified to only 
refresh for instances in the group and added a sleep in the loop (I need to 
finish writing the bug report for this one).
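The pattern Phil describes can be sketched like this (illustrative names, not actual Nova code): yield once per iteration of a long periodic loop so waiting greenthreads get a turn.

```python
import time

def cooperative_yield():
    # Stand-in for eventlet.greenthread.sleep(0); under eventlet's
    # monkey-patching, time.sleep(0) reschedules the current greenthread.
    time.sleep(0)

def sync_power_states(instances, query_hypervisor):
    # Long periodic sweep: without a yield point, this loop would hold
    # the hub for its whole duration and starve API requests.
    states = {}
    for inst in instances:
        states[inst] = query_hypervisor(inst)
        cooperative_yield()  # give waiting greenthreads a turn
    return states

print(sync_power_states(["i-1", "i-2"], lambda inst: "running"))
```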


OK, sounds good.


I have contemplated doing something similar in the image code when reading 
chunks from glance - but am slightly worried that in this case the only thing 
that currently stops two creates for the same image from making separate 
requests to glance might be that one gets queued behind the other.  It would be 
nice to do the same thing on snapshot (as this can also be a real hog), but 
there the transfer is handled completely within the glance client.   A more 
radical approach would be to split out the image handling code from compute 
manager into a separate (co-hosted) image_manager so at least only commands 
which need interaction with glance will block each other.


We should definitely discuss this further (separate ML thread or 
etherpad maybe). If not before the design summit, then definitely at it.


Cheers!
-jay


Phil




-Original Message-
From: openstack-bounces+philip.day=hp@lists.launchpad.net 
[mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of 
Jay Pipes
Sent: 02 March 2012 15:17
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] eventlet weirdness

On 03/02/2012 05:34 AM, Day, Phil wrote:

In our experience (running clusters of several hundred nodes) the DB 
performance is not generally the significant factor, so making its calls 
non-blocking  gives only a very small increase in processing capacity and 
creates other side effects in terms of slowing all eventlets down as they wait 
for their turn to run.


Yes, I believe I said that this was the case at the last design summit
-- or rather, I believe I said "is there any evidence that the database is a 
performance or scalability problem at all"?


That shouldn't really be surprising given that the Nova DB is pretty small and 
MySQL is a pretty good DB - throw reasonable hardware at the DB server and give 
it a bit of TLC from a DBA (remove deleted entries from the DB, add indexes 
where the slow query log tells you to, etc) and it shouldn't be the bottleneck 
in the system for performance or scalability.


++


We use the python driver and have experimented with allowing the eventlet code 
to make the db calls non-blocking (it's not the default setting), and it works, 
but didn't give us any significant advantage.


Yep, identical results to the work that Mark Washenberger did on the same 
subject.


For example in the API server (before we made it properly
multi-threaded)


By "properly multi-threaded" are you instead referring to making the nova-api 
server multi-*processed* with eventlet greenthread pools in each process? i.e. The way 
Swift (and now Glance) works? Or are you referring to a different approach entirely?

  >  with blocking db calls the server was essentially a serial processing 
queue - each request was fully processed before the next.  With non-blocking db 
calls we got a lot more apparent concurrency, but only at the expense of making all 
of the requests equally bad.

Yep, not surprising.


Consider a request takes 10 seconds, where after 5 seconds there is a call to 
the DB which takes 1 second, and three are started at the same time:

Blocking:
0 - Request 1 starts
10 - Request 1 completes, request 2 starts
20 - Request 2 completes, request 3 starts
30 - Request 3 completes
Request 1 completes in 10 seconds
Request 2 completes in 20 seconds
Request 3 completes in 30 seconds
Ave time: 20 sec

Non-blocking
0 - Request 1 Starts
5 - Request 1 gets to db call, request 2 starts
10 - Request 2 gets to db call, request 3 starts
15 - Request 3 gets to db call, request 1 resumes
19 - Request 1 completes, request 2 resumes
23 - Request 2 completes,  request 3 resumes
27 - Request 3 completes

Request 1 completes in 19 seconds (+ 9 seconds)
Request 2 completes in 23 seconds (+ 3 seconds)
Request 3 completes in 27 seconds (- 3 seconds)
Ave time: 23 sec

Re: [Openstack] eventlet weirdness

2012-03-02 Thread Armando Migliaccio
I knew you'd say that :P

There you go: https://bugs.launchpad.net/nova/+bug/944145

Cheers,
Armando

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 02 March 2012 16:22
> To: Armando Migliaccio
> Cc: openstack@lists.launchpad.net
> Subject: Re: [Openstack] eventlet weirdness
> 
> On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
> > I'd be cautious to say that no ill side-effects were introduced. I found a
> race condition right in the middle of sync_power_states, which I assume was
> exposed by "breaking" the task deliberately.
> 
> Such a party-pooper! ;)
> 
> Got a link to the bug report for me?
> 
> Thanks!
> -jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RHEL 5 & 6 OpenStack image archive...

2012-03-02 Thread Edgar Magana (eperdomo)
Hi Marc,

 

I ended up creating my own RHEL 6.1 image. If you want I can share it
with you.

 

Thanks,

 

Edgar Magana

CTO Cloud Computing

 

From: openstack-bounces+eperdomo=cisco@lists.launchpad.net
[mailto:openstack-bounces+eperdomo=cisco@lists.launchpad.net] On
Behalf Of J. Marc Edwards
Sent: Friday, March 02, 2012 8:23 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] RHEL 5 & 6 OpenStack image archive...

 

Can someone tell me where these base images are located for use on my
OpenStack deployment?
Kind regards, Marc

-- 



J. Marc Edwards
Lead Architect - Semiconductor Design Portals
Nimbis Services, Inc.
Skype: (919) 747-3775
Cell:  (919) 345-1021
Fax:   (919) 882-8602
marc.edwa...@nimbisservices.com
www.nimbisservices.com

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Test Dependencies

2012-03-02 Thread Jay Pipes

On 03/01/2012 08:12 PM, Maru Newby wrote:

Is there any interest in adding unittest2 to the test dependencies for 
openstack projects?  I have found its enhanced assertions and 'with 
self.assertRaises' very useful in writing tests.  I see there have been past 
bugs that mentioned unittest2, and am wondering if the reasons for not adopting 
it still stand.


++ to unittest2. Frankly, it's a dependency of sqlalchemy, so it gets 
installed anyway during any installation. Might as well use it IMHO.



Separately, is the use of mox open to discussion?  mock was recently added as a 
dependency to quantum to perform library patching, which isn't supported by mox 
as far as I know.


This is incorrect. pymox's stubout module can be used to perform library 
patching. You can see examples of this in Glance and Nova. For example:


https://github.com/openstack/glance/blob/master/glance/tests/unit/test_swift_store.py#L55

> The ability to do non-replay mocking is another useful feature of 
mock.  I'm not suggesting that mox be replaced, but am wondering if mock 
could be an additional dependency and used when the functionality 
provided by mox falls short.


Might be easier to answer with some example code... would you mind 
pastebin'ing an example that shows what mock can do that mox can't?


Thanks much!
-jay


Thanks,


Maru
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Joshua Harlow
Does anyone else feel that the following seems really "dirty", or is it just me.

"adding a few sleep(0) calls in various places in the
Nova codebase (as was recently added in the _sync_power_states()
periodic task) is an easy and simple win with pretty much no ill
side-effects. :)"

Dirty in that it feels like there is something wrong from a design point of 
view.
Sprinkling "sleep(0)" seems like it's a band-aid on a larger problem imho.
But that's just my gut feeling.

:-(

On 3/2/12 8:26 AM, "Armando Migliaccio"  
wrote:

I knew you'd say that :P

There you go: https://bugs.launchpad.net/nova/+bug/944145

Cheers,
Armando

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 02 March 2012 16:22
> To: Armando Migliaccio
> Cc: openstack@lists.launchpad.net
> Subject: Re: [Openstack] eventlet weirdness
>
> On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
> > I'd be cautious to say that no ill side-effects were introduced. I found a
> race condition right in the middle of sync_power_states, which I assume was
> exposed by "breaking" the task deliberately.
>
> Such a party-pooper! ;)
>
> Got a link to the bug report for me?
>
> Thanks!
> -jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Lorin Hochstein
Looks like a textbook example of a "leaky abstraction" 
 to me.

Take care,

Lorin
--
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com


On Mar 2, 2012, at 1:35 PM, Joshua Harlow wrote:

> Does anyone else feel that the following seems really “dirty”, or is it just 
> me.
> 
> “adding a few sleep(0) calls in various places in the
> Nova codebase (as was recently added in the _sync_power_states()
> periodic task) is an easy and simple win with pretty much no ill
> side-effects. :)”
> 
> Dirty in that it feels like there is something wrong from a design point of 
> view.
> Sprinkling “sleep(0)” seems like it's a band-aid on a larger problem imho. 
> But that’s just my gut feeling.
> 
> :-(
> 
> On 3/2/12 8:26 AM, "Armando Migliaccio"  
> wrote:
> 
> I knew you'd say that :P
> 
> There you go: https://bugs.launchpad.net/nova/+bug/944145
> 
> Cheers,
> Armando
> 
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: 02 March 2012 16:22
> > To: Armando Migliaccio
> > Cc: openstack@lists.launchpad.net
> > Subject: Re: [Openstack] eventlet weirdness
> >
> > On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
> > > I'd be cautious to say that no ill side-effects were introduced. I found a
> > race condition right in the middle of sync_power_states, which I assume was
> > exposed by "breaking" the task deliberately.
> >
> > Such a party-pooper! ;)
> >
> > Got a link to the bug report for me?
> >
> > Thanks!
> > -jay
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Chris Behrens
It's not just you


On Mar 2, 2012, at 10:35 AM, Joshua Harlow wrote:

> Does anyone else feel that the following seems really “dirty”, or is it just 
> me.
> 
> “adding a few sleep(0) calls in various places in the
> Nova codebase (as was recently added in the _sync_power_states()
> periodic task) is an easy and simple win with pretty much no ill
> side-effects. :)”
> 
> Dirty in that it feels like there is something wrong from a design point of 
> view.
> Sprinkling “sleep(0)” seems like it's a band-aid on a larger problem imho. 
> But that’s just my gut feeling.
> 
> :-(
> 
> On 3/2/12 8:26 AM, "Armando Migliaccio"  
> wrote:
> 
> I knew you'd say that :P
> 
> There you go: https://bugs.launchpad.net/nova/+bug/944145
> 
> Cheers,
> Armando
> 
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: 02 March 2012 16:22
> > To: Armando Migliaccio
> > Cc: openstack@lists.launchpad.net
> > Subject: Re: [Openstack] eventlet weirdness
> >
> > On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
> > > I'd be cautious to say that no ill side-effects were introduced. I found a
> > race condition right in the middle of sync_power_states, which I assume was
> > exposed by "breaking" the task deliberately.
> >
> > Such a party-pooper! ;)
> >
> > Got a link to the bug report for me?
> >
> > Thanks!
> > -jay
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Eric Windisch
The problem is that unless you sleep(0), eventlet only switches context when 
you hit a file descriptor.  

As long as python coroutines are used, we should put sleep(0) where-ever it is 
expected that there will be a long-running loop where file descriptors are not 
touched. As noted elsewhere in this thread, MySQL file descriptors don't count, 
they're not coroutine friendly.

The premise is that cpus are pretty fast and get quickly from one call of a 
file descriptor to another, that the blocking of these descriptors is what a 
CPU most waits on, and this is an easy and obvious place to switch coroutines 
via monkey-patching.

That said, it shouldn't be necessary to "sprinkle" sleep(0) calls. They should 
be strategically placed, as necessary.
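The switching behaviour described here can be sketched with plain generators (a stdlib stand-in for greenthreads, where `yield` plays the role of sleep(0) and a round-robin loop plays the role of eventlet's hub):

```python
from collections import deque

def worker(name, steps, log, cooperative):
    # A generator stands in for a greenthread.
    for i in range(steps):
        log.append(f"{name}{i}")
        if cooperative:
            yield  # switch point: hand control back to the scheduler

def run(tasks):
    # Trivial round-robin scheduler, playing the role of eventlet's hub.
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)          # run until the next yield point
            queue.append(task)  # not finished: back of the queue
        except StopIteration:
            pass                # task ran to completion

log, log2 = [], []
run([worker("a", 3, log, True), worker("b", 3, log, True)])
run([worker("a", 3, log2, False), worker("b", 3, log2, False)])
print(log)   # ['a0', 'b0', 'a1', 'b1', 'a2', 'b2'] -- interleaved
print(log2)  # ['a0', 'a1', 'a2', 'b0', 'b1', 'b2'] -- no yield point,
             # each task hogs the scheduler until it is done
```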

"race-conditions" around coroutine switching sounds more like thread-safety 
issues...  

--  
Eric Windisch


On Friday, March 2, 2012 at 1:35 PM, Joshua Harlow wrote:

> Re: [Openstack] eventlet weirdness Does anyone else feel that the following 
> seems really “dirty”, or is it just me.
>  
> “adding a few sleep(0) calls in various places in the
> Nova codebase (as was recently added in the _sync_power_states()
> periodic task) is an easy and simple win with pretty much no ill
> side-effects. :)”
>  
> Dirty in that it feels like there is something wrong from a design point of 
> view.
> Sprinkling “sleep(0)” seems like it's a band-aid on a larger problem imho.  
> But that’s just my gut feeling.
>  
> :-(
>  
> On 3/2/12 8:26 AM, "Armando Migliaccio"  
> wrote:
>  
> > I knew you'd say that :P
> >  
> > There you go: https://bugs.launchpad.net/nova/+bug/944145
> >  
> > Cheers,
> > Armando
> >  
> > > -Original Message-
> > > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > > Sent: 02 March 2012 16:22
> > > To: Armando Migliaccio
> > > Cc: openstack@lists.launchpad.net
> > > Subject: Re: [Openstack] eventlet weirdness
> > >  
> > > On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
> > > > I'd be cautious to say that no ill side-effects were introduced. I 
> > > > found a
> > >  
> > > race condition right in the middle of sync_power_states, which I assume 
> > > was
> > > exposed by "breaking" the task deliberately.
> > >  
> > > Such a party-pooper! ;)
> > >  
> > > Got a link to the bug report for me?
> > >  
> > > Thanks!
> > > -jay
> >  
> >  
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help : https://help.launchpad.net/ListHelp
>  
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Day, Phil
I didn't say it was pretty - Given the choice I'd much rather have a threading 
model that really did concurrency and pre-emption in all the right places, and it 
would be really cool if something managed the threads that were started so that 
if a second conflicting request was received it did some proper tidy-up or 
blocking rather than just leaving the race condition to work itself out (then 
we wouldn't have to try and control it by checking vm_state).

However ...   In the current code base where we only have user space based 
eventlets, with no pre-emption, and some activities that need to be prioritised 
then forcing pre-emption with a sleep(0) seems a pretty small bit of untidy.   
And it works now without a major code refactor.

Always open to other approaches ...

Phil
 

-Original Message-
From: openstack-bounces+philip.day=hp@lists.launchpad.net 
[mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of 
Chris Behrens
Sent: 02 March 2012 19:00
To: Joshua Harlow
Cc: openstack; Chris Behrens
Subject: Re: [Openstack] eventlet weirdness

It's not just you


On Mar 2, 2012, at 10:35 AM, Joshua Harlow wrote:

> Does anyone else feel that the following seems really "dirty", or is it just 
> me.
> 
> "adding a few sleep(0) calls in various places in the Nova codebase 
> (as was recently added in the _sync_power_states() periodic task) is 
> an easy and simple win with pretty much no ill side-effects. :)"
> 
> Dirty in that it feels like there is something wrong from a design point of 
> view.
> Sprinkling "sleep(0)" seems like it's a band-aid on a larger problem imho. 
> But that's just my gut feeling.
> 
> :-(
> 
> On 3/2/12 8:26 AM, "Armando Migliaccio"  
> wrote:
> 
> I knew you'd say that :P
> 
> There you go: https://bugs.launchpad.net/nova/+bug/944145
> 
> Cheers,
> Armando
> 
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: 02 March 2012 16:22
> > To: Armando Migliaccio
> > Cc: openstack@lists.launchpad.net
> > Subject: Re: [Openstack] eventlet weirdness
> >
> > On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
> > > I'd be cautious to say that no ill side-effects were introduced. I 
> > > found a
> > race condition right in the middle of sync_power_states, which I 
> > assume was exposed by "breaking" the task deliberately.
> >
> > Such a party-pooper! ;)
> >
> > Got a link to the bug report for me?
> >
> > Thanks!
> > -jay
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Armando Migliaccio


> -Original Message-
> From: Eric Windisch [mailto:e...@cloudscaling.com]
> Sent: 02 March 2012 19:04
> To: Joshua Harlow
> Cc: Armando Migliaccio; Jay Pipes; openstack
> Subject: Re: [Openstack] eventlet weirdness
> 
> The problem is that unless you sleep(0), eventlet only switches context when
> you hit a file descriptor.
> 
> As long as python coroutines are used, we should put sleep(0) where-ever it is
> expected that there will be a long-running loop where file descriptors are not
> touched. As noted elsewhere in this thread, MySQL file descriptors don't
> count, they're not coroutine friendly.
> 
> The premise is that cpus are pretty fast and get quickly from one call of a
> file descriptor to another, that the blocking of these descriptors is what a
> CPU most waits on, and this is an easy and obvious place to switch coroutines
> via monkey-patching.
> 
> That said, it shouldn't be necessary to "sprinkle" sleep(0) calls. They should
> be strategically placed, as necessary.

I agree, but then the whole assumption of adopting eventlet to simplify the 
programming model is hindered by the fact that one has to think harder about what 
one is doing... Nova could've kept Twisted for that matter. The programming model 
would have been harder, but at least it would have been cleaner and free from 
icky patching (that's my own opinion anyway).

> 
> "race-conditions" around coroutine switching sounds more like thread-safety
> issues...
> 

Yes. There is a fine balance to be struck here: do you let potential races 
appear in your system and deal with them on a case-by-case basis, or do you 
introduce mutexes and deal with potential inefficiency and/or deadlocks? I'd 
rather go with the former here.

> --
> Eric Windisch
> 
> 
> On Friday, March 2, 2012 at 1:35 PM, Joshua Harlow wrote:
> 
> > Re: [Openstack] eventlet weirdness Does anyone else feel that the following
> seems really “dirty”, or is it just me.
> >
> > “adding a few sleep(0) calls in various places in the Nova codebase
> > (as was recently added in the _sync_power_states() periodic task) is
> > an easy and simple win with pretty much no ill side-effects. :)”
> >
> > Dirty in that it feels like there is something wrong from a design point of
> view.
> > Sprinkling “sleep(0)” seems like it's a band-aid on a larger problem imho.
> > But that’s just my gut feeling.
> >
> > :-(
> >
> > On 3/2/12 8:26 AM, "Armando Migliaccio" 
> wrote:
> >
> > > I knew you'd say that :P
> > >
> > > There you go: https://bugs.launchpad.net/nova/+bug/944145
> > >
> > > Cheers,
> > > Armando
> > >
> > > > -Original Message-
> > > > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > > > Sent: 02 March 2012 16:22
> > > > To: Armando Migliaccio
> > > > Cc: openstack@lists.launchpad.net
> > > > Subject: Re: [Openstack] eventlet weirdness
> > > >
> > > > On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
> > > > > I'd be cautious to say that no ill side-effects were introduced.
> > > > > I found a
> > > >
> > > > race condition right in the middle of sync_power_states, which I
> > > > assume was exposed by "breaking" the task deliberately.
> > > >
> > > > Such a party-pooper! ;)
> > > >
> > > > Got a link to the bug report for me?
> > > >
> > > > Thanks!
> > > > -jay
> > >
> > >
> > > ___
> > > Mailing list: https://launchpad.net/~openstack Post to :
> > > openstack@lists.launchpad.net Unsubscribe :
> > > https://launchpad.net/~openstack More help :
> > > https://help.launchpad.net/ListHelp
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack Post to :
> > openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
> > Unsubscribe : https://launchpad.net/~openstack More help :
> > https://help.launchpad.net/ListHelp
> 
> 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Joshua Harlow
So a thought I had was: what if the design of a component supports, as part of 
its design, the ability to be run with threads or with eventlet or with 
processes?

Say you break everything up into tasks (where a task would produce some 
output/result/side-effect).
A set of tasks could complete some action (ie, create a vm).
Subtasks could be the following:
0. Validate credentials
1. Get the image
2. Call into libvirt
3. ...

These "tasks", if constructed in a way that makes them stateless, and then 
could be chained together to form an action, then that action could be given 
say to a threaded "engine" that would know how to execute those tasks with 
threads, or it could be given to an eventlet "engine" that would do the same 
with evenlet pool/greenthreads/coroutings, or with processes (and so on). This 
could be one way the design of your code abstracts that kind of execution 
(where eventlet is abstracted away from the actual work being done, instead of 
popping up in calls to sleep(0), ie the leaky abstraction).
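The task/engine split sketched above could look like this (all names are illustrative, not from any OpenStack code): stateless tasks chained into an action, run by interchangeable engines.

```python
from concurrent.futures import ThreadPoolExecutor

# Stateless tasks: each takes a context dict and returns it.
def validate(ctx):
    ctx["validated"] = True
    return ctx

def fetch_image(ctx):
    ctx["image"] = f"image-{ctx['image_id']}"
    return ctx

def boot(ctx):
    ctx["booted"] = True
    return ctx

CREATE_VM = [validate, fetch_image, boot]  # an "action" is a task chain

def serial_engine(tasks, ctx):
    for task in tasks:
        ctx = task(ctx)
    return ctx

def threaded_engine(tasks, ctx):
    # Same chain, but each step runs on a worker thread; the engine, not
    # the tasks, owns the concurrency model, so swapping in an eventlet
    # or process-based engine would not touch the tasks themselves.
    with ThreadPoolExecutor(max_workers=1) as pool:
        for task in tasks:
            ctx = pool.submit(task, ctx).result()
    return ctx

print(serial_engine(CREATE_VM, {"image_id": 42}))
print(threaded_engine(CREATE_VM, {"image_id": 42}))
```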

On 3/2/12 11:08 AM, "Day, Phil"  wrote:

I didn't say it was pretty - Given the choice I'd much rather have a threading 
model that really did concurrency and pre-emption in all the right places, and it 
would be really cool if something managed the threads that were started so that 
if a second conflicting request was received it did some proper tidy-up or 
blocking rather than just leaving the race condition to work itself out (then 
we wouldn't have to try and control it by checking vm_state).

However ...   In the current code base where we only have user space based 
eventlets, with no pre-emption, and some activities that need to be prioritised 
then forcing pre-emption with a sleep(0) seems a pretty small bit of untidy.   
And it works now without a major code refactor.

Always open to other approaches ...

Phil


-Original Message-
From: openstack-bounces+philip.day=hp@lists.launchpad.net 
[mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of 
Chris Behrens
Sent: 02 March 2012 19:00
To: Joshua Harlow
Cc: openstack; Chris Behrens
Subject: Re: [Openstack] eventlet weirdness

It's not just you


On Mar 2, 2012, at 10:35 AM, Joshua Harlow wrote:

> Does anyone else feel that the following seems really "dirty", or is it just 
> me.
>
> "adding a few sleep(0) calls in various places in the Nova codebase
> (as was recently added in the _sync_power_states() periodic task) is
> an easy and simple win with pretty much no ill side-effects. :)"
>
> Dirty in that it feels like there is something wrong from a design point of 
> view.
> Sprinkling "sleep(0)" seems like it's a band-aid on a larger problem imho.
> But that's just my gut feeling.
>
> :-(
>
> On 3/2/12 8:26 AM, "Armando Migliaccio"  
> wrote:
>
> I knew you'd say that :P
>
> There you go: https://bugs.launchpad.net/nova/+bug/944145
>
> Cheers,
> Armando
>
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: 02 March 2012 16:22
> > To: Armando Migliaccio
> > Cc: openstack@lists.launchpad.net
> > Subject: Re: [Openstack] eventlet weirdness
> >
> > On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
> > > I'd be cautious to say that no ill side-effects were introduced. I
> > > found a
> > race condition right in the middle of sync_power_states, which I
> > assume was exposed by "breaking" the task deliberately.
> >
> > Such a party-pooper! ;)
> >
> > Got a link to the bug report for me?
> >
> > Thanks!
> > -jay
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Andy Smith
On Fri, Mar 2, 2012 at 10:35 AM, Joshua Harlow wrote:

>  Does anyone else feel that the following seems really “dirty”, or is it
> just me.
>

Any feeling of dirtiness is just due to it being called "sleep," all you
are doing is yielding control to allow another co-routine to schedule
itself. Blocking code is still blocking code, you have to give it some
break points if you are going to run a loop that waits on something else.
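
The yielding Andy describes can be seen with nothing but plain generators -- a toy round-robin scheduler in which each `yield` plays the role of eventlet's sleep(0) (illustrative only, not eventlet itself):

```python
from collections import deque

def scheduler(tasks):
    """Round-robin over generators: each `yield` is the analogue of
    eventlet's sleep(0) -- a voluntary break point in blocking code."""
    order = []
    queue = deque(tasks)
    while queue:
        name, gen = queue.popleft()
        try:
            next(gen)              # run the task until its next yield
            order.append(name)
            queue.append((name, gen))
        except StopIteration:
            pass                   # task finished; drop it
    return order

def cpu_bound(chunks):
    # a long loop that cooperates by yielding between chunks of work
    for _ in range(chunks):
        sum(range(1000))           # stand-in for real CPU work
        yield                      # break point: let other tasks run

run = scheduler([("A", cpu_bound(3)), ("B", cpu_bound(3))])
print(run)  # ['A', 'B', 'A', 'B', 'A', 'B'] -- interleaved, not A,A,A,B,B,B
```

Without the `yield` inside the loop, task A would run all its chunks before B ever started -- which is exactly the starvation the sleep(0) calls are working around.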



>
> “adding a few sleep(0) calls in various places in the
>
> Nova codebase (as was recently added in the _sync_power_states()
> periodic task) is an easy and simple win with pretty much no ill
> side-effects. :)”
>
> Dirty in that it feels like there is something wrong from a design point
> of view.
> Sprinkling “sleep(0)” seems like its a band-aid on a larger problem imho.
> But that’s just my gut feeling.
>
> *:-(
> *
>
> On 3/2/12 8:26 AM, "Armando Migliaccio" 
> wrote:
>
> I knew you'd say that :P
>
> There you go: https://bugs.launchpad.net/nova/+bug/944145
>
> Cheers,
> Armando
>
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com ]
> > Sent: 02 March 2012 16:22
> > To: Armando Migliaccio
> > Cc: openstack@lists.launchpad.net
> > Subject: Re: [Openstack] eventlet weirdness
> >
> > On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
> > > I'd be cautious to say that no ill side-effects were introduced. I
> found a
> > race condition right in the middle of sync_power_states, which I assume
> was
> > exposed by "breaking" the task deliberately.
> >
> > Such a party-pooper! ;)
> >
> > Got a link to the bug report for me?
> >
> > Thanks!
> > -jay
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Vishvananda Ishaya

On Mar 2, 2012, at 7:54 AM, Day, Phil wrote:

>> By "properly multi-threaded" are you instead referring to making the 
>> nova-api server multi-*processed* with eventlet greenthread pools in each 
>> process? i.e. The way Swift (and now Glance) works? Or are you referring to 
>> a different approach entirely?
> 
> Yep - following your posting in here pointing to the glance changes we 
> back-ported that into the Diablo API server.   We're now running each API 
> server with 20 OS processes and 20 EC2 processes, and the world looks a lot 
> happier.  The same changes were being done in parallel into Essex by someone 
> in the community I thought ?

Can you or jay write up what this would entail in nova?  (or even ship a diff) 
Are you using multiprocessing? In general we have had issues combining 
multiprocessing and eventlet, so in our deploys we run multiple api servers on 
different ports and load balance with ha proxy. It sounds like what you have is 
working though, so it would be nice to put it in (perhaps with a flag gate) if 
possible.
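
The deploy Vish describes -- several API server processes on different ports behind HAProxy -- might look roughly like this (illustrative config only; ports and server names are assumptions, not a real deployment):

```haproxy
# illustrative only -- ports and backend names are assumptions
frontend nova_osapi
    bind *:8774
    mode http
    default_backend nova_osapi_servers

backend nova_osapi_servers
    mode http
    balance roundrobin
    server api1 127.0.0.1:8780 check
    server api2 127.0.0.1:8781 check
    server api3 127.0.0.1:8782 check
```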
> 
>> Curious... do you have a list of all the places where sleep(0) calls were 
>> inserted in the HP Nova code? I can turn that into a bug report and get to 
>> work on adding them... 
> 
> So far the only two cases we've done this are in the _sync_power_state and  
> in the security group refresh handling 
> (libvirt/firewall/do_refresh_security_group_rules) - which we modified to 
> only refresh for instances in the group and added a sleep in the loop (I need 
> to finish writing the bug report for this one).

Please do this ASAP, I would like to get that fix in.

Vish


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Day, Phil
That sounds a bit over complicated to me - Having a string of tasks sounds like 
you still have to think about what the concurrency is within each step.

There is already a good abstraction around the context of each operation - they 
just (I know - big just) need to be running in something that maps to kernel 
threads rather than user space ones.

All I really want is to allow more than one action to run at the same time.  So 
if I have two requests to create a snapshot, why can't they both run at the same 
time and still allow other things to happen? I have all these cores sitting in 
my compute node that could be used, but I'm still having to think like a 
punch-card programmer submitting batch jobs to the mainframe ;-)

Right now creating snapshots is pretty close to a DoS attack on a compute node.


From: Joshua Harlow [mailto:harlo...@yahoo-inc.com]
Sent: 02 March 2012 19:23
To: Day, Phil; Chris Behrens
Cc: openstack
Subject: Re: [Openstack] eventlet weirdness

So a thought I had was: what if the design of a component included, as part of 
its design, the ability to be run with threads or with eventlet or with 
processes.

Say you break everything up into tasks (where a task would produce some 
output/result/side-effect).
A set of tasks could complete some action (ie, create a vm).
Subtasks could be the following:
0. Validate credentials
1. Get the image
2. Call into libvirt
3. ...

These "tasks", if constructed in a way that makes them stateless, could then 
be chained together to form an action; that action could be given, say, to a 
threaded "engine" that would know how to execute those tasks with threads, or 
to an eventlet "engine" that would do the same with an eventlet 
pool/greenthreads/coroutines, or with processes (and so on). This could be one 
way the design of your code abstracts that kind of execution (where eventlet 
is abstracted away from the actual work being done, instead of popping up in 
calls to sleep(0), i.e. the leaky abstraction).
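
One hedged sketch of this task/engine split, with hypothetical task names (not Nova code): the tasks know nothing about concurrency, and only the engine decides how actions run.

```python
from concurrent.futures import ThreadPoolExecutor

# All names here are hypothetical -- a minimal sketch of the idea, not Nova code.
def validate(ctx):
    ctx["validated"] = True
    return ctx

def fetch_image(ctx):
    ctx["image"] = "img-123"
    return ctx

def boot(ctx):
    ctx["booted"] = ctx["validated"] and bool(ctx["image"])
    return ctx

CREATE_VM = [validate, fetch_image, boot]   # an "action" = an ordered task chain

class SerialEngine:
    """Runs a task chain inline; tasks stay ordered within one action."""
    def run(self, tasks, ctx):
        for task in tasks:
            ctx = task(ctx)
        return ctx

class ThreadedEngine:
    """Same contract, but each action runs on a worker thread, so many
    actions proceed concurrently. An eventlet or multiprocessing engine
    would differ only here -- the tasks themselves stay unchanged."""
    def __init__(self, workers=4):
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def submit(self, tasks, ctx):
        return self.pool.submit(SerialEngine().run, tasks, ctx)

engine = ThreadedEngine()
futures = [engine.submit(CREATE_VM, {}) for _ in range(3)]
results = [f.result() for f in futures]
print(all(r["booted"] for r in results))  # True
```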

On 3/2/12 11:08 AM, "Day, Phil"  wrote:
I didn't say it was pretty - Given the choice I'd much rather have a threading 
model that really did concurrency and pre-emption all the right places, and it 
would be really cool if something managed the threads that were started so that 
if a second conflicting request was received it did some proper tidy up or 
blocking rather than just leaving the race condition to work itself out (then 
we wouldn't have to try and control it by checking vm_state).

However ...   In the current code base where we only have user space based 
eventlets, with no pre-emption, and some activities that need to be prioritised 
then forcing pre-emption with a sleep(0) seems a pretty small bit of untidy.   
And it works now without a major code refactor.

Always open to other approaches ...

Phil


-Original Message-
From: openstack-bounces+philip.day=hp@lists.launchpad.net 
[mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of 
Chris Behrens
Sent: 02 March 2012 19:00
To: Joshua Harlow
Cc: openstack; Chris Behrens
Subject: Re: [Openstack] eventlet weirdness

It's not just you


On Mar 2, 2012, at 10:35 AM, Joshua Harlow wrote:

> Does anyone else feel that the following seems really "dirty", or is it just 
> me.
>
> "adding a few sleep(0) calls in various places in the Nova codebase
> (as was recently added in the _sync_power_states() periodic task) is
> an easy and simple win with pretty much no ill side-effects. :)"
>
> Dirty in that it feels like there is something wrong from a design point of 
> view.
> Sprinkling "sleep(0)" seems like its a band-aid on a larger problem imho.
> But that's just my gut feeling.
>
> :-(
>
> On 3/2/12 8:26 AM, "Armando Migliaccio"  
> wrote:
>
> I knew you'd say that :P
>
> There you go: https://bugs.launchpad.net/nova/+bug/944145
>
> Cheers,
> Armando
>
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: 02 March 2012 16:22
> > To: Armando Migliaccio
> > Cc: openstack@lists.launchpad.net
> > Subject: Re: [Openstack] eventlet weirdness
> >
> > On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
> > > I'd be cautious to say that no ill side-effects were introduced. I
> > > found a
> > race condition right in the middle of sync_power_states, which I
> > assume was exposed by "breaking" the task deliberately.
> >
> > Such a party-pooper! ;)
> >
> > Got a link to the bug report for me?
> >
> > Thanks!
> > -jay
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.lau

Re: [Openstack] eventlet weirdness

2012-03-02 Thread Johannes Erdfelt
On Fri, Mar 02, 2012, Armando Migliaccio  
wrote:
> I agree, but then the whole assumption of adopting eventlet to simplify
> the programming model is hindered by the fact that one has to think
> harder about what it is doing...Nova could've kept Twisted for that matter.
> The programming model would have been harder, but at least it would
> have been cleaner and free from icky patching (that's my own opinion
> anyway).

Twisted has a much harder programming model with the same blocking
problem that eventlet has.

> Yes. There is a fine balance to be struck here: do you let potential
> races appear in your system and deal with them on a case-by-case base,
> or do you introduce mutexes and deal with potential inefficiency
> and/or deadlocks? I'd rather go with the former here.

Neither of these options are acceptable IMO.

If we want to minimize the number of bugs, we should make the task as
easy as possible on the programmer. Constantly trying to track
multiple threads of execution and what possible races that can happen
and what locking is required will end up with more bugs in the long run.

I'd priortize correct over performant. It's easier to optimize when
you're sure the code is correct than the other way around.

I'd like to see a move towards more serialization of actions. For
instance, if all operations on an instance are serialized, then there
are no opportunities to race against other operations on the same
instance.
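
A minimal sketch of that per-instance serialization, using a plain threading lock keyed by a hypothetical instance UUID (not Nova code): operations on one instance never overlap, while other instances proceed concurrently.

```python
import threading
from collections import defaultdict

# Hypothetical sketch -- one lock per instance UUID, so operations on the
# same instance serialize while different instances still run concurrently.
_instance_locks = defaultdict(threading.Lock)

def serialized(instance_uuid):
    lock = _instance_locks[instance_uuid]   # created once, at decoration time
    def wrap(fn):
        def inner(*args, **kwargs):
            with lock:
                return fn(*args, **kwargs)
        return inner
    return wrap

log = []

@serialized("uuid-1")
def snapshot():
    # without the lock, concurrent calls could interleave between these
    # two appends; with it, start/end always pair up
    log.append("start")
    log.append("end")

threads = [threading.Thread(target=snapshot) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)  # ['start', 'end', 'start', 'end', 'start', 'end']
```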

We can loosen the restrictions when we've identified bottlenecks and
we're sure it's safe to do so.

I'm sure we'll find out that performance is still very good.

JE


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Day, Phil
Ok - I'll work with Jay on that.



-Original Message-
From: Vishvananda Ishaya [mailto:vishvana...@gmail.com] 
Sent: 02 March 2012 19:27
To: Day, Phil
Cc: Jay Pipes; openstack@lists.launchpad.net
Subject: Re: [Openstack] eventlet weirdness


On Mar 2, 2012, at 7:54 AM, Day, Phil wrote:

>> By "properly multi-threaded" are you instead referring to making the 
>> nova-api server multi-*processed* with eventlet greenthread pools in each 
>> process? i.e. The way Swift (and now Glance) works? Or are you referring to 
>> a different approach entirely?
> 
> Yep - following your posting in here pointing to the glance changes we 
> back-ported that into the Diablo API server.   We're now running each API 
> server with 20 OS processes and 20 EC2 processes, and the world looks a lot 
> happier.  The same changes were being done in parallel into Essex by someone 
> in the community I thought ?

Can you or jay write up what this would entail in nova?  (or even ship a diff) 
Are you using multiprocessing? In general we have had issues combining 
multiprocessing and eventlet, so in our deploys we run multiple api servers on 
different ports and load balance with ha proxy. It sounds like what you have is 
working though, so it would be nice to put it in (perhaps with a flag gate) if 
possible.
> 
>> Curious... do you have a list of all the places where sleep(0) calls were 
>> inserted in the HP Nova code? I can turn that into a bug report and get to 
>> work on adding them... 
> 
> So far the only two cases we've done this are in the _sync_power_state and  
> in the security group refresh handling 
> (libvirt/firewall/do_refresh_security_group_rules) - which we modified to 
> only refresh for instances in the group and added a sleep in the loop (I need 
> to finish writing the bug report for this one).

Please do this ASAP, I would like to get that fix in.

Vish


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Eric Windisch
> 
> I agree, but then the whole assumption of adopting eventlet to simplify the 
> programming model is hindered by the fact that one has to think harder about 
> what it is doing...Nova could've kept Twisted for that matter. The programming 
> model would have been harder, but at least it would have been cleaner and 
> free from icky patching (that's my own opinion anyway).

Then the assumption is wrong. You need to write with the premise of working 
with Eventlet. For me, eventlet has complicated the programming model by 
forcing me to a specific pattern, although I must admit this has largely been 
due to my use of a C library (libzmq).

-- 
Eric Windisch 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Duncan McGreggor
On Fri, Mar 2, 2012 at 2:40 PM, Johannes Erdfelt  wrote:
> On Fri, Mar 02, 2012, Armando Migliaccio  
> wrote:
>> I agree, but then the whole assumption of adopting eventlet to simplify
>> the programming model is hindered by the fact that one has to think
>> harder about what it is doing...Nova could've kept Twisted for that matter.
>> The programming model would have been harder, but at least it would
>> have been cleaner and free from icky patching (that's my own opinion
>> anyway).
>
> Twisted has a much harder programming model with the same blocking
> problem that eventlet has.

Like so many things that are aesthetic in nature, the statement above
is misleading. Using a callback, event-based, deferred/promise
oriented system is hard for *some*. It is far, far easier for others
(myself included).

It's a matter of perception and personal preference.

It may be apropos to mention that Guido van Rossum himself has stated
that he shares the same view of concurrent programming in Python as
Glyph (the founder of Twisted):
  https://plus.google.com/115212051037621986145/posts/a9SqS7faVWC

Glyph's post, if you can't see that G+ link:
  
http://glyph.twistedmatrix.com/2012/01/concurrency-spectrum-from-callbacks-to.html

One thing to keep in mind is that with Twisted, you always have the
option of deferring to a thread for operations that are not async-friendly.
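
For readers without Twisted installed, the same defer-to-a-thread pattern can be sketched with the stdlib's asyncio `run_in_executor` (an analogue of `deferToThread`, not Twisted's API; all names below are illustrative): the blocking call runs on a worker thread while the event loop keeps servicing other coroutines.

```python
import asyncio
import time

def blocking_io():
    time.sleep(0.2)          # stands in for a blocking C call or disk read
    return "done"

async def ticker(hits):
    # keeps running while the blocking call is off on a worker thread,
    # demonstrating that the event loop is not blocked
    for _ in range(5):
        hits.append(time.monotonic())
        await asyncio.sleep(0.02)

async def main():
    loop = asyncio.get_running_loop()
    hits = []
    # offload the blocking call to a thread (the deferToThread analogue)
    result, _ = await asyncio.gather(
        loop.run_in_executor(None, blocking_io),
        ticker(hits),
    )
    return result, len(hits)

result, ticks = asyncio.run(main())
print(result, ticks)  # done 5
```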

d

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Essex-3 : Nova api calls with keystone doubt

2012-03-02 Thread Alejandro Comisario

Hi openstack list.

Sorry to ask this, but i have a strong doubt on how the "endpoint" 
config in keystone actually works when you make a nova api call (we are 
using Essex-3)


First, let me setup a use case :

user1 -> tenant1 -> zone1 (private nova endpoint)
user2 -> tenant2 -> zone2 (private nova endpoint)

So, we know that python-novaclient actually checks for a "nova" endpoint to 
exist in order to make a request, but what about a nova api call made directly 
(with curl, for example)?
We realized that using the tenant1 token to query or create instances on 
zone2 is possible, and with tenant2's token it is possible to query or 
create instances on zone1.
And still, the tenant1 token can query and create instances over the tenant2 
id on the resource "v1.1/TENANT_ID/server"


So, is there a way to configure keystone / nova to actually do what 
python-novaclient does regarding the sanity check of whether there is a 
"nova" endpoint associated with the tenant when curling the nova-api port?
Second, how can we prevent a token from tenant1 from accessing resources of 
tenant2?


Best regards.
alejandro.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Yun Mao
Hi Phil, I'm a little confused. To what extent does sleep(0) help?

It only gives the greenlet scheduler a chance to switch to another
green thread. If we are having a CPU bound issue, sleep(0) won't give
us access to any more CPU cores. So the total time to finish should be
the same no matter what. It may improve the fairness among different
green threads but shouldn't help the throughput. I think the only
apparent gain to me is situation such that there is 1 green thread
with long CPU time and many other green threads with small CPU time.
The total finish time will be the same with or without sleep(0), but
with sleep in the first threads, the others should be much more
responsive.

However, it's unclear to me which part of Nova is very CPU intensive.
It seems that most work here is IO bound, including the snapshot. Do
we have other blocking calls besides mysql access? I feel like I'm
missing something but couldn't figure out what.

Thanks,

Yun


On Fri, Mar 2, 2012 at 2:08 PM, Day, Phil  wrote:
> I didn't say it was pretty - Given the choice I'd much rather have a 
> threading model that really did concurrency and pre-emption all the right 
> places, and it would be really cool if something managed the threads that 
> were started so that if a second conflicting request was received it did some 
> proper tidy up or blocking rather than just leaving the race condition to 
> work itself out (then we wouldn't have to try and control it by checking 
> vm_state).
>
> However ...   In the current code base where we only have user space based 
> eventlets, with no pre-emption, and some activities that need to be 
> prioritised then forcing pre-emption with a sleep(0) seems a pretty small bit 
> of untidy.   And it works now without a major code refactor.
>
> Always open to other approaches ...
>
> Phil
>
>
> -Original Message-
> From: openstack-bounces+philip.day=hp@lists.launchpad.net 
> [mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of 
> Chris Behrens
> Sent: 02 March 2012 19:00
> To: Joshua Harlow
> Cc: openstack; Chris Behrens
> Subject: Re: [Openstack] eventlet weirdness
>
> It's not just you
>
>
> On Mar 2, 2012, at 10:35 AM, Joshua Harlow wrote:
>
>> Does anyone else feel that the following seems really "dirty", or is it just 
>> me.
>>
>> "adding a few sleep(0) calls in various places in the Nova codebase
>> (as was recently added in the _sync_power_states() periodic task) is
>> an easy and simple win with pretty much no ill side-effects. :)"
>>
>> Dirty in that it feels like there is something wrong from a design point of 
>> view.
>> Sprinkling "sleep(0)" seems like its a band-aid on a larger problem imho.
>> But that's just my gut feeling.
>>
>> :-(
>>
>> On 3/2/12 8:26 AM, "Armando Migliaccio"  
>> wrote:
>>
>> I knew you'd say that :P
>>
>> There you go: https://bugs.launchpad.net/nova/+bug/944145
>>
>> Cheers,
>> Armando
>>
>> > -Original Message-
>> > From: Jay Pipes [mailto:jaypi...@gmail.com]
>> > Sent: 02 March 2012 16:22
>> > To: Armando Migliaccio
>> > Cc: openstack@lists.launchpad.net
>> > Subject: Re: [Openstack] eventlet weirdness
>> >
>> > On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
>> > > I'd be cautious to say that no ill side-effects were introduced. I
>> > > found a
>> > race condition right in the middle of sync_power_states, which I
>> > assume was exposed by "breaking" the task deliberately.
>> >
>> > Such a party-pooper! ;)
>> >
>> > Got a link to the bug report for me?
>> >
>> > Thanks!
>> > -jay
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to     : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to     : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Caitlin Bestler
Duncan McGregor wrote:


>Like so many things that are aesthetic in nature, the statement above is 
>misleading. Using a callback, event-based, deferred/promise oriented system is 
>hard for *some*. It is far, far easier for others (myself included).

>It's a matter of perception and personal preference.

I would also agree that coding your application as a series of responses to 
events can produce code that is easier to understand and debug.
And that would be a wonderful discussion if we were starting a new project.

But I hope that nobody is suggesting that we rewrite all of OpenStack code away 
from eventlet pseudo-threading after the fact.
Personally I think it was the wrong decision, but that ship has already sailed.

With event-response coding it is obvious that you have to partition any one 
response into segments that do not take so long to execute that they are 
blocking other events. That remains true when you hide your event-driven model 
with eventlet pseudo-threading.
Inserting sleep(0) calls is the most obvious way to break up an overly event 
handler, given that you've already decided to obfuscate the code to pretend 
that it is a thread.



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Duncan McGreggor
On Fri, Mar 2, 2012 at 3:38 PM, Caitlin Bestler
 wrote:
> Duncan McGregor wrote:
>
>>Like so many things that are aesthetic in nature, the statement above is 
>>misleading. Using a callback, event-based, deferred/promise oriented system 
>>is hard for *some*. It is far, far easier for others (myself included).
>
>>It's a matter of perception and personal preference.
>
> I would also agree that coding your application as a series of responses to 
> events can produce code that is easier to understand and debug.
> And that would be a wonderful discussion if we were starting a new project.
>
> But I hope that nobody is suggesting that we rewrite all of OpenStack code 
> away from eventlet pseudo-threading after the fact.
> Personally I think it was the wrong decision, but that ship has already 
> sailed.

Agreed.

d

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Johannes Erdfelt
On Fri, Mar 02, 2012, Duncan McGreggor  wrote:
> On Fri, Mar 2, 2012 at 2:40 PM, Johannes Erdfelt  wrote:
> > Twisted has a much harder programming model with the same blocking
> > problem that eventlet has.
> 
> Like so many things that are aesthetic in nature, the statement above
> is misleading. Using a callback, event-based, deferred/promise
> oriented system is hard for *some*. It is far, far easier for others
> (myself included).
> 
> It's a matter of perception and personal preference.
>
> It may be apropos to mention that Guido van Rossum himself has stated
> that he shares the same view of concurrent programming in Python as
> Glyph (the founder of Twisted):
>   https://plus.google.com/115212051037621986145/posts/a9SqS7faVWC
> 
> Glyph's post, if you can't see that G+ link:
>   
> http://glyph.twistedmatrix.com/2012/01/concurrency-spectrum-from-callbacks-to.html
> 
> One thing to keep in mind is that with Twisted, you always have the
> option of deferring to a thread for operations that are not async-friendly.

It's a shame that post chooses to ignore eventlet-style concurrency. It
has all of the benefits of being almost as clear where concurrency can
occur without needing a macro key to constantly output 'yield'.

It also integrates with other python libraries better (but obviously not
perfectly).

Using coroutines for concurrency is anti-social programming. It excludes
a whole suite of libraries merely because they didn't conform to your
programming model.

However, this is the wrong discussion to be having. Concurrency isn't
the problem we should be worried about, it's isolation. If we can
sufficiently isolate the work that each daemon needs to do, then
concurrency is trivial. In the best case, they can be separate processes
and we don't need to worry about a programming model. If we're not being
too optimistic then threads with minimal locking is most likely.

JE


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Jay Pipes

On 03/02/2012 02:27 PM, Vishvananda Ishaya wrote:


On Mar 2, 2012, at 7:54 AM, Day, Phil wrote:


By "properly multi-threaded" are you instead referring to making the nova-api 
server multi-*processed* with eventlet greenthread pools in each process? i.e. The way 
Swift (and now Glance) works? Or are you referring to a different approach entirely?


Yep - following your posting in here pointing to the glance changes we 
back-ported that into the Diablo API server.   We're now running each API 
server with 20 OS processes and 20 EC2 processes, and the world looks a lot 
happier.  The same changes were being done in parallel into Essex by someone in 
the community I thought ?


Can you or jay write up what this would entail in nova?  (or even ship a diff) 
Are you using multiprocessing? In general we have had issues combining 
multiprocessing and eventlet, so in our deploys we run multiple api servers on 
different ports and load balance with ha proxy. It sounds like what you have is 
working though, so it would be nice to put it in (perhaps with a flag gate) if 
possible.


We are not using multiprocessing, no.

We simply start multiple worker processes listening on the same socket, 
with each worker process having an eventlet greenthread pool.


You can see the code (taken from Swift and adapted by Chris Behrens and 
Brian Waldon to use the object-oriented Server approach that 
Glance/Keystone/Nova uses) here:


https://github.com/openstack/glance/blob/master/glance/common/wsgi.py

There is a worker = XXX configuration option that controls the number of 
worker processes created on server startup. A worker value of 0 means the 
server runs identically to the way Nova currently runs (one process 
with an eventlet pool of greenthreads)
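
The shared-listener worker pattern described here can be sketched in a few lines of stdlib Python (Unix only; a toy illustration, not the Glance code): bind the socket once in the parent, fork, and let the kernel distribute connections among the children's accept() calls.

```python
import os
import socket
import time

# bind the listening socket BEFORE forking: children inherit the FD, and
# the kernel load-balances incoming connections among their accept()s
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 0))
port = listener.getsockname()[1]
listener.listen(16)

WORKERS = 2
children = []
for _ in range(WORKERS):
    pid = os.fork()
    if pid == 0:
        # child "worker": in Glance this is where the eventlet greenthread
        # pool would serve WSGI; here we answer one connection and exit
        conn, _ = listener.accept()
        conn.sendall(b"worker")
        conn.close()
        os._exit(0)
    children.append(pid)

time.sleep(0.1)                     # let the children reach accept()
replies = []
for _ in range(WORKERS):
    c = socket.create_connection(("127.0.0.1", port))
    replies.append(c.recv(64))
    c.close()
for pid in children:
    os.waitpid(pid, 0)
print(len(replies))  # 2
```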


Best,
-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Jay Pipes

On 03/02/2012 01:35 PM, Joshua Harlow wrote:

Does anyone else feel that the following seems really “dirty”, or is it
just me.

“adding a few sleep(0) calls in various places in the
Nova codebase (as was recently added in the _sync_power_states()
periodic task) is an easy and simple win with pretty much no ill
side-effects. :)”

Dirty in that it feels like there is something wrong from a design point
of view.
Sprinkling “sleep(0)” seems like its a band-aid on a larger problem imho.
But that’s just my gut feeling.


It's not really all that dirty, IMHO. You just have to think of 
greenlet.sleep(0) as manually yielding control back to eventlet...


Like Phil said, in the absence of a non-userspace threading model and 
thread scheduler, there's not a whole lot else one can do other than be 
mindful of what functions/methods may run for long periods of time 
and/or block I/O and call sleep(0) in those scenarios where it makes 
sense to yield a timeslice back to other processes.


While it's true that eventlet (and to an extent Twisted) mask some of 
the complexities involved in non-blocking I/O in a threaded(-like) 
application programming model, I don't think there will be an 
eventlet-that-knows-what-methods-should-yield-and-which-should-be-prioritized 
library any time soon.


-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Jay Pipes

On 03/02/2012 03:38 PM, Caitlin Bestler wrote:

Duncan McGregor wrote:

Like so many things that are aesthetic in nature, the statement above is 
misleading. Using a callback, event-based, deferred/promise oriented system is 
hard for *some*. It is far, far easier for others (myself included).



It's a matter of perception and personal preference.


I would also agree that coding your application as a series of responses to 
events can produce code that is easier to understand and debug.
And that would be a wonderful discussion if we were starting a new project.

But I hope that nobody is suggesting that we rewrite all of OpenStack code away 
from eventlet pseudo-threading after the fact.
Personally I think it was the wrong decision, but that ship has already sailed.


Yep, that ship has sailed more than 12 months ago.


With event-response coding it is obvious that you have to partition any one 
response into segments that do not take so long to execute that they are 
blocking other events. That remains true when you hide your event-driven model 
with eventlet pseudo-threading.
Inserting sleep(0) calls is the most obvious way to break up an overly event 
handler, given that you've already decided to obfuscate the code to pretend 
that it is a thread.


I assume you meant "an overly greedy event handler" above?

-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RHEL 5 & 6 OpenStack image archive...

2012-03-02 Thread David Busby
We currently have openstack available in EPEL testing repo, help in testing is 
appreciated.

Sent from my iPhone

On 2 Mar 2012, at 18:13, "Edgar Magana (eperdomo)"  wrote:

> Hi Marc,
>  
> I ended up creating my own RHEL 6.1 image. If you want I can share it with 
> you.
>  
> Thanks,
>  
> Edgar Magana
> CTO Cloud Computing
>  
> From: openstack-bounces+eperdomo=cisco@lists.launchpad.net 
> [mailto:openstack-bounces+eperdomo=cisco@lists.launchpad.net] On Behalf 
> Of J. Marc Edwards
> Sent: Friday, March 02, 2012 8:23 AM
> To: openstack@lists.launchpad.net
> Subject: [Openstack] RHEL 5 & 6 OpenStack image archive...
>  
> Can someone tell me where these base images are located for use on my 
> OpenStack deployment?
> Kind regards, Marc
> -- 
> 
> J. Marc Edwards
> Lead Architect - Semiconductor Design Portals
> Nimbis Services, Inc.
> Skype: (919) 747-3775
> Cell:  (919) 345-1021
> Fax:   (919) 882-8602
> marc.edwa...@nimbisservices.com
> www.nimbisservices.com
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Monsyne Dragon

On Mar 2, 2012, at 9:17 AM, Jay Pipes wrote:

> On 03/02/2012 05:34 AM, Day, Phil wrote:
>> In our experience (running clusters of several hundred nodes) the DB 
>> performance is not generally the significant factor, so making its calls 
>> non-blocking  gives only a very small increase in processing capacity and 
>> creates other side effects in terms of slowing all eventlets down as they 
>> wait for their turn to run.
> 
> Yes, I believe I said that this was the case at the last design summit -- or 
> rather, I believe I said "is there any evidence that the database is a 
> performance or scalability problem at all"?
> 
>> That shouldn't really be surprising given that the Nova DB is pretty small 
>> and MySQL is a pretty good DB - throw reasonable hardware at the DB server 
>> and give it a bit of TLC from a DBA (remove deleted entries from the DB, add 
>> indexes where the slow query log tells you to, etc) and it shouldn't be the 
>> bottleneck in the system for performance or scalability.
> 
> ++
> 
>> We use the python driver and have experimented with allowing the eventlet 
>> code to make the db calls non-blocking (its not the default setting), and it 
>> works, but didn't give us any significant advantage.
> 
> Yep, identical results to the work that Mark Washenberger did on the same 
> subject.
> 

Has anyone thought about switching to gevent?   It's similar enough to eventlet 
that the port shouldn't be too bad, and because its event loop is in C 
(libevent), there are C mysql drivers (ultramysql) that will work with it 
without blocking.



>> For example in the API server (before we made it properly multi-threaded)
> 
> By "properly multi-threaded" are you instead referring to making the nova-api 
> server multi-*processed* with eventlet greenthread pools in each process? 
> i.e. The way Swift (and now Glance) works? Or are you referring to a 
> different approach entirely?
> 
> > with blocking db calls the server was essentially a serial processing queue 
> > - each request was fully processed before the next.  With non-blocking db 
> > calls we got a lot more apparent concurrency but only at the expense of 
> > making all of the requests equally bad.
> 
> Yep, not surprising.
> 
>> Consider a request takes 10 seconds, where after 5 seconds there is a call 
>> to the DB which takes 1 second, and three are started at the same time:
>> 
>> Blocking:
>> 0 - Request 1 starts
>> 10 - Request 1 completes, request 2 starts
>> 20 - Request 2 completes, request 3 starts
>> 30 - Request 3 competes
>> Request 1 completes in 10 seconds
>> Request 2 completes in 20 seconds
>> Request 3 completes in 30 seconds
>> Ave time: 20 sec
>> 
>> Non-blocking
>> 0 - Request 1 Starts
>> 5 - Request 1 gets to db call, request 2 starts
>> 10 - Request 2 gets to db call, request 3 starts
>> 15 - Request 3 gets to db call, request 1 resumes
>> 19 - Request 1 completes, request 2 resumes
>> 23 - Request 2 completes,  request 3 resumes
>> 27 - Request 3 completes
>> 
>> Request 1 completes in 19 seconds  (+ 9 seconds)
>> Request 2 completes in 24 seconds (+ 4 seconds)
>> Request 3 completes in 27 seconds (- 3 seconds)
>> Ave time: 20 sec
>> 
>> So instead of worrying about making db calls non-blocking we've been working 
>> to make certain eventlets non-blocking - i.e. add sleep(0) calls to long 
>> running iteration loops - which IMO has a much bigger impact on the 
>> performance of the apparent latency of the system.
> 
> Yep, and I think adding a few sleep(0) calls in various places in the Nova 
> codebase (as was recently added in the _sync_power_states() periodic task) is 
> an easy and simple win with pretty much no ill side-effects. :)
> 
> Curious... do you have a list of all the places where sleep(0) calls were 
> inserted in the HP Nova code? I can turn that into a bug report and get to 
> work on adding them...
> 
> All the best,
> -jay
> 
>> Phil
>> 
>> 
>> 
>> -Original Message-
>> From: openstack-bounces+philip.day=hp@lists.launchpad.net 
>> [mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf 
>> Of Brian Lamar
>> Sent: 01 March 2012 21:31
>> To: openstack@lists.launchpad.net
>> Subject: Re: [Openstack] eventlet weirdness
>> 
 How is MySQL access handled in eventlet? Presumably it's external C
 library so it's not going to be monkey patched. Does that make every
 db access call a blocking call? Thanks,
>> 
>>> Nope, it goes through a thread pool.
>> 
>> I feel like this might be an over-simplification. If the question is:
>> 
>> "How is MySQL access handled in nova?"
>> 
>> The answer would be that we use SQLAlchemy which can load any number of 
>> SQL-drivers. These drivers can be either pure Python or C-based drivers. In 
>> the case of pure Python drivers, monkey patching can occur and db calls are 
>> non-blocking. In the case of drivers which contain C code (or perhaps other 
>> blocking calls), db calls will most likely be blocking.
>> 
>> If the qu
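Brian's distinction above — pure-Python drivers can be monkey-patched into cooperative calls, while C drivers block — can be sketched without eventlet itself. The snippet below uses the stdlib sqlite3 module as a stand-in for a blocking C database driver, and concurrent.futures as a stand-in for eventlet's tpool; the function names are illustrative, not taken from any OpenStack code.

```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor

# Stand-in for eventlet.tpool: blocking C-driver calls are handed to a
# real OS thread so the caller's event loop is not stalled.
_pool = ThreadPoolExecutor(max_workers=4)

def blocking_query(db_path, sql):
    # sqlite3 is a C extension: its calls cannot be monkey-patched into
    # cooperative green-thread calls, so they block the calling thread.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

def query_in_thread(db_path, sql):
    # Returns a Future immediately; the blocking work runs elsewhere.
    return _pool.submit(blocking_query, db_path, sql)

future = query_in_thread(":memory:", "SELECT 1 + 1")
print(future.result())  # [(2,)]
```

The same shape is what eventlet's thread pool gives you implicitly: the green thread waiting on the Future yields, while the C call blocks a worker thread instead of the hub.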

Re: [Openstack] eventlet weirdness

2012-03-02 Thread Jay Pipes

On 03/02/2012 04:10 PM, Monsyne Dragon wrote:

Has anyone thought about switching to gevent?   It's similar enough to eventlet 
that the port shouldn't be too bad, and because its event loop is in C 
(libevent), there are C mysql drivers (ultramysql) that will work with it 
without blocking.


Yep, I've thought about doing an experimental branch in Glance to see if 
there's a decent performance benefit. Just got stymied by that damn 24 
hour limit in a day :(


Damn ratelimiting.

-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Joshua Harlow
It could be over-complicated (i.e. an example), but it's a design that lets the 
program think in terms of what tasks need to be accomplished and how to order 
those tasks, and not have to think about how those tasks are actually run (or 
hopefully even what concurrency occurs). Ideally there should be no concurrency 
in each step; that's the whole point of having individual steps :-) A step 
itself shouldn't be concurrent, but the overall "action" should/could be, and 
you leave it up to the "engine" to decide how to run that set of steps. *Just 
my thought*...
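The task/engine idea described here can be made concrete in a few lines. Everything below is a hypothetical illustration: the task names mirror the create-VM example from later in this message, and the "engines" are just a serial runner and a kernel-thread runner built on the stdlib concurrent.futures.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stateless tasks: each takes the action's context dict and
# returns an updated copy (no shared mutable state between steps).
def validate_credentials(ctx):
    return {**ctx, "validated": True}

def get_image(ctx):
    return {**ctx, "image": "cirros"}

def call_libvirt(ctx):
    return {**ctx, "domain": "instance-0001"}

CREATE_VM = [validate_credentials, get_image, call_libvirt]

def serial_engine(tasks, ctx):
    # Steps within one action are sequential by design.
    for task in tasks:
        ctx = task(ctx)
    return ctx

def threaded_engine(actions, ctx):
    # Each whole action runs on its own kernel thread; an eventlet or
    # multiprocessing engine could run the same task lists unchanged.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(serial_engine, action, dict(ctx))
                   for action in actions]
        return [f.result() for f in futures]

result = serial_engine(CREATE_VM, {})
print(result)
print(threaded_engine([CREATE_VM, CREATE_VM], {}))
```

The point of the split is that the task lists never mention threads, greenthreads, or sleep(0); only the engine knows how concurrency happens.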

On 3/2/12 11:38 AM, "Day, Phil"  wrote:

That sounds a bit over complicated to me - Having a string of tasks sounds like 
you still have to think about what the concurrency is within each step.

There is already a good abstraction around the context of each operation - they 
just (I know - big just) need to be running in something that maps to kernel 
threads rather than user space ones.

All I really want is to allow more than one action to run at the same time.  
So if I have two requests to create a snapshot, why can't they both run at the 
same time and still allow other things to happen? I have all these cores 
sitting in my compute node that could be used, but I'm still having to think 
like a punch-card programmer submitting batch jobs to the mainframe ;-)

Right now creating snapshots is pretty close to a DoS attack on a compute node.



From: Joshua Harlow [mailto:harlo...@yahoo-inc.com]
Sent: 02 March 2012 19:23
To: Day, Phil; Chris Behrens
Cc: openstack
Subject: Re: [Openstack] eventlet weirdness

So a thought I had was: what if the design of a component includes, as part of 
its design, the ability to be run with threads or with eventlet or with 
processes?

Say you break everything up into tasks (where a task would produce some 
output/result/side-effect).
A set of tasks could complete some action (i.e., create a VM).
Subtasks could be the following:
0. Validate credentials
1. Get the image
2. Call into libvirt
3. ...

These "tasks", if constructed in a way that makes them stateless, could then be 
chained together to form an action; that action could be given, say, to a 
threaded "engine" that would know how to execute those tasks with threads, or 
to an eventlet "engine" that would do the same with eventlet 
pools/greenthreads/coroutines, or with processes (and so on). This could be one 
way the design of your code abstracts that kind of execution (where eventlet is 
abstracted away from the actual work being done, instead of popping up in calls 
to sleep(0), i.e. the leaky abstraction).

On 3/2/12 11:08 AM, "Day, Phil"  wrote:
I didn't say it was pretty - Given the choice I'd much rather have a threading 
model that really did concurrency and pre-emption in all the right places, and 
it would be really cool if something managed the threads that were started so 
that if a second conflicting request was received it did some proper tidy-up or 
blocking rather than just leaving the race condition to work itself out (then 
we wouldn't have to try and control it by checking vm_state).

However ...   In the current code base, where we only have user-space 
eventlets with no pre-emption, and some activities that need to be prioritised, 
forcing pre-emption with a sleep(0) seems a pretty small bit of untidiness.   
And it works now without a major code refactor.

Always open to other approaches ...

Phil


-Original Message-
From: openstack-bounces+philip.day=hp@lists.launchpad.net 
[mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of 
Chris Behrens
Sent: 02 March 2012 19:00
To: Joshua Harlow
Cc: openstack; Chris Behrens
Subject: Re: [Openstack] eventlet weirdness

It's not just you


On Mar 2, 2012, at 10:35 AM, Joshua Harlow wrote:

> Does anyone else feel that the following seems really "dirty", or is it just 
> me.
>
> "adding a few sleep(0) calls in various places in the Nova codebase
> (as was recently added in the _sync_power_states() periodic task) is
> an easy and simple win with pretty much no ill side-effects. :)"
>
> Dirty in that it feels like there is something wrong from a design point of 
> view.
> Sprinkling "sleep(0)" seems like its a band-aid on a larger problem imho.
> But that's just my gut feeling.
>
> :-(
>
> On 3/2/12 8:26 AM, "Armando Migliaccio"  
> wrote:
>
> I knew you'd say that :P
>
> There you go: https://bugs.launchpad.net/nova/+bug/944145
>
> Cheers,
> Armando
>
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: 02 March 2012 16:22
> > To: Armando Migliaccio
> > Cc: openstack@lists.launchpad.net
> > Subject: Re: [Openstack] eventlet weirdness
> >
> > On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
> > > I'd be cautious to say that no ill side-effects were introduced. I
> > > found a
> > race condition right in the middle of sync_power_states, which I
> > assume was exposed by "breaking" t
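The effect of the sleep(0) calls debated in this thread can be demonstrated without eventlet installed: asyncio.sleep(0) from the stdlib plays the same role as eventlet.sleep(0), yielding control to the scheduler so other coroutines get a turn inside a long loop. A minimal sketch (the names are illustrative):

```python
import asyncio

order = []

async def worker(name, iterations):
    for _ in range(iterations):
        order.append(name)
        # Cooperative yield: without this line, each coroutine would run
        # its whole loop to completion before the other got scheduled.
        await asyncio.sleep(0)

async def main():
    await asyncio.gather(worker("a", 3), worker("b", 3))

asyncio.run(main())
print(order)  # ['a', 'b', 'a', 'b', 'a', 'b'] -- the loops interleave
```

Delete the sleep(0) and the output becomes ['a', 'a', 'a', 'b', 'b', 'b'] — exactly the "greedy event handler" behaviour being discussed.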

Re: [Openstack] eventlet weirdness

2012-03-02 Thread Joshua Harlow
Why has the ship sailed?
This is software we are talking about, right? There is always a v2 (X-1)
;)

On 3/2/12 12:38 PM, "Caitlin Bestler"  wrote:

Duncan McGregor wrote:


>Like so many things that are aesthetic in nature, the statement above is 
>misleading. Using a callback, event-based, deferred/promise oriented system is 
>hard for *some*. It is far, far easier for *others* (myself included).

>It's a matter of perception and personal preference.

I would also agree that coding your application as a series of responses to 
events can produce code that is easier to understand and debug.
And that would be a wonderful discussion if we were starting a new project.

But I hope that nobody is suggesting that we rewrite all of OpenStack code away 
from eventlet pseudo-threading after the fact.
Personally I think it was the wrong decision, but that ship has already sailed.

With event-response coding it is obvious that you have to partition any one 
response into segments that do not take so long to execute that they are 
blocking other events. That remains true when you hide your event-driven model 
with eventlet pseudo-threading.
Inserting sleep(0) calls is the most obvious way to break up an overly event 
handler, given that you've already decided to obfuscate the code to pretend 
that it is a thread.



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Duncan McGreggor
On Fri, Mar 2, 2012 at 4:10 PM, Monsyne Dragon  wrote:
>
> On Mar 2, 2012, at 9:17 AM, Jay Pipes wrote:
>
>> On 03/02/2012 05:34 AM, Day, Phil wrote:
>>> In our experience (running clusters of several hundred nodes) the DB 
>>> performance is not generally the significant factor, so making its calls 
>>> non-blocking  gives only a very small increase in processing capacity and 
>>> creates other side effects in terms of slowing all eventlets down as they 
>>> wait for their turn to run.
>>
>> Yes, I believe I said that this was the case at the last design summit -- or 
>> rather, I believe I said "is there any evidence that the database is a 
>> performance or scalability problem at all"?
>>
>>> That shouldn't really be surprising given that the Nova DB is pretty small 
>>> and MySQL is a pretty good DB - throw reasonable hardware at the DB server 
>>> and give it a bit of TLC from a DBA (remove deleted entries from the DB, 
>>> add indexes where the slow query log tells you to, etc) and it shouldn't be 
>>> the bottleneck in the system for performance or scalability.
>>
>> ++
>>
>>> We use the python driver and have experimented with allowing the eventlet 
>>> code to make the db calls non-blocking (its not the default setting), and 
>>> it works, but didn't give us any significant advantage.
>>
>> Yep, identical results to the work that Mark Washenberger did on the same 
>> subject.
>>
>
> Has anyone thought about switching to gevent?   It's similar enough to 
> eventlet that the port shouldn't be too bad, and because its event loop is 
> in C (libevent), there are C mysql drivers (ultramysql) that will work with 
> it without blocking.

We've been exploring this possibility at DreamHost, and chatted with
some other stackers about it at various meat-space venues. Fwiw, it's
something we'd be very interested in supporting (starting with as much
test coverage as possible of eventlet's current use in OpenStack, to
ensure as pain-free a transition as possible).

d

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RHEL 5 & 6 OpenStack image archive...

2012-03-02 Thread John Paul Walters
David,

We're currently in the process of building Essex-3/Essex-4 RPMs locally at 
USC/ISI for our heterogeneous Openstack builds.  When I looked at the EPEL 
testing repo, it looked like the packages that are currently available are 
right around Essex-1.  Are there any plans to update to the more recent 
versions?  Perhaps we could collaborate.

best,
JP



On Mar 2, 2012, at 4:05 PM, David Busby wrote:

> We currently have openstack available in EPEL testing repo, help in testing 
> is appreciated.
> 
> Sent from my iPhone
> 
> On 2 Mar 2012, at 18:13, "Edgar Magana (eperdomo)"  wrote:
> 
>> Hi Marc,
>>  
>> I ended up creating my own RHEL 6.1 image. If you want I can share it with 
>> you.
>>  
>> Thanks,
>>  
>> Edgar Magana
>> CTO Cloud Computing
>>  
>> From: openstack-bounces+eperdomo=cisco@lists.launchpad.net 
>> [mailto:openstack-bounces+eperdomo=cisco@lists.launchpad.net] On Behalf 
>> Of J. Marc Edwards
>> Sent: Friday, March 02, 2012 8:23 AM
>> To: openstack@lists.launchpad.net
>> Subject: [Openstack] RHEL 5 & 6 OpenStack image archive...
>>  
>> Can someone tell me where these base images are located for use on my 
>> OpenStack deployment?
>> Kind regards, Marc
>> -- 
>> 
>> J. Marc Edwards
>> Lead Architect - Semiconductor Design Portals
>> Nimbis Services, Inc.
>> Skype: (919) 747-3775
>> Cell:  (919) 345-1021
>> Fax:   (919) 882-8602
>> marc.edwa...@nimbisservices.com
>> www.nimbisservices.com
>> 
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Doc Day March 6th - the plan the plan

2012-03-02 Thread Anne Gentle
Hi all -

Please consider this your invitation to take the day on Tuesday to
work on docs, for any audience, but with special attention to the top
priority gaps for Essex.

I've started an Etherpad to work through the priorities list for Doc
Day, feel free to edit as you see fit.
http://etherpad.openstack.org/DocDayMarch6

Also the doc bugs list is ready for the day, pick anything up on here
that's not "In progress" and go to town.
https://bugs.launchpad.net/openstack-manuals

I'd be happy to walk through doc processes the morning of the Doc Day,
and I'll be on IRC all day to answer questions. I'll be in an
#openstack-docday channel for the day.

In person, I'll be ensconced in the lovely Rackspace San Francisco
offices. You can RSVP at
http://www.meetup.com/openstack/events/52466462/ if you want to join
us during the day. Thanks to all the interest and discussion already!

Looking forward to another great doc day. I'm feeling especially good
about the state of this release and appreciative of all the hard work
so far.

Thanks,
Anne

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RHEL 5 & 6 OpenStack image archive...

2012-03-02 Thread Edgar Magana (eperdomo)
Hi David and John,

 

I am not sure if we are talking about the same thing. It seems that Marc needs
base RedHat images to be used on his OpenStack deployment.

It seems that you are talking about some kind of RedHat distribution with
OpenStack already installed; if not, please clarify and let us know what
the EPEL repo link is  :-)

 

Thanks,

 

Edgar

 

From: John Paul Walters [mailto:jwalt...@isi.edu] 
Sent: Friday, March 02, 2012 2:30 PM
To: David Busby
Cc: Edgar Magana (eperdomo); 
Subject: Re: [Openstack] RHEL 5 & 6 OpenStack image archive...

 

David,

 

We're currently in the process of building Essex-3/Essex-4 RPMs locally
at USC/ISI for our heterogeneous Openstack builds.  When I looked at the
EPEL testing repo, it looked like the packages that are currently
available are right around Essex-1.  Are there any plans to update to
the more recent versions?  Perhaps we could collaborate.

 

best,

JP

 

 

 

On Mar 2, 2012, at 4:05 PM, David Busby wrote:





We currently have openstack available in EPEL testing repo, help in
testing is appreciated.

Sent from my iPhone


On 2 Mar 2012, at 18:13, "Edgar Magana (eperdomo)" 
wrote:

Hi Marc,

 

I ended up creating my own RHEL 6.1 image. If you want I can
share it with you.

 

Thanks,

 

Edgar Magana

CTO Cloud Computing

 

From: openstack-bounces+eperdomo=cisco@lists.launchpad.net
[mailto:openstack-bounces+eperdomo=cisco@lists.launchpad.net] On
Behalf Of J. Marc Edwards
Sent: Friday, March 02, 2012 8:23 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] RHEL 5 & 6 OpenStack image archive...

 

Can someone tell me where these base images are located for use
on my OpenStack deployment?
Kind regards, Marc

-- 




J. Marc Edwards
Lead Architect - Semiconductor Design Portals
Nimbis Services, Inc.
Skype: (919) 747-3775
Cell:  (919) 345-1021
Fax:   (919) 882-8602
marc.edwa...@nimbisservices.com
www.nimbisservices.com  

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RHEL 5 & 6 OpenStack image archive...

2012-03-02 Thread Russell Bryant
On 03/02/2012 05:30 PM, John Paul Walters wrote:
> David,
> 
> We're currently in the process of building Essex-3/Essex-4 RPMs locally
> at USC/ISI for our heterogeneous Openstack builds.  When I looked at the
> EPEL testing repo, it looked like the packages that are currently
> available are right around Essex-1.  Are there any plans to update to
> the more recent versions?  Perhaps we could collaborate.

The EPEL packages are currently based on the stable/diablo branch.  We
will be updating them to Essex at some point (presumably after it has
been released).  We have already been packaging the latest Essex
milestones for Fedora 17, though.

-- 
Russell Bryant

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] OpenStack Community Newsletter – March 2, 2012

2012-03-02 Thread Stefano Maffulli
OpenStack Community Newsletter – March 2, 2012

HIGHLIGHTS


  * OpenStack Governance Elections Spring 2012: remember to vote if
you have received the ballot. Results will be announced tomorrow
with a blog post. At 2:00pm PST this is the voter’s turnout:
  * PPB: 104 of 208
  * Keystone PTL: 46 of 75
  * Horizon PTL: 27 of 42
  * Glance PTL: 36 of 56
  * Nova PTL: 105 of 188
  * Rackspace announced the beta version of Cloud Servers powered by
OpenStack

http://www.rackspace.com/blog/rackspace-cloud-servers-powered-by-openstack-beta/
  * Lots of new jobs posted on http://openstack.org/community/jobs/
  * The road to Essex release (a.k.a. what is RC1?)
https://lists.launchpad.net/openstack/msg08223.html
  *  Essex-4 milestone available for Keystone, Glance, Nova and
Horizon https://lists.launchpad.net/openstack/msg08173.html
  * and Quantum
https://lists.launchpad.net/openstack/msg08197.html


EVENTS


  * TryStack.org – a Sandbox for OpenStack! Mar 06, 2012 – San
Francisco, CA http://www.meetup.com/openstack/events/52464312/
  * OpenStack Essex Doc Day Mar 06, 2012 – San Francisco, CA
http://www.meetup.com/openstack/events/52466462/
  * OpenStack Spring 2012 Design Summit Apr 16 – 18 and Conference
Apr 19-20 – San Francisco, California
http://openstack.org/conference/


OTHER NEWS


  * New OpenStack group in Atlanta
https://lists.launchpad.net/openstack/msg08090.html
  * Do you know other groups around the world? Add it to
http://wiki.openstack.org/OpenStackUsersGroup and join
https://launchpad.net/~openstack-community
  * OpenStack Wiki Recent Changes –
http://wiki.openstack.org/RecentChanges
  * XenServer Development
http://wiki.openstack.org/XenServerDevelopment
  * Hypervisor Support Matrix
http://wiki.openstack.org/HypervisorSupportMatrix
  * How to contribute to OpenStack
http://wiki.openstack.org/HowToContribute
  * OpenStack at PyCon 2012
http://wiki.openstack.org/Sprints/PyCon2012
  * Project meeting summary

http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-02-28-21.02.html


COMMUNITY STATISTICS


  *  Activity on the main branch of OpenStack repositories, lines of
code added and removed per developer during week 7 of 2012 (from
Mon Feb 20 00:00:00 UTC 2012 to Mon Feb 27 00:00:00 UTC 2012)




Changes to Tempest project – week 08 2012
Changes to Horizon project – week 08 2012
Changes to Swift project – week 08 2012
Changes to Manuals project – week 08 2012
Changes to Keystone project – week 08 2012
Changes to Quantum project – week 08 2012
Changes to Glance project – week 08 2012
Changes to Nova project – week 08 2012



This weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Vishvananda Ishaya

On Mar 2, 2012, at 12:50 PM, Jay Pipes wrote:
> 
> We are not using multiprocessing, no.
> 
> We simply start multiple worker processes listening on the same socket, with 
> each worker process having an eventlet greenthread pool.
> 
> You can see the code (taken from Swift and adapted by Chris Behrens and Brian 
> Waldon to use the object-oriented Server approach that Glance/Keystone/Nova 
> uses) here:
> 
> https://github.com/openstack/glance/blob/master/glance/common/wsgi.py
> 
> There is a worker = XXX configuration option that controls the number of 
> worker processes created on server startup. A worker value of 0 indicates to 
> run identically to the way Nova currently runs (one process with an eventlet 
> pool of greenthreads)

This would be excellent to add to nova as an option for performance reasons, 
especially since you can fall back to the 0 version. I'm always concerned about 
mixing threading and eventlet, as it leads to really odd bugs, but it sounds 
like HP has vetted it.  If we keep 0 as the default I don't see any reason why 
it couldn't be added.
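The multi-process pattern Jay describes — several worker processes accepting on one shared listening socket, each free to run its own greenthread pool — boils down to binding before forking. A minimal stdlib sketch (POSIX only; no eventlet involved, and the one-shot worker protocol is invented purely for illustration):

```python
import os
import socket

def worker(listener, worker_id):
    # Each worker process accepts on the shared socket; the kernel hands
    # any given connection to exactly one accepter.  In Glance/Swift the
    # accepted connection would be served by an eventlet greenthread pool.
    conn, _addr = listener.accept()
    conn.sendall(b"worker-%d" % worker_id)
    conn.close()
    listener.close()
    os._exit(0)

# Bind and listen *before* forking so every child inherits the socket.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(5)
port = listener.getsockname()[1]

pids = []
for i in range(2):
    pid = os.fork()
    if pid == 0:
        worker(listener, i)  # child never returns
    pids.append(pid)

# Two client connections: each worker accepts exactly one of them.
replies = set()
for _ in range(2):
    client = socket.create_connection(("127.0.0.1", port))
    replies.add(client.recv(64))
    client.close()

for pid in pids:
    os.waitpid(pid, 0)
listener.close()
print(sorted(replies))
```

With workers = 0 you would simply skip the fork loop and serve from the parent, which matches the fallback behaviour described above.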

Vish___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Vishvananda Ishaya

On Mar 2, 2012, at 2:11 PM, Duncan McGreggor wrote:

> On Fri, Mar 2, 2012 at 4:10 PM, Monsyne Dragon  wrote:
>> 
>> 
>> Has anyone thought about switching to gevent?   It's similar enough to 
>> eventlet that the port shouldn't be too bad, and because its event loop is 
>> in C (libevent), there are C mysql drivers (ultramysql) that will work with 
>> it without blocking.
> 
> We've been exploring this possibility at DreamHost, and chatted with
> some other stackers about it at various meat-space venues. Fwiw, it's
> something we'd be very interested in supporting (starting with as much
> test coverage as possible of eventlet's current use in OpenStack, to
> ensure as pain-free a transition as possible).
> 
> d

I would be for an experimental try at this.  Based on the experience of 
starting with twisted and moving to eventlet, I can almost guarantee that we 
will run into a new set of issues.  Concurrency is difficult no matter which 
method/library you use and each change brings a new set of challenges.

That said, gevent is similar enough to eventlet that I think we will at least 
be dealing with the same class of problems, so it might be less painful than 
moving to something totally different like threads, multiprocessing, or (back 
to) twisted. If there were significant performance benefits to switching, it 
would be worth exploring.

I wouldn't want to devote a huge amount of time to this unless we see a 
significant reason to switch, so hopefully Jay gets around to testing it out.

Vish
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Yun Mao
First I agree that having blocking DB calls is no big deal given the
way Nova uses mysql and reasonably powerful db server hardware.

However I'd like to point out that the math below is misleading (the
average time for the nonblocking case is also miscalculated but it's
not my point). The number that matters more in real life is
throughput. For the blocking case it's 3/30 = 0.1 request per second.
For the non-blocking case it's 3/27=0.11 requests per second. That
means if there is a request coming in every 9 seconds constantly, the
blocking system will eventually explode but the nonblocking system can
still handle it. Therefore, the non-blocking one should be preferred.
Thanks,

Yun
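The correction above is easy to check with a few lines of arithmetic over the two timelines quoted below, assuming (as in the example) that all three requests arrive at t=0:

```python
# Completion times read off the blocking and non-blocking timelines;
# all three requests arrive at t=0.
blocking_done = [10, 20, 30]
nonblocking_done = [19, 23, 27]

def stats(done_times):
    average_latency = sum(done_times) / len(done_times)
    throughput = len(done_times) / max(done_times)  # requests per second
    return average_latency, throughput

b_avg, b_tp = stats(blocking_done)
n_avg, n_tp = stats(nonblocking_done)
print(b_avg, round(b_tp, 3))  # 20.0 0.1
print(n_avg, round(n_tp, 3))  # 23.0 0.111 -- higher latency, higher throughput
```

So the non-blocking average is 23 s rather than the quoted 20 s, but its throughput (3/27 ≈ 0.111 req/s) still beats the blocking case (3/30 = 0.1 req/s), which is the number that determines whether a steady arrival rate eventually overwhelms the system.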

>
> For example in the API server (before we made it properly multi-threaded) 
> with blocking db calls the server was essentially a serial processing queue - 
> each request was fully processed before the next.  With non-blocking db calls 
> we got a lot more apparent concurrency but only at the expense of making all 
> of the requests equally bad.
>
> Consider a request takes 10 seconds, where after 5 seconds there is a call to 
> the DB which takes 1 second, and three are started at the same time:
>
> Blocking:
> 0 - Request 1 starts
> 10 - Request 1 completes, request 2 starts
> 20 - Request 2 completes, request 3 starts
> 30 - Request 3 competes
> Request 1 completes in 10 seconds
> Request 2 completes in 20 seconds
> Request 3 completes in 30 seconds
> Ave time: 20 sec
>
>
> Non-blocking
> 0 - Request 1 Starts
> 5 - Request 1 gets to db call, request 2 starts
> 10 - Request 2 gets to db call, request 3 starts
> 15 - Request 3 gets to db call, request 1 resumes
> 19 - Request 1 completes, request 2 resumes
> 23 - Request 2 completes,  request 3 resumes
> 27 - Request 3 completes
>
> Request 1 completes in 19 seconds  (+ 9 seconds)
> Request 2 completes in 24 seconds (+ 4 seconds)
> Request 3 completes in 27 seconds (- 3 seconds)
> Ave time: 20 sec
>
> So instead of worrying about making db calls non-blocking we've been working 
> to make certain eventlets non-blocking - i.e. add sleep(0) calls to long 
> running iteration loops - which IMO has a much bigger impact on the 
> performance of the apparent latency of the system. Thanks for the 
> explanation. Let me see if I understand this.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Kapil Thangavelu
Excerpts from Monsyne Dragon's message of 2012-03-02 16:10:01 -0500:
> 
> On Mar 2, 2012, at 9:17 AM, Jay Pipes wrote:
> 
> > On 03/02/2012 05:34 AM, Day, Phil wrote:
> >> In our experience (running clusters of several hundred nodes) the DB 
> >> performance is not generally the significant factor, so making its calls 
> >> non-blocking  gives only a very small increase in processing capacity and 
> >> creates other side effects in terms of slowing all eventlets down as they 
> >> wait for their turn to run.
> > 
> > Yes, I believe I said that this was the case at the last design summit -- 
> > or rather, I believe I said "is there any evidence that the database is a 
> > performance or scalability problem at all"?
> > 
> >> That shouldn't really be surprising given that the Nova DB is pretty small 
> >> and MySQL is a pretty good DB - throw reasonable hardware at the DB server 
> >> and give it a bit of TLC from a DBA (remove deleted entries from the DB, 
> >> add indexes where the slow query log tells you to, etc) and it shouldn't 
> >> be the bottleneck in the system for performance or scalability.
> > 
> > ++
> > 
> >> We use the python driver and have experimented with allowing the eventlet 
> >> code to make the db calls non-blocking (it's not the default setting), and 
> >> it works, but didn't give us any significant advantage.
> > 
> > Yep, identical results to the work that Mark Washenberger did on the same 
> > subject.
> > 
> 
> Has anyone thought about switching to gevent?  It's similar enough to 
> eventlet that the port shouldn't be too bad, and because its event loop is 
> in C (libevent), there are C mysql drivers (ultramysql) that will work with 
> it without blocking.


Switching to gevent won't fix the structural problems with the codebase that 
necessitated sleeps for greenlet switching. Refactoring to an architecture 
more amenable to decomposing API requests into discrete, yieldable tasks 
would help. Incidentally, ultramysql is not DB-API compliant, and won't work 
with SQLAlchemy.

-kapil
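
[Editorial note: the "add sleep(0) calls to long running iteration loops"
trick discussed in this thread looks roughly like the sketch below. The
function and parameter names are hypothetical; yield_control stands in for
eventlet.sleep so the example runs without eventlet installed.]

```python
def process_items(items, yield_control=None, every=100):
    # A long-running loop that periodically gives other green threads a turn.
    # Under eventlet you would pass yield_control=eventlet.sleep; calling it
    # with 0 reschedules the current greenlet without actually sleeping, so
    # other eventlets stop starving while this loop runs.
    total = 0
    for i, item in enumerate(items):
        total += item                 # stand-in for the real per-item work
        if yield_control is not None and i % every == 0:
            yield_control(0)          # cooperative yield point
    return total

print(process_items(range(10), yield_control=lambda s: None))  # 45
```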




[Openstack] how to connect to private network with cloudpipe?

2012-03-02 Thread .。o 0 O泡泡
I am using VLANManager mode for nova-network, and it seems I need cloudpipe 
to connect to the instances from the internet.

I tried to deploy cloudpipe following 
http://nova.openstack.org/devref/cloudpipe.html but I don't understand all of 
its steps.

In the "Create a Cloudpipe Image" step, I don't understand how 
"wget http://169.254.169.254/latest/user-data -O /tmp/payload.b64" is 
supposed to work, because it responds with 404 Not Found.
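
[Editorial note: 169.254.169.254 is the EC2-style metadata address, which is
normally only reachable from inside a running guest instance, not from the
host, so a failure on the host is expected. Below is a hedged sketch of that
fetch-and-decode step; the base64 decoding is an assumption based only on the
payload.b64 filename, and the function names are hypothetical.]

```python
import base64
import urllib.request

METADATA_URL = "http://169.254.169.254/latest/user-data"

def decode_payload(raw):
    # The doc saves the response as payload.b64, which suggests base64 content.
    return base64.b64decode(raw)

def fetch_payload(url=METADATA_URL, timeout=5):
    # Only expected to work from inside a guest instance; elsewhere it
    # errors out, much like the failure described above.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return decode_payload(resp.read())

# Round-trip check of the decoding step alone (no network needed):
print(decode_payload(base64.b64encode(b"vpn-config")))  # b'vpn-config'
```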

Could it be because I have installed Horizon?

Has anyone connected to their cloud from the internet with VLANManager? If 
so, can you please tell me what to do?

thanks very much!!!