[Openstack] problem with uploading virtual image

2012-03-02 Thread Mahdi Njim
Hi
I want to upload an image to OpenStack, but even after I set the proper
credentials, when I use uec-publish-tarball I get this error message:
cert must be specified.
private key must be specified.
user must be specified.
ec2cert must be specified.
When I type euca-describe-images or nova list it works just fine.
Any ideas?


Re: [Openstack] Essex-4 horizon error

2012-03-02 Thread Thierry Carrez
darkfower wrote:
 I got the Horizon (Essex-4) package from
 https://launchpad.net/horizon/essex/essex-4/+download/horizon-2012.1%7Ee4.tar.gz,
 but after I untar the Horizon tarball, when I exec python setup.py
 install I get an error, as follows:
 
 Traceback (most recent call last):
   File "setup.py", line 35, in <module>
     long_description=read('README.rst'),
   File "setup.py", line 27, in read
     return open(os.path.join(os.path.dirname(__file__), fname)).read()
 IOError: [Errno 2] No such file or directory: 'README.rst'

Yes, apparently the Horizon E4 tarball is very incomplete.

Unfortunately, nobody seems to actually test the milestone-proposed
tarballs, so this kind of issue is only detected after publication. The
fact that the directory structure was silently changed just before E4
doesn't really help either.

Follow https://bugs.launchpad.net/horizon/+bug/944763 for progress on
this issue.

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] eventlet weirdness

2012-03-02 Thread Armando Migliaccio


 -Original Message-
 From: openstack-bounces+armando.migliaccio=eu.citrix@lists.launchpad.net
 [mailto:openstack-
 bounces+armando.migliaccio=eu.citrix@lists.launchpad.net] On Behalf Of Jay
 Pipes
 Sent: 02 March 2012 15:17
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] eventlet weirdness
 
 On 03/02/2012 05:34 AM, Day, Phil wrote:
  In our experience (running clusters of several hundred nodes) the DB
 performance is not generally the significant factor, so making its calls non-
 blocking  gives only a very small increase in processing capacity and creates
 other side effects in terms of slowing all eventlets down as they wait for
 their turn to run.
 
 Yes, I believe I said that this was the case at the last design summit
 -- or rather, I believe I said "is there any evidence that the database is a
 performance or scalability problem at all?"
 
  That shouldn't really be surprising given that the Nova DB is pretty small
 and MySQL is a pretty good DB - throw reasonable hardware at the DB server and
 give it a bit of TLC from a DBA (remove deleted entries from the DB, add
 indexes where the slow query log tells you to, etc) and it shouldn't be the
 bottleneck in the system for performance or scalability.
 
 ++
 
  We use the python driver and have experimented with allowing the eventlet
 code to make the db calls non-blocking (it's not the default setting), and it
 works, but didn't give us any significant advantage.
 
 Yep, identical results to the work that Mark Washenberger did on the same
 subject.
 
  For example in the API server (before we made it properly
  multi-threaded)
 
 By "properly multi-threaded" are you instead referring to making the nova-api
 server multi-*processed* with eventlet greenthread pools in each process? i.e.
 The way Swift (and now Glance) works? Or are you referring to a different
 approach entirely?
 
   with blocking db calls the server was essentially a serial processing queue
 - each request was fully processed before the next.  With non-blocking db
 calls we got a lot more apparent concurrency, but only at the expense of making
 all of the requests equally bad.
 
 Yep, not surprising.
 
  Consider a request takes 10 seconds, where after 5 seconds there is a call
 to the DB which takes 1 second, and three are started at the same time:
 
  Blocking:
  0 - Request 1 starts
  10 - Request 1 completes, request 2 starts
  20 - Request 2 completes, request 3 starts
  30 - Request 3 completes
  Request 1 completes in 10 seconds
  Request 2 completes in 20 seconds
  Request 3 completes in 30 seconds
  Ave time: 20 sec
 
  Non-blocking
  0 - Request 1 Starts
  5 - Request 1 gets to db call, request 2 starts
  10 - Request 2 gets to db call, request 3 starts
  15 - Request 3 gets to db call, request 1 resumes
  19 - Request 1 completes, request 2 resumes
  23 - Request 2 completes,  request 3 resumes
  27 - Request 3 completes
 
  Request 1 completes in 19 seconds (+ 9 seconds)
  Request 2 completes in 23 seconds (+ 3 seconds)
  Request 3 completes in 27 seconds (- 3 seconds)
  Ave time: 23 sec
 
  So instead of worrying about making db calls non-blocking we've been working
 to make certain eventlets non-blocking - i.e. add sleep(0) calls to
 long-running iteration loops - which IMO has a much bigger impact on the
 apparent latency of the system.
 
 Yep, and I think adding a few sleep(0) calls in various places in the Nova
 codebase (as was recently added in the _sync_power_states() periodic task) is
 an easy and simple win with pretty much no ill side-effects. :)

I'd be cautious to say that no ill side-effects were introduced. I found a race 
condition right in the middle of sync_power_states, which I assume was exposed 
by breaking the task deliberately. 

 
 Curious... do you have a list of all the places where sleep(0) calls were
 inserted in the HP Nova code? I can turn that into a bug report and get to
 work on adding them...
 
 All the best,
 -jay
 
  Phil
 
 
 
  -Original Message-
  From: openstack-bounces+philip.day=hp@lists.launchpad.net
  [mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On
  Behalf Of Brian Lamar
  Sent: 01 March 2012 21:31
  To: openstack@lists.launchpad.net
  Subject: Re: [Openstack] eventlet weirdness
 
  How is MySQL access handled in eventlet? Presumably it's an external C
  library so it's not going to be monkey patched. Does that make every
  db access call a blocking call? Thanks,
 
  Nope, it goes through a thread pool.
 
  I feel like this might be an over-simplification. If the question is:
 
  How is MySQL access handled in nova?
 
  The answer would be that we use SQLAlchemy which can load any number of SQL
 drivers. These drivers can be either pure Python or C-based drivers. In the
 case of pure Python drivers, monkey patching can occur and db calls are
 non-blocking. In the case of drivers which contain C code (or perhaps other
 blocking calls), db calls will most likely be blocking.
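
(As an aside: a minimal sketch of what the monkey-patching case looks like in
practice. eventlet.monkey_patch() is the standard API; the pure-Python driver
and connection parameters here are illustrative assumptions, not what Nova
ships with.)

    import eventlet
    eventlet.monkey_patch()  # replaces socket, time, etc. with green versions

    # After patching, a pure-Python driver's socket I/O yields to other
    # greenthreads instead of blocking the whole process.
    import pymysql  # hypothetical choice of pure-Python MySQL driver

    conn = pymysql.connect(host='127.0.0.1', user='nova', db='nova')
    cur = conn.cursor()
    cur.execute('SELECT 1')
    print(cur.fetchall())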

Re: [Openstack] eventlet weirdness

2012-03-02 Thread Day, Phil
 By "properly multi-threaded" are you instead referring to making the nova-api 
 server multi-*processed* with eventlet greenthread pools in each process? 
 i.e. The way Swift (and now Glance) works? Or are you referring to a 
 different approach entirely?

Yep - following your posting in here pointing to the glance changes we 
back-ported that into the Diablo API server.   We're now running each API 
server with 20 OS processes and 20 EC2 processes, and the world looks a lot 
happier.  The same changes were being done in parallel into Essex by someone in 
the community, I thought?

 Curious... do you have a list of all the places where sleep(0) calls were 
 inserted in the HP Nova code? I can turn that into a bug report and get to 
 work on adding them... 

So far the only two cases we've done this are in the _sync_power_state and  in 
the security group refresh handling 
(libvirt/firewall/do_refresh_security_group_rules) - which we modified to only 
refresh for instances in the group and added a sleep in the loop (I need to 
finish writing the bug report for this one).
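
(The pattern being described is roughly the following; a sketch, not the
actual HP patch, and refresh_rules_for is a hypothetical helper.)

    from eventlet import greenthread

    def do_refresh_security_group_rules(instances_in_group):
        for instance in instances_in_group:
            refresh_rules_for(instance)  # hypothetical per-instance refresh
            # Yield explicitly so a long loop over many instances doesn't
            # starve every other greenthread in the process.
            greenthread.sleep(0)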

I have contemplated doing something similar in the image code when reading 
chunks from glance - but am slightly worried that in this case the only thing 
that currently stops two creates for the same image from making separate 
requests to glance might be that one gets queued behind the other.  It would be 
nice to do the same thing on snapshot (as this can also be a real hog), but 
there the transfer is handled completely within the glance client.   A more 
radical approach would be to split out the image handling code from compute 
manager into a separate (co-hosted) image_manager so at least only commands 
which need interaction with glance will block each other.

Phil




-Original Message-
From: openstack-bounces+philip.day=hp@lists.launchpad.net 
[mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of 
Jay Pipes
Sent: 02 March 2012 15:17
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] eventlet weirdness

On 03/02/2012 05:34 AM, Day, Phil wrote:
 In our experience (running clusters of several hundred nodes) the DB 
 performance is not generally the significant factor, so making its calls 
 non-blocking  gives only a very small increase in processing capacity and 
 creates other side effects in terms of slowing all eventlets down as they 
 wait for their turn to run.

Yes, I believe I said that this was the case at the last design summit
-- or rather, I believe I said "is there any evidence that the database is a 
performance or scalability problem at all?"

 That shouldn't really be surprising given that the Nova DB is pretty small 
 and MySQL is a pretty good DB - throw reasonable hardware at the DB server 
 and give it a bit of TLC from a DBA (remove deleted entries from the DB, add 
 indexes where the slow query log tells you to, etc) and it shouldn't be the 
 bottleneck in the system for performance or scalability.

++

 We use the python driver and have experimented with allowing the eventlet 
 code to make the db calls non-blocking (it's not the default setting), and it 
 works, but didn't give us any significant advantage.

Yep, identical results to the work that Mark Washenberger did on the same 
subject.

 For example in the API server (before we made it properly 
 multi-threaded)

By "properly multi-threaded" are you instead referring to making the nova-api 
server multi-*processed* with eventlet greenthread pools in each process? i.e. 
The way Swift (and now Glance) works? Or are you referring to a different 
approach entirely?

  with blocking db calls the server was essentially a serial processing queue 
  - each request was fully processed before the next.  With non-blocking db 
  calls we got a lot more apparent concurrency, but only at the expense of 
  making all of the requests equally bad.

Yep, not surprising.

 Consider a request takes 10 seconds, where after 5 seconds there is a call to 
 the DB which takes 1 second, and three are started at the same time:

 Blocking:
 0 - Request 1 starts
 10 - Request 1 completes, request 2 starts
 20 - Request 2 completes, request 3 starts
 30 - Request 3 completes
 Request 1 completes in 10 seconds
 Request 2 completes in 20 seconds
 Request 3 completes in 30 seconds
 Ave time: 20 sec

 Non-blocking
 0 - Request 1 Starts
 5 - Request 1 gets to db call, request 2 starts
 10 - Request 2 gets to db call, request 3 starts
 15 - Request 3 gets to db call, request 1 resumes
 19 - Request 1 completes, request 2 resumes
 23 - Request 2 completes,  request 3 resumes
 27 - Request 3 completes

 Request 1 completes in 19 seconds (+ 9 seconds)
 Request 2 completes in 23 seconds (+ 3 seconds)
 Request 3 completes in 27 seconds (- 3 seconds)
 Ave time: 23 sec

 So instead of worrying about making db calls non-blocking we've been working 
 to make certain eventlets non-blocking - i.e. add sleep(0) calls to 
 long-running iteration loops - which IMO has a much bigger impact on the 
 apparent latency of the system.
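
(Phil's arithmetic can be reproduced with a toy eventlet program; a sketch in
which unpatched time.sleep stands in for CPU-bound work, blocking the whole
process, while eventlet.sleep(1) stands in for the non-blocking DB call.)

    import time

    import eventlet

    START = time.time()

    def request(name):
        time.sleep(5)      # first CPU burst: blocks every greenthread
        eventlet.sleep(1)  # "DB call": cooperatively yields to the others
        time.sleep(4)      # remaining CPU burst
        print('%s done at t=%.0fs' % (name, time.time() - START))

    threads = [eventlet.spawn(request, 'request %d' % i) for i in (1, 2, 3)]
    for t in threads:
        t.wait()
    # Prints completions at roughly t=19, 23 and 27 seconds: average 23.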

Re: [Openstack] eventlet weirdness

2012-03-02 Thread Jay Pipes

On 03/02/2012 10:52 AM, Armando Migliaccio wrote:

I'd be cautious to say that no ill side-effects were introduced. I found a race condition 
right in the middle of sync_power_states, which I assume was exposed by 
breaking the task deliberately.


Such a party-pooper! ;)

Got a link to the bug report for me?

Thanks!
-jay



[Openstack] RHEL 5 & 6 OpenStack image archive...

2012-03-02 Thread J. Marc Edwards
Can someone tell me where these base images are located for use on my
OpenStack deployment?
Kind regards, Marc
-- 

J. Marc Edwards
Lead Architect - Semiconductor Design Portals
Nimbis Services, Inc.
Skype: (919) 747-3775
Cell:  (919) 345-1021
Fax:   (919) 882-8602
marc.edwa...@nimbisservices.com
www.nimbisservices.com



Re: [Openstack] eventlet weirdness

2012-03-02 Thread Armando Migliaccio
I knew you'd say that :P

There you go: https://bugs.launchpad.net/nova/+bug/944145

Cheers,
Armando

 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: 02 March 2012 16:22
 To: Armando Migliaccio
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] eventlet weirdness
 
 On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
  I'd be cautious to say that no ill side-effects were introduced. I found a
 race condition right in the middle of sync_power_states, which I assume was
 exposed by breaking the task deliberately.
 
 Such a party-pooper! ;)
 
 Got a link to the bug report for me?
 
 Thanks!
 -jay



Re: [Openstack] RHEL 5 & 6 OpenStack image archive...

2012-03-02 Thread Edgar Magana (eperdomo)
Hi Marc,

I ended up creating my own RHEL 6.1 image. If you want I can share it
with you.

Thanks,

Edgar Magana

CTO Cloud Computing

From: openstack-bounces+eperdomo=cisco@lists.launchpad.net
[mailto:openstack-bounces+eperdomo=cisco@lists.launchpad.net] On
Behalf Of J. Marc Edwards
Sent: Friday, March 02, 2012 8:23 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] RHEL 5 & 6 OpenStack image archive...

 

Can someone tell me where these base images are located for use on my
OpenStack deployment?
Kind regards, Marc

-- 



J. Marc Edwards
Lead Architect - Semiconductor Design Portals
Nimbis Services, Inc.
Skype: (919) 747-3775
Cell:  (919) 345-1021
Fax:   (919) 882-8602
marc.edwa...@nimbisservices.com
www.nimbisservices.com



Re: [Openstack] Test Dependencies

2012-03-02 Thread Jay Pipes

On 03/01/2012 08:12 PM, Maru Newby wrote:

Is there any interest in adding unittest2 to the test dependencies for 
openstack projects?  I have found its enhanced assertions and 'with 
self.assertRaises' very useful in writing tests.  I see there have been past 
bugs that mentioned unittest2, and am wondering if the reasons for not adopting 
it still stand.


++ to unittest2. Frankly, it's a dependency of sqlalchemy, so it gets 
installed anyway during any installation. Might as well use it IMHO.
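
(For anyone who hasn't used it, the assertion style in question looks like
this; a minimal sketch.)

    import unittest2

    class TestParsing(unittest2.TestCase):
        def test_bad_input_raises(self):
            # The context-manager form also exposes the exception instance.
            with self.assertRaises(ValueError) as ctx:
                int('not-a-number')
            self.assertIn('not-a-number', str(ctx.exception))

    if __name__ == '__main__':
        unittest2.main()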



Separately, is the use of mox open to discussion?  mock was recently added as a 
dependency to quantum to perform library patching, which isn't supported by mox 
as far as I know.


This is incorrect. pymox's stubout module can be used to perform library 
patching. You can see examples of this in Glance and Nova. For example:


https://github.com/openstack/glance/blob/master/glance/tests/unit/test_swift_store.py#L55

 The ability to do non-replay mocking is another useful feature of 
mock.  I'm not suggesting that mox be replaced, but am wondering if mock 
could be an additional dependency and used when the functionality 
provided by mox falls short.


Might be easier to answer with some example code... would you mind 
pastebin'ing an example that shows what mock can do that mox can't?


Thanks much!
-jay
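
(For readers following along, a rough sketch of the non-replay style being
discussed; hypothetical code, not Maru's example. With mock there is no
record/replay phase: you stub first and assert on the calls afterwards.)

    import mock  # the standalone 'mock' library

    def fetch_status(client):
        return client.get('/status')

    def test_fetch_status():
        fake_client = mock.Mock()
        fake_client.get.return_value = 200
        # No replay step: just call the code under test...
        assert fetch_status(fake_client) == 200
        # ...and verify the interactions after the fact.
        fake_client.get.assert_called_once_with('/status')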


Thanks,


Maru


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Lorin Hochstein
Looks like a textbook example of a leaky abstraction
(http://www.joelonsoftware.com/articles/LeakyAbstractions.html) to me.

Take care,

Lorin
--
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com


On Mar 2, 2012, at 1:35 PM, Joshua Harlow wrote:

 Does anyone else feel that the following seems really “dirty”, or is it just 
 me.
 
 “adding a few sleep(0) calls in various places in the
 Nova codebase (as was recently added in the _sync_power_states()
 periodic task) is an easy and simple win with pretty much no ill
 side-effects. :)”
 
 Dirty in that it feels like there is something wrong from a design point of 
 view.
 Sprinkling “sleep(0)” seems like it's a band-aid on a larger problem imho. 
 But that’s just my gut feeling.
 
 :-(
 
 On 3/2/12 8:26 AM, Armando Migliaccio armando.migliac...@eu.citrix.com 
 wrote:
 
 I knew you'd say that :P
 
 There you go: https://bugs.launchpad.net/nova/+bug/944145
 
 Cheers,
 Armando
 
  -Original Message-
  From: Jay Pipes [mailto:jaypi...@gmail.com]
  Sent: 02 March 2012 16:22
  To: Armando Migliaccio
  Cc: openstack@lists.launchpad.net
  Subject: Re: [Openstack] eventlet weirdness
 
  On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
   I'd be cautious to say that no ill side-effects were introduced. I found a
  race condition right in the middle of sync_power_states, which I assume was
  exposed by breaking the task deliberately.
 
  Such a party-pooper! ;)
 
  Got a link to the bug report for me?
 
  Thanks!
  -jay
 





Re: [Openstack] eventlet weirdness

2012-03-02 Thread Chris Behrens
It's not just you


On Mar 2, 2012, at 10:35 AM, Joshua Harlow wrote:

 Does anyone else feel that the following seems really “dirty”, or is it just 
 me.
 
 “adding a few sleep(0) calls in various places in the
 Nova codebase (as was recently added in the _sync_power_states()
 periodic task) is an easy and simple win with pretty much no ill
 side-effects. :)”
 
 Dirty in that it feels like there is something wrong from a design point of 
 view.
 Sprinkling “sleep(0)” seems like it's a band-aid on a larger problem imho. 
 But that’s just my gut feeling.
 
 :-(
 
 On 3/2/12 8:26 AM, Armando Migliaccio armando.migliac...@eu.citrix.com 
 wrote:
 
 I knew you'd say that :P
 
 There you go: https://bugs.launchpad.net/nova/+bug/944145
 
 Cheers,
 Armando
 
  -Original Message-
  From: Jay Pipes [mailto:jaypi...@gmail.com]
  Sent: 02 March 2012 16:22
  To: Armando Migliaccio
  Cc: openstack@lists.launchpad.net
  Subject: Re: [Openstack] eventlet weirdness
 
  On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
   I'd be cautious to say that no ill side-effects were introduced. I found a
  race condition right in the middle of sync_power_states, which I assume was
  exposed by breaking the task deliberately.
 
  Such a party-pooper! ;)
 
  Got a link to the bug report for me?
 
  Thanks!
  -jay
 




Re: [Openstack] eventlet weirdness

2012-03-02 Thread Armando Migliaccio


 -Original Message-
 From: Eric Windisch [mailto:e...@cloudscaling.com]
 Sent: 02 March 2012 19:04
 To: Joshua Harlow
 Cc: Armando Migliaccio; Jay Pipes; openstack
 Subject: Re: [Openstack] eventlet weirdness
 
 The problem is that unless you sleep(0), eventlet only switches context when
 you hit a file descriptor.
 
 As long as python coroutines are used, we should put sleep(0) wherever it is
 expected that there will be a long-running loop where file descriptors are not
 touched. As noted elsewhere in this thread, MySQL file descriptors don't
 count, they're not coroutine friendly.
 
 The premise is that CPUs are pretty fast and move quickly from one file
 descriptor call to another, that blocking on these descriptors is what a
 process mostly waits on, and that this is an easy and obvious place to switch
 coroutines via monkey-patching.
 
 That said, it shouldn't be necessary to sprinkle sleep(0) calls. They should
 be strategically placed, as necessary.

I agree, but then the whole assumption of adopting eventlet to simplify the 
programming model is hindered by the fact that one has to think harder about 
what one is doing... Nova could've kept Twisted for that matter. The programming 
model would have been harder, but at least it would have been cleaner and free 
from icky patching (that's my own opinion anyway).

 
 race-conditions around coroutine switching sound more like thread-safety
 issues...
 

Yes. There is a fine balance to be struck here: do you let potential races 
appear in your system and deal with them on a case-by-case basis, or do you 
introduce mutexes and deal with potential inefficiency and/or deadlocks? I'd 
rather go with the former here.

 --
 Eric Windisch
 
 
 On Friday, March 2, 2012 at 1:35 PM, Joshua Harlow wrote:
 
  Does anyone else feel that the following seems really “dirty”, or is it
 just me.
 
  “adding a few sleep(0) calls in various places in the Nova codebase
  (as was recently added in the _sync_power_states() periodic task) is
  an easy and simple win with pretty much no ill side-effects. :)”
 
  Dirty in that it feels like there is something wrong from a design point of
 view.
  Sprinkling “sleep(0)” seems like it's a band-aid on a larger problem imho.
  But that’s just my gut feeling.
 
  :-(
 
  On 3/2/12 8:26 AM, Armando Migliaccio armando.migliac...@eu.citrix.com
 wrote:
 
   I knew you'd say that :P
  
   There you go: https://bugs.launchpad.net/nova/+bug/944145
  
   Cheers,
   Armando
  
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: 02 March 2012 16:22
To: Armando Migliaccio
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] eventlet weirdness
   
On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
 I'd be cautious to say that no ill side-effects were introduced.
 I found a
   
race condition right in the middle of sync_power_states, which I
assume was exposed by breaking the task deliberately.
   
Such a party-pooper! ;)
   
Got a link to the bug report for me?
   
Thanks!
-jay
  
  
 
 



Re: [Openstack] eventlet weirdness

2012-03-02 Thread Joshua Harlow
So a thought I had was: what if the design of a component builds in, as part of 
its design, the ability to be run with threads or with eventlet or with 
processes?

Say you break everything up into tasks (where a task would produce some 
output/result/side-effect).
A set of tasks could complete some action (ie, create a vm).
Subtasks could be the following:
0. Validate credentials
1. Get the image
2. Call into libvirt
3. ...

If these tasks are constructed in a way that makes them stateless, then they 
could be chained together to form an action, and that action could be given 
say to a threaded engine that would know how to execute those tasks with 
threads, or it could be given to an eventlet engine that would do the same 
with eventlet pools/greenthreads/coroutines, or with processes (and so on). This 
could be one way the design of your code abstracts that kind of execution 
(where eventlet is abstracted away from the actual work being done, instead of 
popping up in calls to sleep(0), i.e. the leaky abstraction).
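
(A minimal sketch of the idea, all names hypothetical: the tasks know nothing
about eventlet, only the engine does.)

    import eventlet

    class Task(object):
        """A stateless unit of work; subclasses implement run()."""
        def run(self, context):
            raise NotImplementedError

    class ValidateCredentials(Task):
        def run(self, context):
            context['validated'] = True

    class FetchImage(Task):
        def run(self, context):
            context['image'] = 'image-ref'

    def run_with_eventlet(tasks, context):
        # An "eventlet engine": each task runs on a greenthread. A threaded
        # or multiprocess engine could execute the same task list unchanged.
        for task in tasks:
            eventlet.spawn(task.run, context).wait()
        return context

    create_vm = [ValidateCredentials(), FetchImage()]
    print(run_with_eventlet(create_vm, {}))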

On 3/2/12 11:08 AM, Day, Phil philip@hp.com wrote:

I didn't say it was pretty - Given the choice I'd much rather have a threading 
model that really did concurrency and pre-emption all the right places, and it 
would be really cool if something managed the threads that were started so that 
if a second conflicting request was received it did some proper tidy up or 
blocking rather than just leaving the race condition to work itself out (then 
we wouldn't have to try and control it by checking vm_state).

However ...   In the current code base where we only have user space based 
eventlets, with no pre-emption, and some activities that need to be prioritised 
then forcing pre-emption with a sleep(0) seems a pretty small bit of untidy.   
And it works now without a major code refactor.

Always open to other approaches ...

Phil


-Original Message-
From: openstack-bounces+philip.day=hp@lists.launchpad.net 
[mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of 
Chris Behrens
Sent: 02 March 2012 19:00
To: Joshua Harlow
Cc: openstack; Chris Behrens
Subject: Re: [Openstack] eventlet weirdness

It's not just you


On Mar 2, 2012, at 10:35 AM, Joshua Harlow wrote:

 Does anyone else feel that the following seems really dirty, or is it just 
 me.

 adding a few sleep(0) calls in various places in the Nova codebase
 (as was recently added in the _sync_power_states() periodic task) is
 an easy and simple win with pretty much no ill side-effects. :)

 Dirty in that it feels like there is something wrong from a design point of 
 view.
 Sprinkling sleep(0) seems like it's a band-aid on a larger problem imho.
 But that's just my gut feeling.

 :-(

 On 3/2/12 8:26 AM, Armando Migliaccio armando.migliac...@eu.citrix.com 
 wrote:

 I knew you'd say that :P

 There you go: https://bugs.launchpad.net/nova/+bug/944145

 Cheers,
 Armando

  -Original Message-
  From: Jay Pipes [mailto:jaypi...@gmail.com]
  Sent: 02 March 2012 16:22
  To: Armando Migliaccio
  Cc: openstack@lists.launchpad.net
  Subject: Re: [Openstack] eventlet weirdness
 
  On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
   I'd be cautious to say that no ill side-effects were introduced. I
   found a
  race condition right in the middle of sync_power_states, which I
  assume was exposed by breaking the task deliberately.
 
  Such a party-pooper! ;)
 
  Got a link to the bug report for me?
 
  Thanks!
  -jay





Re: [Openstack] eventlet weirdness

2012-03-02 Thread Vishvananda Ishaya

On Mar 2, 2012, at 7:54 AM, Day, Phil wrote:

 By "properly multi-threaded" are you instead referring to making the 
 nova-api server multi-*processed* with eventlet greenthread pools in each 
 process? i.e. The way Swift (and now Glance) works? Or are you referring to 
 a different approach entirely?
 
 Yep - following your posting in here pointing to the glance changes we 
 back-ported that into the Diablo API server.   We're now running each API 
 server with 20 OS processes and 20 EC2 processes, and the world looks a lot 
 happier.  The same changes were being done in parallel into Essex by someone 
 in the community, I thought?

Can you or jay write up what this would entail in nova?  (or even ship a diff) 
Are you using multiprocessing? In general we have had issues combining 
multiprocessing and eventlet, so in our deploys we run multiple api servers on 
different ports and load balance with ha proxy. It sounds like what you have is 
working though, so it would be nice to put it in (perhaps with a flag gate) if 
possible.
 
 Curious... do you have a list of all the places where sleep(0) calls were 
 inserted in the HP Nova code? I can turn that into a bug report and get to 
 work on adding them... 
 
 So far the only two cases we've done this are in the _sync_power_state and  
 in the security group refresh handling 
 (libvirt/firewall/do_refresh_security_group_rules) - which we modified to 
 only refresh for instances in the group and added a sleep in the loop (I need 
 to finish writing the bug report for this one).

Please do this ASAP, I would like to get that fix in.

Vish




Re: [Openstack] eventlet weirdness

2012-03-02 Thread Day, Phil
That sounds a bit over-complicated to me - having a string of tasks sounds like 
you still have to think about what the concurrency is within each step.

There is already a good abstraction around the context of each operation - they 
just (I know - big just) need to be running in something that maps to kernel 
threads rather than user space ones.

All I really want is to allow more than one action to run at the same time.  
So if I have two requests to create a snapshot, why can't they both run at the 
same time and still allow other things to happen? I have all these cores 
sitting in my compute node that could be used, but I'm still having 
to think like a punch-card programmer submitting batch jobs to the mainframe ;-)

Right now creating snapshots is pretty close to a DoS attack on a compute node.


From: Joshua Harlow [mailto:harlo...@yahoo-inc.com]
Sent: 02 March 2012 19:23
To: Day, Phil; Chris Behrens
Cc: openstack
Subject: Re: [Openstack] eventlet weirdness

So a thought I had was: what if the design of a component builds in, as part of 
its design, the ability to be run with threads or with eventlet or with 
processes?

Say you break everything up into tasks (where a task would produce some 
output/result/side-effect).
A set of tasks could complete some action (ie, create a vm).
Subtasks could be the following:
0. Validate credentials
1. Get the image
2. Call into libvirt
3. ...

If these tasks are constructed in a way that makes them stateless, then they 
could be chained together to form an action, and that action could be given 
say to a threaded engine that would know how to execute those tasks with 
threads, or it could be given to an eventlet engine that would do the same 
with eventlet pools/greenthreads/coroutines, or with processes (and so on). This 
could be one way the design of your code abstracts that kind of execution 
(where eventlet is abstracted away from the actual work being done, instead of 
popping up in calls to sleep(0), i.e. the leaky abstraction).

On 3/2/12 11:08 AM, Day, Phil philip@hp.com wrote:
I didn't say it was pretty - Given the choice I'd much rather have a threading 
model that really did concurrency and pre-emption all the right places, and it 
would be really cool if something managed the threads that were started so that 
if a second conflicting request was received it did some proper tidy up or 
blocking rather than just leaving the race condition to work itself out (then 
we wouldn't have to try and control it by checking vm_state).

However ...   In the current code base where we only have user space based 
eventlets, with no pre-emption, and some activities that need to be prioritised 
then forcing pre-emption with a sleep(0) seems a pretty small bit of untidy.   
And it works now without a major code refactor.

Always open to other approaches ...

Phil


-Original Message-
From: openstack-bounces+philip.day=hp@lists.launchpad.net 
[mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of 
Chris Behrens
Sent: 02 March 2012 19:00
To: Joshua Harlow
Cc: openstack; Chris Behrens
Subject: Re: [Openstack] eventlet weirdness

It's not just you


On Mar 2, 2012, at 10:35 AM, Joshua Harlow wrote:

 Does anyone else feel that the following seems really dirty, or is it just 
 me.

 adding a few sleep(0) calls in various places in the Nova codebase
 (as was recently added in the _sync_power_states() periodic task) is
 an easy and simple win with pretty much no ill side-effects. :)

 Dirty in that it feels like there is something wrong from a design point of 
 view.
 Sprinkling sleep(0) seems like it's a band-aid on a larger problem imho.
 But that's just my gut feeling.

 :-(

 On 3/2/12 8:26 AM, Armando Migliaccio armando.migliac...@eu.citrix.com 
 wrote:

 I knew you'd say that :P

 There you go: https://bugs.launchpad.net/nova/+bug/944145

 Cheers,
 Armando

  -Original Message-
  From: Jay Pipes [mailto:jaypi...@gmail.com]
  Sent: 02 March 2012 16:22
  To: Armando Migliaccio
  Cc: openstack@lists.launchpad.net
  Subject: Re: [Openstack] eventlet weirdness
 
  On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
   I'd be cautious to say that no ill side-effects were introduced. I
   found a
  race condition right in the middle of sync_power_states, which I
  assume was exposed by breaking the task deliberately.
 
  Such a party-pooper! ;)
 
  Got a link to the bug report for me?
 
  Thanks!
  -jay




Re: [Openstack] eventlet weirdness

2012-03-02 Thread Johannes Erdfelt
On Fri, Mar 02, 2012, Armando Migliaccio armando.migliac...@eu.citrix.com 
wrote:
 I agree, but then the whole assumption of adopting eventlet to simplify
 the programming model is hindered by the fact that one has to think
 harder about what one is doing... Nova could've kept Twisted for that matter.
 The programming model would have been harder, but at least it would
 have been cleaner and free from icky patching (that's my own opinion
 anyway).

Twisted has a much harder programming model with the same blocking
problem that eventlet has.

 Yes. There is a fine balance to be struck here: do you let potential
 races appear in your system and deal with them on a case-by-case basis,
 or do you introduce mutexes and deal with potential inefficiency
 and/or deadlocks? I'd rather go with the former here.

Neither of these options are acceptable IMO.

If we want to minimize the number of bugs, we should make the task as
easy as possible on the programmer. Constantly trying to track
multiple threads of execution, what races can happen,
and what locking is required will end up with more bugs in the long run.

I'd prioritize correctness over performance. It's easier to optimize when
you're sure the code is correct than the other way around.

I'd like to see a move towards more serialization of actions. For
instance, if all operations on an instance are serialized, then there
are no opportunities to race against other operations on the same
instance.
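
(One way to get that serialization with the primitives already in use; a
sketch, names hypothetical.)

    import collections

    from eventlet import semaphore

    # One lock per instance UUID: operations on the same instance serialize,
    # while operations on different instances still interleave freely.
    _instance_locks = collections.defaultdict(semaphore.Semaphore)

    def run_instance_op(instance_uuid, op):
        with _instance_locks[instance_uuid]:
            return op()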

We can loosen the restrictions when we've identified bottlenecks and
we're sure it's safe to do so.

I'm sure we'll find out that performance is still very good.

JE




Re: [Openstack] eventlet weirdness

2012-03-02 Thread Duncan McGreggor
On Fri, Mar 2, 2012 at 2:40 PM, Johannes Erdfelt johan...@erdfelt.com wrote:
 On Fri, Mar 02, 2012, Armando Migliaccio armando.migliac...@eu.citrix.com 
 wrote:
 I agree, but then the whole assumption of adopting eventlet to simplify
 the programming model is hindered by the fact that one has to think
 harder about what one is doing... Nova could've kept Twisted for that matter.
 The programming model would have been harder, but at least it would
 have been cleaner and free from icky patching (that's my own opinion
 anyway).

 Twisted has a much harder programming model with the same blocking
 problem that eventlet has.

Like so many things that are aesthetic in nature, the statement above
is misleading. Using a callback, event-based, deferred/promise
oriented system is hard for *some*. It is far, far easier for others
(myself included).

It's a matter of perception and personal preference.

It may be apropos to mention that Guido van Rossum himself has stated
that he shares the same view of concurrent programming in Python as
Glyph (the founder of Twisted):
  https://plus.google.com/115212051037621986145/posts/a9SqS7faVWC

Glyph's post, if you can't see that G+ link:
  
http://glyph.twistedmatrix.com/2012/01/concurrency-spectrum-from-callbacks-to.html

One thing to keep in mind is that with Twisted, you always have the
option of deferring to a thread for operations that are not async-friendly.

d



[Openstack] Essex-3 : Nova api calls with keystone doubt

2012-03-02 Thread Alejandro Comisario

Hi openstack list.

Sorry to ask this, but I have a question about how the endpoint config 
in keystone actually works when you make a nova API call (we are 
using Essex-3).


First, let me set up a use case:

user1 - tenant1 - zone1 (private nova endpoint)
user2 - tenant2 - zone2 (private nova endpoint)

So, we know that python-novaclient actually checks for a nova endpoint to 
exist in order to make a request, but what about calling the nova API 
directly (curl, for example)?
We realized that using the tenant1 token to query or create 
instances on zone2 is possible, and with the tenant2 token, it is possible 
to query or create instances on zone1.
And still, the tenant1 token can query and create instances under the tenant2 
id on the resource v1.1/TENANT_ID/server.


So, is there a way to configure keystone / nova to 
actually do what python-novaclient does regarding the sanity check of 
whether there is a nova endpoint associated with the tenant when 
curling the nova-api port?
Second, how can we prevent a token from tenant1 from accessing resources of 
tenant2?


Best regards.
alejandro.


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Yun Mao
Hi Phil, I'm a little confused. To what extent does sleep(0) help?

It only gives the greenlet scheduler a chance to switch to another
green thread. If we are having a CPU bound issue, sleep(0) won't give
us access to any more CPU cores. So the total time to finish should be
the same no matter what. It may improve the fairness among different
green threads but shouldn't help the throughput. I think the only
apparent gain to me is situation such that there is 1 green thread
with long CPU time and many other green threads with small CPU time.
The total finish time will be the same with or without sleep(0), but
with sleep in the first threads, the others should be much more
responsive.

However, it's unclear to me which part of Nova is very CPU intensive.
It seems that most work here is IO bound, including the snapshot. Do
we have other blocking calls besides mysql access? I feel like I'm
missing something but couldn't figure out what.

Thanks,

Yun


On Fri, Mar 2, 2012 at 2:08 PM, Day, Phil philip@hp.com wrote:
 I didn't say it was pretty - Given the choice I'd much rather have a 
 threading model that really did concurrency and pre-emption all the right 
 places, and it would be really cool if something managed the threads that 
 were started so that if a second conflicting request was received it did some 
 proper tidy up or blocking rather than just leaving the race condition to 
 work itself out (then we wouldn't have to try and control it by checking 
 vm_state).

 However ...   In the current code base where we only have user space based 
 eventlets, with no pre-emption, and some activities that need to be 
 prioritised then forcing pre-emption with a sleep(0) seems a pretty small bit 
 of untidy.   And it works now without a major code refactor.

 Always open to other approaches ...

 Phil


 -Original Message-
 From: openstack-bounces+philip.day=hp@lists.launchpad.net 
 [mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of 
 Chris Behrens
 Sent: 02 March 2012 19:00
 To: Joshua Harlow
 Cc: openstack; Chris Behrens
 Subject: Re: [Openstack] eventlet weirdness

 It's not just you


 On Mar 2, 2012, at 10:35 AM, Joshua Harlow wrote:

 Does anyone else feel that the following seems really dirty, or is it just 
 me.

 adding a few sleep(0) calls in various places in the Nova codebase
 (as was recently added in the _sync_power_states() periodic task) is
 an easy and simple win with pretty much no ill side-effects. :)

 Dirty in that it feels like there is something wrong from a design point of 
 view.
 Sprinkling sleep(0) seems like it's a band-aid on a larger problem imho.
 But that's just my gut feeling.

 :-(

 On 3/2/12 8:26 AM, Armando Migliaccio armando.migliac...@eu.citrix.com 
 wrote:

 I knew you'd say that :P

 There you go: https://bugs.launchpad.net/nova/+bug/944145

 Cheers,
 Armando

  -Original Message-
  From: Jay Pipes [mailto:jaypi...@gmail.com]
  Sent: 02 March 2012 16:22
  To: Armando Migliaccio
  Cc: openstack@lists.launchpad.net
  Subject: Re: [Openstack] eventlet weirdness
 
  On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
   I'd be cautious to say that no ill side-effects were introduced. I
   found a
  race condition right in the middle of sync_power_states, which I
  assume was exposed by breaking the task deliberately.
 
  Such a party-pooper! ;)
 
  Got a link to the bug report for me?
 
  Thanks!
  -jay




Re: [Openstack] eventlet weirdness

2012-03-02 Thread Duncan McGreggor
On Fri, Mar 2, 2012 at 3:38 PM, Caitlin Bestler
caitlin.best...@nexenta.com wrote:
 Duncan McGreggor wrote:

Like so many things that are aesthetic in nature, the statement above is 
misleading. Using a callback, event-based, deferred/promise oriented system 
is hard for *some*. It is far, far easier for others (myself included).

It's a matter of perception and personal preference.

 I would also agree that coding your application as a series of responses to 
 events can produce code that is easier to understand and debug.
 And that would be a wonderful discussion if we were starting a new project.

 But I hope that nobody is suggesting that we rewrite all of OpenStack code 
 away from eventlet pseudo-threading after the fact.
 Personally I think it was the wrong decision, but that ship has already 
 sailed.

Agreed.

d



Re: [Openstack] eventlet weirdness

2012-03-02 Thread Jay Pipes

On 03/02/2012 02:27 PM, Vishvananda Ishaya wrote:


On Mar 2, 2012, at 7:54 AM, Day, Phil wrote:


By "properly multi-threaded" are you instead referring to making the nova-api 
server multi-*processed* with eventlet greenthread pools in each process? i.e. The way 
Swift (and now Glance) works? Or are you referring to a different approach entirely?


Yep - following your posting in here pointing to the glance changes we 
back-ported that into the Diablo API server.   We're now running each API 
server with 20 OS processes and 20 EC2 processes, and the world looks a lot 
happier.  The same changes were being done in parallel into Essex by someone in 
the community, I thought?


Can you or jay write up what this would entail in nova?  (or even ship a diff) 
Are you using multiprocessing? In general we have had issues combining 
multiprocessing and eventlet, so in our deploys we run multiple api servers on 
different ports and load balance with ha proxy. It sounds like what you have is 
working though, so it would be nice to put it in (perhaps with a flag gate) if 
possible.


We are not using multiprocessing, no.

We simply start multiple worker processes listening on the same socket, 
with each worker process having an eventlet greenthread pool.


You can see the code (taken from Swift and adapted by Chris Behrens and 
Brian Waldon to use the object-oriented Server approach that 
Glance/Keystone/Nova uses) here:


https://github.com/openstack/glance/blob/master/glance/common/wsgi.py

There is a worker = XXX configuration option that controls the number of 
worker processes created on server startup. A worker value of 0 
indicates to run identically to the way Nova currently runs (one process 
with an eventlet pool of greenthreads)
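
(In outline the pattern looks like this; a simplified sketch, not the actual
Glance code: bind once in the parent, fork, and let each child run its own
eventlet WSGI server on the shared socket.)

    import os

    import eventlet
    import eventlet.wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'hello\n']

    sock = eventlet.listen(('0.0.0.0', 9292))  # bind once, before forking

    WORKERS = 4  # analogous to the workers option described above
    for _ in range(WORKERS):
        if os.fork() == 0:
            # Child: accept on the inherited socket with a greenthread pool.
            eventlet.wsgi.server(sock, app)
            os._exit(0)

    for _ in range(WORKERS):
        os.wait()  # parent just supervises its children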


Best,
-jay



Re: [Openstack] eventlet weirdness

2012-03-02 Thread Jay Pipes

On 03/02/2012 01:35 PM, Joshua Harlow wrote:

Does anyone else feel that the following seems really “dirty”, or is it
just me.

“adding a few sleep(0) calls in various places in the
Nova codebase (as was recently added in the _sync_power_states()
periodic task) is an easy and simple win with pretty much no ill
side-effects. :)”

Dirty in that it feels like there is something wrong from a design point
of view.
Sprinkling “sleep(0)” seems like its a band-aid on a larger problem imho.
But that’s just my gut feeling.


It's not really all that dirty, IMHO. You just have to think of 
greenlet.sleep(0) as manually yielding control back to eventlet...


Like Phil said, in the absence of a non-userspace threading model and 
thread scheduler, there's not a whole lot else one can do other than be 
mindful of what functions/methods may run for long periods of time 
and/or block I/O and call sleep(0) in those scenarios where it makes 
sense to yield a timeslice back to other processes.


While it's true that eventlet (and to an extent Twisted) mask some of 
the complexities involved in non-blocking I/O in a threaded(-like) 
application programming model, I don't think there will be an 
eventlet-that-knows-what-methods-should-yield-and-which-should-be-prioritized 
library any time soon.


-jay



Re: [Openstack] eventlet weirdness

2012-03-02 Thread Jay Pipes

On 03/02/2012 03:38 PM, Caitlin Bestler wrote:

Duncan McGreggor wrote:

Like so many things that are aesthetic in nature, the statement above is 
misleading. Using a callback, event-based, deferred/promise oriented system is 
hard for *some*. It is far, far easier for others (myself included).



It's a matter of perception and personal preference.


I would also agree that coding your application as a series of responses to 
events can produce code that is easier to understand and debug.
And that would be a wonderful discussion if we were starting a new project.

But I hope that nobody is suggesting that we rewrite all of OpenStack code away 
from eventlet pseudo-threading after the fact.
Personally I think it was the wrong decision, but that ship has already sailed.


Yep, that ship has sailed more than 12 months ago.


With event-response coding it is obvious that you have to partition any one 
response into segments that do not take so long to execute that they are 
blocking other events. That remains true when you hide your event-driven model 
with eventlet pseudo-threading. Inserting sleep(0) calls is the most obvious 
way to break up an overly event handler, given that you've already decided to 
obfuscate the code to pretend that it is a thread.


I assume you meant an overly greedy event handler above?

-jay



Re: [Openstack] RHEL 5 & 6 OpenStack image archive...

2012-03-02 Thread David Busby
We currently have OpenStack available in the EPEL testing repo; help with 
testing is appreciated.

Sent from my iPhone

On 2 Mar 2012, at 18:13, Edgar Magana (eperdomo) eperd...@cisco.com wrote:

 Hi Marc,

 I ended up creating my own RHEL 6.1 image. If you want I can share it with 
 you.

 Thanks,

 Edgar Magana
 CTO Cloud Computing
 From: openstack-bounces+eperdomo=cisco@lists.launchpad.net 
 [mailto:openstack-bounces+eperdomo=cisco@lists.launchpad.net] On Behalf 
 Of J. Marc Edwards
 Sent: Friday, March 02, 2012 8:23 AM
 To: openstack@lists.launchpad.net
 Subject: [Openstack] RHEL 5 & 6 OpenStack image archive...
  
 Can someone tell me where these base images are located for use on my 
 OpenStack deployment?
 Kind regards, Marc
 -- 
 
 J. Marc Edwards
 Lead Architect - Semiconductor Design Portals
 Nimbis Services, Inc.
 Skype: (919) 747-3775
 Cell:  (919) 345-1021
 Fax:   (919) 882-8602
 marc.edwa...@nimbisservices.com
 www.nimbisservices.com
 


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Monsyne Dragon

On Mar 2, 2012, at 9:17 AM, Jay Pipes wrote:

 On 03/02/2012 05:34 AM, Day, Phil wrote:
 In our experience (running clusters of several hundred nodes) the DB 
 performance is not generally the significant factor, so making its calls 
 non-blocking  gives only a very small increase in processing capacity and 
 creates other side effects in terms of slowing all eventlets down as they 
 wait for their turn to run.
 
 Yes, I believe I said that this was the case at the last design summit -- or 
 rather, I believe I said "is there any evidence that the database is a 
 performance or scalability problem at all?"
 
 That shouldn't really be surprising given that the Nova DB is pretty small 
 and MySQL is a pretty good DB - throw reasonable hardware at the DB server 
 and give it a bit of TLC from a DBA (remove deleted entries from the DB, add 
 indexes where the slow query log tells you to, etc) and it shouldn't be the 
 bottleneck in the system for performance or scalability.
 
 ++
 
 We use the python driver and have experimented with allowing the eventlet 
 code to make the db calls non-blocking (it's not the default setting), and it 
 works, but didn't give us any significant advantage.
 
 Yep, identical results to the work that Mark Washenberger did on the same 
 subject.
 

Has anyone thought about switching to gevent?   It's similar enough to eventlet 
that the port shouldn't be too bad, and because its event loop is in C 
(libevent), there are C mysql drivers (ultramysql) that will work with it 
without blocking.
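
(For comparison, gevent's entry points map almost one-to-one onto eventlet's;
a minimal sketch.)

    import gevent.monkey
    gevent.monkey.patch_all()  # gevent's analogue of eventlet.monkey_patch()

    import gevent

    def worker(n):
        gevent.sleep(0)  # cooperative yield, like eventlet's sleep(0)
        return n * n

    jobs = [gevent.spawn(worker, i) for i in range(3)]
    gevent.joinall(jobs)
    print([job.value for job in jobs])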



 For example in the API server (before we made it properly multi-threaded)
 
 By "properly multi-threaded" are you instead referring to making the nova-api 
 server multi-*processed* with eventlet greenthread pools in each process? 
 i.e. The way Swift (and now Glance) works? Or are you referring to a 
 different approach entirely?
 
  with blocking db calls the server was essentially a serial processing queue 
  - each request was fully processed before the next.  With non-blocking db 
  calls we got a lot more apparent concurrency, but only at the expense of 
  making all of the requests equally bad.
 
 Yep, not surprising.
 
 Consider a request takes 10 seconds, where after 5 seconds there is a call 
 to the DB which takes 1 second, and three are started at the same time:
 
 Blocking:
 0 - Request 1 starts
 10 - Request 1 completes, request 2 starts
 20 - Request 2 completes, request 3 starts
 30 - Request 3 completes
 Request 1 completes in 10 seconds
 Request 2 completes in 20 seconds
 Request 3 completes in 30 seconds
 Ave time: 20 sec
 
 Non-blocking
 0 - Request 1 Starts
 5 - Request 1 gets to db call, request 2 starts
 10 - Request 2 gets to db call, request 3 starts
 15 - Request 3 gets to db call, request 1 resumes
 19 - Request 1 completes, request 2 resumes
 23 - Request 2 completes,  request 3 resumes
 27 - Request 3 completes
 
 Request 1 completes in 19 seconds  (+ 9 seconds)
 Request 2 completes in 23 seconds (+ 3 seconds)
 Request 3 completes in 27 seconds (- 3 seconds)
 Ave time: 23 sec
 
 So instead of worrying about making db calls non-blocking we've been working 
 to make certain eventlets non-blocking - i.e. add sleep(0) calls to long 
  running iteration loops - which IMO has a much bigger impact on the 
  apparent latency of the system.
 
 Yep, and I think adding a few sleep(0) calls in various places in the Nova 
 codebase (as was recently added in the _sync_power_states() periodic task) is 
 an easy and simple win with pretty much no ill side-effects. :)
 
 Curious... do you have a list of all the places where sleep(0) calls were 
 inserted in the HP Nova code? I can turn that into a bug report and get to 
 work on adding them...
 
 All the best,
 -jay
 
 Phil
 
 
 
 -Original Message-
 From: openstack-bounces+philip.day=hp@lists.launchpad.net 
 [mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf 
 Of Brian Lamar
 Sent: 01 March 2012 21:31
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] eventlet weirdness
 
 How is MySQL access handled in eventlet? Presumably it's an external C
 library so it's not going to be monkey patched. Does that make every
 db access call a blocking call? Thanks,
 
 Nope, it goes through a thread pool.
 
 I feel like this might be an over-simplification. If the question is:
 
 How is MySQL access handled in nova?
 
 The answer would be that we use SQLAlchemy which can load any number of 
 SQL-drivers. These drivers can be either pure Python or C-based drivers. In 
 the case of pure Python drivers, monkey patching can occur and db calls are 
 non-blocking. In the case of drivers which contain C code (or perhaps other 
 blocking calls), db calls will most likely be blocking.
 
 If the question is "How is MySQL access handled in eventlet?" the answer 
 would be to use the eventlet.db_pool module to allow db access using thread 
 pools.
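
A hedged sketch of eventlet.db_pool usage (the host, credentials and query 
are made up): the pool wraps the driver module so that its blocking calls 
run on OS threads, and the calling greenthread yields instead of stalling 
the whole process:

import MySQLdb
from eventlet import db_pool

pool = db_pool.ConnectionPool(MySQLdb, host='localhost', user='nova',
                              passwd='secret', db='nova', max_size=8)
conn = pool.get()
try:
    cursor = conn.cursor()
    cursor.execute("SELECT id, hostname FROM instances WHERE deleted = 0")
    rows = cursor.fetchall()
finally:
    pool.put(conn)  # return the connection for other greenthreads to reuse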
 
 B
 
 -Original Message-
 From: Adam 

Re: [Openstack] eventlet weirdness

2012-03-02 Thread Jay Pipes

On 03/02/2012 04:10 PM, Monsyne Dragon wrote:

Has anyone thought about switching to gevent? It's similar enough to eventlet 
that the port shouldn't be too bad, and because its event loop is in C 
(libevent), there are C MySQL drivers (ultramysql) that will work with it 
without blocking.


Yep, I've thought about doing an experimental branch in Glance to see if 
there's a decent performance benefit. Just got stymied by that damn 24 
hour limit in a day :(


Damn ratelimiting.

-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Joshua Harlow
Why has the ship sailed?
This is software we are talking about, right? There is always a v2 (X-1)
;)

On 3/2/12 12:38 PM, Caitlin Bestler caitlin.best...@nexenta.com wrote:

Duncan McGreggor wrote:


Like so many things that are aesthetic in nature, the statement above is 
misleading. Using a callback, event-based, deferred/promise-oriented system 
is hard for *some*. It is far, far easier for others (myself included).

It's a matter of perception and personal preference.

I would also agree that coding your application as a series of responses to 
events can produce code that is easier to understand and debug.
And that would be a wonderful discussion if we were starting a new project.

But I hope that nobody is suggesting that we rewrite all of OpenStack code away 
from eventlet pseudo-threading after the fact.
Personally I think it was the wrong decision, but that ship has already sailed.

With event-response coding it is obvious that you have to partition any one 
response into segments that do not take so long to execute that they are 
blocking other events. That remains true when you hide your event-driven 
model with eventlet pseudo-threading. Inserting sleep(0) calls is the most 
obvious way to break up an overly long event handler, given that you've 
already decided to obfuscate the code to pretend that it is a thread.



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Duncan McGreggor
On Fri, Mar 2, 2012 at 4:10 PM, Monsyne Dragon mdra...@rackspace.com wrote:

 On Mar 2, 2012, at 9:17 AM, Jay Pipes wrote:

 On 03/02/2012 05:34 AM, Day, Phil wrote:
 In our experience (running clusters of several hundred nodes) the DB 
 performance is not generally the significant factor, so making its calls 
 non-blocking  gives only a very small increase in processing capacity and 
 creates other side effects in terms of slowing all eventlets down as they 
 wait for their turn to run.

 Yes, I believe I said that this was the case at the last design summit -- or 
 rather, I believe I said "is there any evidence that the database is a 
 performance or scalability problem at all?"

 That shouldn't really be surprising given that the Nova DB is pretty small 
 and MySQL is a pretty good DB - throw reasonable hardware at the DB server 
 and give it a bit of TLC from a DBA (remove deleted entries from the DB, 
 add indexes where the slow query log tells you to, etc) and it shouldn't be 
 the bottleneck in the system for performance or scalability.

 ++

 We use the python driver and have experimented with allowing the eventlet 
 code to make the db calls non-blocking (it's not the default setting), and 
 it works, but didn't give us any significant advantage.

 Yep, identical results to the work that Mark Washenberger did on the same 
 subject.


 Has anyone thought about switching to gevent? It's similar enough to 
 eventlet that the port shouldn't be too bad, and because its event loop is 
 in C (libevent), there are C MySQL drivers (ultramysql) that will work with 
 it without blocking.

We've been exploring this possibility at DreamHost, and chatted with
some other stackers about it at various meat-space venues. Fwiw, it's
something we'd be very interested in supporting (starting with as much
test coverage as possible of eventlet's current use in OpenStack, to
ensure as pain-free a transition as possible).

d

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RHEL 5 & 6 OpenStack image archive...

2012-03-02 Thread John Paul Walters
David,

We're currently in the process of building Essex-3/Essex-4 RPMs locally at 
USC/ISI for our heterogeneous OpenStack builds.  When I looked at the EPEL 
testing repo, it looked like the packages that are currently available are 
right around Essex-1.  Are there any plans to update to the more recent 
versions?  Perhaps we could collaborate.

best,
JP



On Mar 2, 2012, at 4:05 PM, David Busby wrote:

 We currently have OpenStack available in the EPEL testing repo; help with 
 testing is appreciated.
 
 Sent from my iPhone
 
 On 2 Mar 2012, at 18:13, Edgar Magana (eperdomo) eperd...@cisco.com wrote:
 
 Hi Marc,
  
 I ended up creating my own RHEL 6.1 image. If you want I can share it with 
 you.
  
 Thanks,
  
 Edgar Magana
 CTO Cloud Computing
  
 From: openstack-bounces+eperdomo=cisco@lists.launchpad.net 
 [mailto:openstack-bounces+eperdomo=cisco@lists.launchpad.net] On Behalf 
 Of J. Marc Edwards
 Sent: Friday, March 02, 2012 8:23 AM
 To: openstack@lists.launchpad.net
 Subject: [Openstack] RHEL 5 & 6 OpenStack image archive...
  
 Can someone tell me where these base images are located for use on my 
 OpenStack deployment?
 Kind regards, Marc
 -- 
 
 J. Marc Edwards
 Lead Architect - Semiconductor Design Portals
 Nimbis Services, Inc.
 Skype: (919) 747-3775
 Cell:  (919) 345-1021
 Fax:   (919) 882-8602
 marc.edwa...@nimbisservices.com
 www.nimbisservices.com
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Vishvananda Ishaya

On Mar 2, 2012, at 12:50 PM, Jay Pipes wrote:
 
 We are not using multiprocessing, no.
 
 We simply start multiple worker processes listening on the same socket, with 
 each worker process having an eventlet greenthread pool.
 
 You can see the code (taken from Swift and adapted by Chris Behrens and Brian 
 Waldon to use the object-oriented Server approach that Glance/Keystone/Nova 
 uses) here:
 
 https://github.com/openstack/glance/blob/master/glance/common/wsgi.py
 
 There is a worker = XXX configuration option that controls the number of 
 worker processes created on server startup. A worker value of 0 indicates to 
 run identically to the way Nova currently runs (one process with an eventlet 
 pool of greenthreads)

This would be excellent to add to nova as an option for performance reasons.  
Especially since you can fall back to the 0 version. I'm always concerned with 
mixing threading and eventlet as it leads to really odd bugs, but it sounds 
like HP has vetted it.  If we keep 0 as the default I don't see any reason why 
it couldn't be added.
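
A simplified sketch of that pre-fork pattern (this is not the Glance code 
linked above, which also handles signals, logging and cleanup): bind the 
socket once, fork N workers, and give each worker its own greenthread pool 
serving the shared socket:

import os
import eventlet
from eventlet import wsgi

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello\n']

sock = eventlet.listen(('0.0.0.0', 9292))  # bind once, before forking
workers = 4  # the "worker = XXX" option; 0 would mean serve in-process

for _ in range(workers):
    if os.fork() == 0:           # child inherits the listening socket
        wsgi.server(sock, app)   # greenthread pool per worker process
        os._exit(0)

for _ in range(workers):
    os.wait()                    # parent just reaps its children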

Vish
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-02 Thread Vishvananda Ishaya

On Mar 2, 2012, at 2:11 PM, Duncan McGreggor wrote:

 On Fri, Mar 2, 2012 at 4:10 PM, Monsyne Dragon mdra...@rackspace.com wrote:
 
 
 Has anyone thought about switching to gevent?   It's similar enough to 
 eventlet that the port shouldn't be too bad, and because it's event loop is 
 in C, (libevent), there are C mysql drivers (ultramysql) that will work with 
 it without blocking.
 
 We've been exploring this possibility at DreamHost, and chatted with
 some other stackers about it at various meat-space venues. Fwiw, it's
 something we'd be very interested in supporting (starting with as much
 test coverage as possible of eventlet's current use in OpenStack, to
 ensure as pain-free a transition as possible).
 
 d

I would be for an experimental try at this.  Based on the experience of 
starting with twisted and moving to eventlet, I can almost guarantee that we 
will run into a new set of issues.  Concurrency is difficult no matter which 
method/library you use and each change brings a new set of challenges.

That said, gevent is similar enough to eventlet that I think we will at least 
be dealing with the same class of problems, so it might be less painful than 
moving to something totally different like threads, multiprocessing, or (back 
to) twisted. If there were significant performance benefits to switching, it 
would be worth exploring.
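
To make the similarity concrete, a hedged sketch of the gevent analogue of 
eventlet's monkey_patch/spawn/waitall idiom:

from gevent import monkey
monkey.patch_all()  # patches socket, time.sleep, etc., much like eventlet's

import time
import gevent

def handle(name):
    time.sleep(1)   # patched: yields to the gevent hub instead of blocking
    print('%s done' % name)

jobs = [gevent.spawn(handle, 'request %d' % i) for i in (1, 2, 3)]
gevent.joinall(jobs)  # all three finish after ~1s of wall-clock time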

I wouldn't want to devote a huge amount of time to this unless we see a 
significant reason to switch, so hopefully Jay gets around to testing it out.

Vish
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack-qa-team] Tempest stable/diablo status

2012-03-02 Thread David Kranz
I have filed nova bugs for all the failures in diablo/stable and posted 
the changes to make everything pass, except for the glance dependency, 
which is covered by https://bugs.launchpad.net/tempest/+bug/944410. We 
decided to make the negative tests (those that fail due to a wrong error 
code) succeed with the current broken diablo behavior, so that they will 
fail, and can be changed, when/if the bugs are ever fixed.


 -David

--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp