Re: [Openstack] nova-compute cannot be started.

2012-03-05 Thread Jesse Andrews
That looks like a line from devstack.

I just did a fresh install of oneiric and ran devstack (kvm) and
didn't see this issue.

Any details?
Jesse

On Sun, Mar 4, 2012 at 11:12 PM, Shang Wu sh...@ubuntu.com wrote:
 What is the environment that you used to deploy this? Did you specify the
 connection_type in the nova.conf file?



 On 12-03-05 01:53 PM, DeadSun wrote:

 when I use "sg libvirtd /data/stack/nova/bin/nova-compute"

 it shows:

 stack@n-cpu-1:~/nova$ 2012-03-05 13:47:56 CRITICAL nova [-] Unknown
 connection type None
 (nova): TRACE: Traceback (most recent call last):
 (nova): TRACE: File "/data/stack/nova/bin/nova-compute", line 47, in <module>
 (nova): TRACE: server = service.Service.create(binary='nova-compute')
 (nova): TRACE: File "/data/stack/nova/nova/service.py", line 241, in create
 (nova): TRACE: report_interval, periodic_interval)
 (nova): TRACE: File "/data/stack/nova/nova/service.py", line 150, in __init__
 (nova): TRACE: self.manager = manager_class(host=self.host, *args, **kwargs)
 (nova): TRACE: File "/data/stack/nova/nova/compute/manager.py", line 198, in __init__
 (nova): TRACE: utils.import_object(compute_driver),
 (nova): TRACE: File "/data/stack/nova/nova/utils.py", line 87, in import_object
 (nova): TRACE: return cls()
 (nova): TRACE: File "/data/stack/nova/nova/virt/connection.py", line 82, in get_connection
 (nova): TRACE: raise Exception('Unknown connection type %s' % t)
 (nova): TRACE: Exception: Unknown connection type None
 (nova): TRACE:


 How can I fix it?

 --
 Without detachment there is no way to clarify one's aims; without tranquility there is no way to reach far.


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 --
 Shang Wu





Re: [Openstack] OpenStack Projects Development History Visualisation

2012-03-05 Thread Razique Mahroua
Wow - fantastic videos! Awesome :)
Nuage & Co - Razique Mahroua razique.mahr...@gmail.com

On 5 March 2012 at 03:51, Armaan wrote:

 Hello,

 I have created a few videos visualising the development history of various OpenStack projects. Links for the videos are given below:

 (1) Nova:     http://www.youtube.com/watch?gl=IN&v=l5PrjqezhgI
 (2) Swift:    http://www.youtube.com/watch?v=vFO4ibrgWTs
 (3) Horizon:  http://www.youtube.com/watch?v=5G27xsCrm7A
 (4) Keystone: http://www.youtube.com/watch?v=aRDRNuFfYGo
 (5) Glance:   http://www.youtube.com/watch?v=Gjl6izUjJs0

 Best Regards
 Syed Armani


Re: [Openstack] nova-compute cannot be started.

2012-03-05 Thread DeadSun
Yes, I missed the connection_type in the nova.conf file, because I am editing
the devstack scripts to deploy a mini multi-node installation.

Thanks all.
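For anyone hitting the same trace: the missing flag is connection_type. A minimal sketch of the relevant nova.conf lines, in the Essex-era flag-file style (values are illustrative; pick the hypervisor you actually run):

```ini
# nova.conf (sketch; the driver behind each value must be installed)
--connection_type=libvirt
--libvirt_type=kvm
```

With connection_type unset, nova.virt.connection.get_connection() falls through to the "Unknown connection type None" exception shown in the traceback above.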

2012/3/5 Jesse Andrews anotherje...@gmail.com

 That looks like a line from devstack.

 I just did a fresh install of oneiric and ran devstack (kvm) and
 didn't see this issue.

 Any details?
 Jesse

 On Sun, Mar 4, 2012 at 11:12 PM, Shang Wu sh...@ubuntu.com wrote:
  What is the environment that you used to deploy this? Did you specify the
  connection_type in the nova.conf file?
 
 
 
  On 12-03-05 01:53 PM, DeadSun wrote:
 
  when I use "sg libvirtd /data/stack/nova/bin/nova-compute"
 
  it shows:
 
  stack@n-cpu-1:~/nova$ 2012-03-05 13:47:56 CRITICAL nova [-] Unknown
  connection type None
  (nova): TRACE: Traceback (most recent call last):
  (nova): TRACE: File "/data/stack/nova/bin/nova-compute", line 47, in <module>
  (nova): TRACE: server = service.Service.create(binary='nova-compute')
  (nova): TRACE: File "/data/stack/nova/nova/service.py", line 241, in create
  (nova): TRACE: report_interval, periodic_interval)
  (nova): TRACE: File "/data/stack/nova/nova/service.py", line 150, in __init__
  (nova): TRACE: self.manager = manager_class(host=self.host, *args, **kwargs)
  (nova): TRACE: File "/data/stack/nova/nova/compute/manager.py", line 198, in __init__
  (nova): TRACE: utils.import_object(compute_driver),
  (nova): TRACE: File "/data/stack/nova/nova/utils.py", line 87, in import_object
  (nova): TRACE: return cls()
  (nova): TRACE: File "/data/stack/nova/nova/virt/connection.py", line 82, in get_connection
  (nova): TRACE: raise Exception('Unknown connection type %s' % t)
  (nova): TRACE: Exception: Unknown connection type None
  (nova): TRACE:
 
 
  How can I fix it?
 
  --
  Without detachment there is no way to clarify one's aims; without tranquility there is no way to reach far.
 
 
 
 
 
  --
  Shang Wu
 
 




-- 
Without detachment there is no way to clarify one's aims; without tranquility there is no way to reach far.


Re: [Openstack] OpenStack Projects Development History Visualisation

2012-03-05 Thread Armaan
 Thanks everyone, I am delighted that you liked them.
 @Jake Dahn: I used gource: http://code.google.com/p/gource/
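For those curious, a typical gource-to-video pipeline looks roughly like the following (a sketch; the repository URL, timing flags, and encoder settings are illustrative, not the exact ones Syed used):

```shell
# Render a repo's commit history to a video by piping gource's
# frame stream into ffmpeg (flags are illustrative)
git clone https://github.com/openstack/nova.git
cd nova
gource --seconds-per-day 0.1 -1280x720 -o - | \
  ffmpeg -y -r 60 -f image2pipe -vcodec ppm -i - \
         -vcodec libx264 nova-history.mp4
```

gource reads the git history of the working directory and animates contributors as they touch files over time.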

Best Regards
Syed Armani




On Mon, Mar 5, 2012 at 8:39 AM, Jake Dahn j...@ansolabs.com wrote:

 These are awesome!

 How did you make them?

 On Mar 4, 2012, at 6:51 PM, Armaan wrote:


 Hello,

 I have created a few videos visualising the development history of various
 OpenStack projects. Links for the videos are given below:

 (1)Nova:   http://www.youtube.com/watch?gl=IN&v=l5PrjqezhgI
 (2)Swift:   http://www.youtube.com/watch?v=vFO4ibrgWTs
 (3)Horizon:  http://www.youtube.com/watch?v=5G27xsCrm7A
 (4)Keystone: http://www.youtube.com/watch?v=aRDRNuFfYGo
 (5)Glance:  http://www.youtube.com/watch?v=Gjl6izUjJs0

 Best Regards
 Syed Armani






Re: [Openstack] OpenStack Projects Development History Visualisation

2012-03-05 Thread Jesse Andrews
It would be neat to see one with all the projects together - in HD.

Since many contributors work on all the projects, we would see people
zooming all around the screen.

Is this possible?

On Mon, Mar 5, 2012 at 3:03 AM, Armaan dce3...@gmail.com wrote:


  Thanks everyone, I am delighted that you liked them.
  @Jake Dahn: I used gource: http://code.google.com/p/gource/

 Best Regards
 Syed Armani




 On Mon, Mar 5, 2012 at 8:39 AM, Jake Dahn j...@ansolabs.com wrote:

 These are awesome!

 How did you make them?

 On Mar 4, 2012, at 6:51 PM, Armaan wrote:


 Hello,

 I have created a few videos visualising the development history of various
 OpenStack projects. Links for the videos are given below:

 (1)Nova:   http://www.youtube.com/watch?gl=IN&v=l5PrjqezhgI
 (2)Swift:   http://www.youtube.com/watch?v=vFO4ibrgWTs
 (3)Horizon:  http://www.youtube.com/watch?v=5G27xsCrm7A
 (4)Keystone: http://www.youtube.com/watch?v=aRDRNuFfYGo
 (5)Glance:  http://www.youtube.com/watch?v=Gjl6izUjJs0

 Best Regards
 Syed Armani









Re: [Openstack] Doc Day March 6th - the plan the plan

2012-03-05 Thread Yuriy Taraday
Hello!

I am about to start writing documentation on the Nexenta driver, but I don't see
any place where I should add it. Am I missing something? Is there any
documentation on nova-volume?


Kind regards, Yuriy.



On Sat, Mar 3, 2012 at 02:30, Anne Gentle a...@openstack.org wrote:
 Hi all -

 Please consider this your invitation to take the day on Tuesday to
 work on docs, for any audience, but with special attention to the top
 priority gaps for Essex.

 I've started an Etherpad to work through the priorities list for Doc
 Day, feel free to edit as you see fit.
 http://etherpad.openstack.org/DocDayMarch6

 Also the doc bugs list is ready for the day, pick anything up on here
 that's not In progress and go to town.
 https://bugs.launchpad.net/openstack-manuals

 I'd be happy to walk through doc processes the morning of the Doc Day,
 and I'll be on IRC all day to answer questions. I'll be in an
 #openstack-docday channel for the day.

 In person, I'll be ensconced in the lovely Rackspace San Francisco
 offices. You can RSVP at
 http://www.meetup.com/openstack/events/52466462/ if you want to join
 us during the day. Thanks to all the interest and discussion already!

 Looking forward to another great doc day. I'm feeling especially good
 about the state of this release and appreciative of all the hard work
 so far.

 Thanks,
 Anne




Re: [Openstack] eventlet weirdness

2012-03-05 Thread Kapil Thangavelu
Excerpts from Mark Washenberger's message of 2012-03-04 23:34:03 -0500:
 While we are on the topic of api performance and the database, I have a
 few thoughts I'd like to share.
 
 TL;DR:
 - we should consider refactoring our wsgi server to leverage multiple
   processors
  - we could leverage compute-cell database responsibility separation
    to speed up our api database performance by several orders of magnitude
 
 I think the main way eventlet holds us back right now is that we have
 such low utilization. The big jump with multiprocessing or threading
 would be the potential to leverage more powerful hardware. Currently
 nova-api probably wouldn't run any faster on bare metal than it would
 run on an m1.tiny. Of course, this isn't an eventlet limitation per se
 but rather we are limiting ourselves to eventlet single-processing
 performance with our wsgi server implementation.


This seems fairly easily remedied without code changes by using something
like gunicorn (in multi-process, single-socket mode as a wsgi frontend), or any
generic load balancer against multiple processes. But it's of limited utility unless the
individual processes can handle concurrency scenarios greater than 1.
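As a sketch of that idea, a gunicorn front-end forking several workers over one shared socket might look like this ("nova.wsgi:application" is a hypothetical entry point, not nova's actual module path):

```shell
# Multi-process, single-socket WSGI frontend (illustrative sketch).
# Four workers accept() on the same bound socket; the kernel load-balances.
gunicorn --workers 4 --bind 0.0.0.0:8774 nova.wsgi:application
```

Each worker is still a single eventlet/greenlet-style process internally, which is why the caveat about per-process concurrency above still applies.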

I'm a bit skeptical about the use of multiprocessing; it imposes its own set of
constraints and problems. Interestingly, using something like zmq (again with its own
issues, but more robust imo than multiprocessing) allows for transparency from
single-process ipc to network ipc without the file handle and event loop
inheritance concerns of something like multiprocessing.


 
 However, the greatest performance improvement I see would come from
 streamlining the database interactions incurred on each nova-api
 request. We have been pretty fast-and-loose with adding database
 and glance calls to the openstack api controllers and compute api.
 I am especially thinking of the extension mechanism, which tends
 to require another database call for each /servers extension a
 deployer chooses to enable.
 
 But, if we think in ideal terms, each api request should perform
 no more than 1 database call for queries, and no more than 2 db calls
 for commands (validation + initial creation). In addition, I can
 imagine an implementation where these database calls don't have any
 joins, and involve no more than one network roundtrip.


Is there any debug tooling around api endpoints that can identify these calls,
à la some of the wsgi middleware targeted towards web apps (i.e. debug toolbars)?

 
 Beyond refactoring the way we add in data for response extensions,
 I think the right way to get this database performance is to make the
 compute-cells approach the norm. In this approach, there are
 at least two nova databases, one which lives along with the nova-api
 nodes, and one that lives in a compute cell. The api database is kept
 up to date through asynchronous updates that bubble up from the
 compute cells. With this separation, we are free to tailor the schema
 of the api database to match api performance needs, while we tailor
 the schema of the compute cell database to the operational requirements
 of compute workers. In particular, we can completely denormalize the
 tables in the api database without creating unpleasant side effects
 in the compute manager code. This denormalization both means fewer
 database interactions and fewer joins (which likely matters for larger
 deployments).
 
 If we partner this streamlining and denormalization approach with
 similar attentions to glance performance and an rpc implementation
 that writes to disk and returns, processing network activities in
 the background, I think we could get most api actions to < 10 ms on
 reasonable hardware.
 
 As much as the initial push on compute-cells is about scale, I think
 it could enable major performance improvements directly on its heels
 during the Folsom cycle. This is something I'd love to talk about more
 at the conference if anyone has any interest.
 

Sounds interesting, but potentially complex, with schema and data drift
possibilities.

cheers,

Kapil



Re: [Openstack] OpenStack Projects Development History Visualisation

2012-03-05 Thread Ziad Sawalha
Really cool! Thanks, Syed.

We should have these running at the keynote at the conference while everyone is 
waiting to get started :-)

From: Armaan dce3...@gmail.com
Date: Mon, 5 Mar 2012 08:21:54 +0530
To: openstack@lists.launchpad.net
Subject: [Openstack] OpenStack Projects Development History Visualisation


Hello,

I have created a few videos visualising the development history of various 
OpenStack projects. Links for the videos are given below:

(1)Nova:   http://www.youtube.com/watch?gl=IN&v=l5PrjqezhgI
(2)Swift:   http://www.youtube.com/watch?v=vFO4ibrgWTs
(3)Horizon:  http://www.youtube.com/watch?v=5G27xsCrm7A
(4)Keystone: http://www.youtube.com/watch?v=aRDRNuFfYGo
(5)Glance:  http://www.youtube.com/watch?v=Gjl6izUjJs0

Best Regards
Syed Armani



Re: [Openstack] eventlet weirdness

2012-03-05 Thread Day, Phil
Hi Yun,

The point of the sleep(0) is to explicitly yield from a long-running eventlet
so that other eventlets aren't blocked for a long period.   Depending on how
you look at it, that either means we're making an explicit judgement on priority,
or trying to provide a more equal sharing of run-time across eventlets.

It's not that things are CPU bound as such - more just that eventlets have
very few pre-emption points.    Even an IO-bound activity like creating a
snapshot won't cause an eventlet switch.

So in terms of priority we're trying to get to the state where:
 - Important periodic events (such as service status) run when expected  (if 
these take a long time we're stuffed anyway)
 - User initiated actions don't get blocked by background system eventlets 
(such as refreshing power-state)
- Slow actions from one user don't block actions from other users (the first 
user will expect their snapshot to take X seconds, the second one won't expect 
their VM creation to take X + Y seconds).

It almost feels like the right level of concurrency would be to have a 
task/process running for each VM, so that there is concurrency across 
un-related VMs, but serialisation for each VM.

Phil 
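The explicit-yield behaviour Phil describes can be illustrated with stdlib asyncio rather than eventlet (the frameworks differ, but sleep(0) plays the same role in both: a cooperative yield point inserted into an otherwise uninterruptible stretch of work):

```python
import asyncio

order = []

async def long_task():
    # Simulates a long-running greenlet/coroutine: without the sleep(0),
    # the whole loop would run to completion before anything else gets in.
    for i in range(3):
        order.append(f"long-{i}")   # a chunk of CPU-bound work
        await asyncio.sleep(0)      # explicit yield to the scheduler

async def short_task():
    order.append("short")           # a quick user-initiated action

async def main():
    await asyncio.gather(long_task(), short_task())

asyncio.run(main())
print(order)  # short_task runs after the first chunk, not after all three
```

The total work is unchanged, but the short task's latency drops from "after the whole loop" to "after one chunk", which is exactly the fairness-versus-throughput trade-off discussed in this thread.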

-Original Message-
From: Yun Mao [mailto:yun...@gmail.com] 
Sent: 02 March 2012 20:32
To: Day, Phil
Cc: Chris Behrens; Joshua Harlow; openstack
Subject: Re: [Openstack] eventlet weirdness

Hi Phil, I'm a little confused. To what extent does sleep(0) help?

It only gives the greenlet scheduler a chance to switch to another green 
thread. If we are having a CPU bound issue, sleep(0) won't give us access to 
any more CPU cores. So the total time to finish should be the same no matter 
what. It may improve the fairness among different green threads but shouldn't 
help the throughput. I think the only apparent gain to me is a situation where 
there is 1 green thread with long CPU time and many other green threads 
with small CPU time.
The total finish time will be the same with or without sleep(0), but with sleep 
in the first threads, the others should be much more responsive.

However, it's unclear to me which part of Nova is very CPU intensive.
It seems that most work here is IO bound, including the snapshot. Do we have 
other blocking calls besides mysql access? I feel like I'm missing something 
but couldn't figure out what.

Thanks,

Yun


On Fri, Mar 2, 2012 at 2:08 PM, Day, Phil philip@hp.com wrote:
 I didn't say it was pretty - Given the choice I'd much rather have a 
 threading model that really did concurrency and pre-emption in all the right 
 places, and it would be really cool if something managed the threads that 
 were started, so that if a second conflicting request was received it did some 
 proper tidy-up or blocking rather than just leaving the race condition to 
 work itself out (then we wouldn't have to try and control it by checking 
 vm_state).

 However ...   In the current code base, where we only have user-space based 
 eventlets, with no pre-emption, and some activities that need to be 
 prioritised, forcing pre-emption with a sleep(0) seems a pretty small bit 
 of untidiness.   And it works now without a major code refactor.

 Always open to other approaches ...

 Phil


 -Original Message-
 From: openstack-bounces+philip.day=hp@lists.launchpad.net 
 [mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On 
 Behalf Of Chris Behrens
 Sent: 02 March 2012 19:00
 To: Joshua Harlow
 Cc: openstack; Chris Behrens
 Subject: Re: [Openstack] eventlet weirdness

 It's not just you


 On Mar 2, 2012, at 10:35 AM, Joshua Harlow wrote:

 Does anyone else feel that the following seems really dirty, or is it just 
 me?

 adding a few sleep(0) calls in various places in the Nova codebase 
 (as was recently added in the _sync_power_states() periodic task) is 
 an easy and simple win with pretty much no ill side-effects. :)

 Dirty in that it feels like there is something wrong from a design point of 
 view.
 Sprinkling sleep(0) seems like its a band-aid on a larger problem imho.
 But that's just my gut feeling.

 :-(

 On 3/2/12 8:26 AM, Armando Migliaccio armando.migliac...@eu.citrix.com 
 wrote:

 I knew you'd say that :P

 There you go: https://bugs.launchpad.net/nova/+bug/944145

 Cheers,
 Armando

  -Original Message-
  From: Jay Pipes [mailto:jaypi...@gmail.com]
  Sent: 02 March 2012 16:22
  To: Armando Migliaccio
  Cc: openstack@lists.launchpad.net
  Subject: Re: [Openstack] eventlet weirdness
 
  On 03/02/2012 10:52 AM, Armando Migliaccio wrote:
   I'd be cautious to say that no ill side-effects were introduced. 
   I found a
  race condition right in the middle of sync_power_states, which I 
  assume was exposed by breaking the task deliberately.
 
  Such a party-pooper! ;)
 
  Got a link to the bug report for me?
 
  Thanks!
  -jay


Re: [Openstack] eventlet weirdness

2012-03-05 Thread Day, Phil
 However I'd like to point out that the math below is misleading (the average 
 time for the non-blocking case is also miscalculated but 
 it's not my point). The number that matters more in real life is throughput. 
 For the blocking case it's 3/30 = 0.1 request per second.

I think it depends on whether you are trying to characterise system performance 
(processing time) or perceived user experience (queuing time + processing 
time).   My users are kind of selfish in that they don't care how many 
transactions per second I can get through,  just how long it takes for them to 
get a response from when they submit the request.

Making the DB calls non-blocking does help a very small bit in driving up API 
server utilisation  - but my point was that time spent in the DB is such a 
small part of the total time in the API server that it's not the thing that 
needs to be optimised first. 

Any queuing system will explode when its utilisation approaches 100%, blocking 
or not.   Moving to non-blocking just means that you can hit 100% utilisation 
in the API server with 2 concurrent requests instead of *only* being able to 
hit 90+% with one transaction.   That's not a great leap forward in my 
perception.

Phil

-Original Message-
From: Yun Mao [mailto:yun...@gmail.com] 
Sent: 03 March 2012 01:11
To: Day, Phil
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] eventlet weirdness

First I agree that having blocking DB calls is no big deal given the way Nova 
uses mysql and reasonably powerful db server hardware.

However I'd like to point out that the math below is misleading (the average 
time for the nonblocking case is also miscalculated but it's not my point). The 
number that matters more in real life is throughput. For the blocking case it's 
3/30 = 0.1 request per second.
For the non-blocking case it's 3/27=0.11 requests per second. That means if 
there is a request coming in every 9 seconds constantly, the blocking system 
will eventually explode but the nonblocking system can still handle it. 
Therefore, the non-blocking one should be preferred.
Thanks,

Yun


 For example in the API server (before we made it properly multi-threaded) 
 with blocking db calls the server was essentially a serial processing queue - 
 each request was fully processed before the next.  With non-blocking db calls 
 we got a lot more apparent concurrency, but only at the expense of making all 
 of the requests equally bad.

 Consider a request takes 10 seconds, where after 5 seconds there is a call to 
 the DB which takes 1 second, and three are started at the same time:

 Blocking:
 0 - Request 1 starts
 10 - Request 1 completes, request 2 starts
 20 - Request 2 completes, request 3 starts
 30 - Request 3 completes
 Request 1 completes in 10 seconds
 Request 2 completes in 20 seconds
 Request 3 completes in 30 seconds
 Ave time: 20 sec


 Non-blocking
 0 - Request 1 Starts
 5 - Request 1 gets to db call, request 2 starts
 10 - Request 2 gets to db call, request 3 starts
 15 - Request 3 gets to db call, request 1 resumes
 19 - Request 1 completes, request 2 resumes
 23 - Request 2 completes,  request 3 resumes
 27 - Request 3 completes

 Request 1 completes in 19 seconds (+ 9 seconds)
 Request 2 completes in 24 seconds (+ 4 seconds)
 Request 3 completes in 27 seconds (- 3 seconds)
 Ave time: 20 sec

 So instead of worrying about making db calls non-blocking we've been working 
 to make certain eventlets non-blocking - i.e. add sleep(0) calls to long 
 running iteration loops - which IMO has a much bigger impact on the 
 apparent latency of the system.

Thanks for the explanation. Let me see if I understand this.



Re: [Openstack] Memory leaks from greenthreads

2012-03-05 Thread Pádraig Brady
On 02/29/2012 09:08 PM, Johannes Erdfelt wrote:
 On Wed, Feb 29, 2012, Vishvananda Ishaya vishvana...@gmail.com wrote:
 We have had a memory leak due to an interaction with eventlet for a
 while that Johannes has just made a fix for.

 bug:
 https://bugs.launchpad.net/nova/+bug/903199

 fix:
 https://bitbucket.org/which_linden/eventlet/pull-request/10/monkey-patch-threadingcurrent_thread-as

 Unfortunately, I don't think we have a decent workaround for nova
 while that patch is upstreamed.  I wanted to make sure that all of
 the distros are aware of it in case they want to carry an eventlet
 patch to prevent the slow memory leak.
 
 There is one other possible workaround, but I didn't feel like it was
 safe since we would likely need to audit all of the third party libraries
 to ensure they don't cause problems.
 
 The memory leak only happens when monkey patching the
 thread/threading/Queue modules. I looked at the nova sources and did
 some tests and it doesn't appear nova needs those modules patches.
 
 However, third party modules might need it. Also, I only tested on
 xenapi. libvirt and/or vmwareapi might have problems. Or possibly other
 drivers (firewall, volume, etc) in nova that I didn't use in my tests.
 
 If you're having problems with the memory leak in eventlet and applying
 the patch isn't an option, then monkey patching everything but thread
 might be something worth trying.
 
 eventlet.monkey_patch(os=True, socket=True, time=True)

Note I tried the patch but got errors:

# rpm -qa python-greenlet python-eventlet
python-eventlet-0.9.16-4.fc17.noarch
python-greenlet-0.3.1-9.fc17.x86_64

# /usr/bin/nova-cert --flagfile /dev/null --config-file /etc/nova/nova.conf 
--logfile /var/log/nova/cert.log
2012-03-05 15:09:09 AUDIT nova.service [-] Starting cert node (version 
2012.1-LOCALBRANCH:LOCALREVISION)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 336, in fire_timers
    timer()
  File "/usr/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 56, in __call__
    cb(*args, **kw)
  File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 192, in main
    result = function(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/service.py", line 101, in run_server
    server.start()
  File "/usr/lib/python2.7/site-packages/nova/service.py", line 176, in start
    self.conn = rpc.create_connection(new=True)
  File "/usr/lib/python2.7/site-packages/nova/rpc/__init__.py", line 47, in create_connection
    return _get_impl().create_connection(new=new)
  File "/usr/lib/python2.7/site-packages/nova/rpc/impl_qpid.py", line 507, in create_connection
    return rpc_amqp.create_connection(new, Connection.pool)
  File "/usr/lib/python2.7/site-packages/nova/rpc/amqp.py", line 310, in create_connection
    return ConnectionContext(connection_pool, pooled=not new)
  File "/usr/lib/python2.7/site-packages/nova/rpc/amqp.py", line 84, in __init__
    server_params=server_params)
  File "/usr/lib/python2.7/site-packages/nova/rpc/impl_qpid.py", line 294, in __init__
    self.connection = qpid.messaging.Connection(self.broker)
  File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 178, in __init__
    self._driver = Driver(self)
  File "/usr/lib/python2.7/site-packages/qpid/messaging/driver.py", line 347, in __init__
    self._selector = Selector.default()
  File "/usr/lib/python2.7/site-packages/qpid/selector.py", line 54, in default
    sel.start()
  File "/usr/lib/python2.7/site-packages/qpid/selector.py", line 99, in start
    self.thread = Thread(target=self.run)
  File "/usr/lib64/python2.7/threading.py", line 446, in __init__
    self.__daemonic = self._set_daemon()
  File "/usr/lib64/python2.7/threading.py", line 470, in _set_daemon
    return current_thread().daemon
AttributeError: '_GreenThread' object has no attribute 'daemon'
2012-03-05 15:09:10 CRITICAL nova [-] '_GreenThread' object has no attribute 'daemon'
Exception KeyError: KeyError(139994820976048,) in module 'threading' from '/usr/lib64/python2.7/threading.pyc' ignored
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/usr/lib/python2.7/site-packages/qpid/selector.py", line 138, in stop
    self.thread.join(timeout)
AttributeError: 'NoneType' object has no attribute 'join'
Error in sys.exitfunc:
2012-03-05 15:09:10 CRITICAL nova [-] 'NoneType' object has no attribute 'join'

cheers,
Pádraig.




Re: [Openstack] Memory leaks from greenthreads

2012-03-05 Thread Johannes Erdfelt
On Mon, Mar 05, 2012, Pádraig Brady p...@draigbrady.com wrote:
    File "/usr/lib64/python2.7/threading.py", line 446, in __init__
      self.__daemonic = self._set_daemon()
    File "/usr/lib64/python2.7/threading.py", line 470, in _set_daemon
      return current_thread().daemon
  AttributeError: '_GreenThread' object has no attribute 'daemon'
  2012-03-05 15:09:10 CRITICAL nova [-] '_GreenThread' object has no attribute
  'daemon'

That was fixed a couple of days ago. Can you confirm you have the latest
patch from the pull request?

JE




Re: [Openstack] OpenStack Projects Development History Visualisation

2012-03-05 Thread Duncan McGreggor
+1 :-)

d

On Mon, Mar 5, 2012 at 10:30 AM, Ziad Sawalha
ziad.sawa...@rackspace.com wrote:
 Really cool! Thanks, Syed.

 We should have these running at the keynote at the conference while everyone
 is waiting to get started :-)

 From: Armaan dce3...@gmail.com
 Date: Mon, 5 Mar 2012 08:21:54 +0530
 To: openstack@lists.launchpad.net
 Subject: [Openstack] OpenStack Projects Development History Visualisation


 Hello,

 I have created a few videos visualising the development history of various
 OpenStack projects. Links for the videos are given below:

 (1)Nova:   http://www.youtube.com/watch?gl=IN&v=l5PrjqezhgI
 (2)Swift:   http://www.youtube.com/watch?v=vFO4ibrgWTs
 (3)Horizon:  http://www.youtube.com/watch?v=5G27xsCrm7A
 (4)Keystone: http://www.youtube.com/watch?v=aRDRNuFfYGo
 (5)Glance:  http://www.youtube.com/watch?v=Gjl6izUjJs0

 Best Regards
 Syed Armani






Re: [Openstack] eventlet weirdness

2012-03-05 Thread Eric Windisch
 an rpc implementation that writes to disk and returns,

A what? I'm not sure what problem you're looking to solve here or what you 
think the RPC mechanism should do. Perhaps you're speaking of a Kombu or AMQP 
specific improvement?

There is no absolute need for persistence or durability in RPC. I've done quite 
a bit of analysis of this requirement and it simply isn't necessary. AMQP has 
some need for it due to implementation-specific issues, though those are not 
necessarily unsolvable. In any case, these problems simply do not exist for all 
RPC implementations...

-- 
Eric Windisch






Re: [Openstack] OpenStack PPA team

2012-03-05 Thread Jay Pipes

On 03/05/2012 09:05 AM, Thierry Carrez wrote:

Thierry Carrez wrote:

To have a rough idea of what we plan to do, you can have a look at the
bugs at [3]. In particular, we plan to deprecate the release PPAs
since they carry a false expectation of being maintained with stable
branch updates and being production-ready. We also plan to regroup the PPAs
(avoid having separate PPAs per project) to avoid duplication of effort
and stale PPAs.


Some progress here:

As a first step, as of E4 we completed the creation of the new common
PPAs under ~openstack-ppa. You can see pointers to the new PPA structure at:

http://wiki.openstack.org/PPAs

Next steps are to add keystone and horizon to those common PPAs.

We'll proceed with deprecating the old ppa:PROJECT-core/* PPAs as well
as the ppa:openstack-release/* PPAs soon.


Thanks Thierry, this is very helpful.

Cheers,
-jay



Re: [Openstack] OpenStack Projects Development History Visualisation

2012-03-05 Thread Thierry Carrez
Duncan McGreggor wrote:
 +1 :-)
 
 d
 
 On Mon, Mar 5, 2012 at 10:30 AM, Ziad Sawalha
 ziad.sawa...@rackspace.com wrote:
 Really cool! Thanks, Syed.

 We should have these running at the keynote at the conference while everyone
 is waiting to get started :-)

If Syed makes a common version that runs for a few minutes in HD, as
Jesse suggested, I'll make sure we use that for the summit at least :)

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



[Openstack] OpenStack Governance Elections Spring 2012 Results

2012-03-05 Thread Stefano Maffulli
In case you missed the announcement over the weekend on the blog
http://www.openstack.org/blog/2012/03/openstack-governance-elections-spring-2012-results/

the OpenStack community has elected the Project Technical Leads and two
members of the Project Policy Board. Here are the winners:


NOVA Project Technical Lead (1 position)

  * Official poll results

http://www.cs.cornell.edu/w8/~andru/cgi-perl/civs/control.pl?id=E_db67df60654f6c8e&key=83b8ba2aeca3c8fd&akey=f52079a1711dd46e


Winner:


  * Vish Ishaya

http://wiki.openstack.org/Governance/ElectionsSpring2012/Vishvananda_Ishaya

KEYSTONE Project Technical Lead (1 position)

  * Official poll results

http://www.cs.cornell.edu/w8/~andru/cgi-perl/civs/control.pl?id=E_b16565f593d90e7a&key=6aa1fc41112e0ab9&akey=832fde7cb1ac9844


Winner:


  * Joe Heck
http://wiki.openstack.org/Governance/ElectionsSpring2012/Joe_Heck

HORIZON Project Technical Lead (1 position)

  * Official poll results

http://www.cs.cornell.edu/w8/~andru/cgi-perl/civs/control.pl?id=E_ddfa1b2ba9081972&key=2061d1424429848e&akey=3d9462bb60a3e4a5


Winner:


  * Devin Carlen
http://wiki.openstack.org/Governance/ElectionsSpring2012/Devin_Carlen

SWIFT Project Technical Lead (1 position)

  * Only one candidate


Winner:


  * John Dickinson
http://wiki.openstack.org/Governance/ElectionsSpring2012/John_Dickinson

GLANCE Project Technical Lead (1 position)

  * Official poll results

http://www.cs.cornell.edu/w8/~andru/cgi-perl/civs/control.pl?id=E_1ad8e80d4a64970b&key=2b5635216968b81c&akey=49d4120cc138d257


Winner:


  * Brian Waldon
http://wiki.openstack.org/Governance/ElectionsSpring2012/Brian_Waldon

PROJECT POLICY BOARD (2 positions)

  * Official poll results

http://www.cs.cornell.edu/w8/~andru/cgi-perl/civs/control.pl?id=E_149094a7291d1baf&key=2a1372221dee1584&akey=b9d9bcbc90bb7d75


Winners:


  * Thierry Carrez
http://wiki.openstack.org/Governance/ElectionsSpring2012/Thierry_Carrez
  * Jay Pipes


Congratulations to you all! Good work everybody.



[Openstack] Invitation to work on Essex Deployments, Thursday 3/8

2012-03-05 Thread Rob_Hirschfeld
Stackers,

The Dell OpenStack team is coordinating a world-wide effort to work on Essex 
deployments this coming Thursday, 3/8.

We're organizing this via OpenStack meetups in Austin & Boston.   There is 
substantial opportunity for the community to work together on deployment issues 
around operating systems, platforms and components.  For this effort, my team 
is preparing to ensure participants can be immediately productive testing 
multi-node Essex installs on Ubuntu 12.04 + KVM + Chef Cookbooks + Crowbar.  
We're hoping to have a broad range of people focused on this so we can resolve 
deployment related issues together.

We'd like to expand the event to include more OpenStack deployers!

We've already got commitments from other groups to participate.  I won't speak 
for them here - they can reply and add their own comments.

Sorry for the late notice.  We started from Crowbar Essex and expanded as 
interest grew, so we've been coordinating the event on: 
https://github.com/dellcloudedge/crowbar/wiki/Install-Fest-Prep.  If there is 
interest from the OpenStack list, please add comments to this thread.  I'd be 
happy to move this to the OpenStack forums.

Thanks,

Rob

PS: Reminder that tomorrow (3/6) is the OpenStack Documentation Day!

__
Rob Hirschfeld
Principal Cloud Solution Architect
Dell | Cloud Edge, Data Center Solutions
blog robhirschfeld.com, twitter @zehicle



Re: [Openstack] Google Summer of Code-2012

2012-03-05 Thread Vishvananda Ishaya
I added myself.  I hope we get a few more volunteers.

Vish

On Mar 5, 2012, at 6:31 AM, Russell Bryant wrote:

 On 02/27/2012 08:32 AM, Russell Bryant wrote:
 On 02/09/2012 04:27 PM, Thierry Carrez wrote:
 Thierry Carrez wrote:
 You should start a wiki page to collect mentors and the subjects they
 propose... and based on how many we get, see if our application is
 warranted.
 
 Here it is:
 http://wiki.openstack.org/GSoC2012
 
 
 Organizations must submit their application for GSoC 2012 sometime
 between today and March 9th.  Nobody has signed up on the above wiki
 page to volunteer as a mentor.  If you are interested in being a mentor,
 please put your name on the wiki sometime this week so we can decide if
 it makes sense to participate.
 
 Only one person has signed up as interested in being a mentor, so it
 looks like it's not worth our time putting in an application.
 
 -- 
 Russell Bryant
 




Re: [Openstack] Google Summer of Code-2012

2012-03-05 Thread Anne Gentle
Okay, looks like the Mentoring organization application deadline is
this Friday, March 9th.

What needs to be done in time to make this important deadline and how
can I help?

Anne

On Mon, Mar 5, 2012 at 1:07 PM, Debo Dutta (dedutta) dedu...@cisco.com wrote:
 +1 

 I can add 1-2 more projects ... I think we shouldn't lose the opp.

 debo

 -Original Message-
 From: openstack-bounces+dedutta=cisco@lists.launchpad.net
 [mailto:openstack-bounces+dedutta=cisco@lists.launchpad.net] On
 Behalf Of Vishvananda Ishaya
 Sent: Monday, March 05, 2012 11:02 AM
 To: Russell Bryant
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Google Summer of Code-2012

 I added myself.  I hope we get a few more volunteers.

 Vish




Re: [Openstack] Google Summer of Code-2012

2012-03-05 Thread David Cramer

Mentoring strength is important and your ideas page is also very
important. Not having a good ideas page will drop you from
consideration as a mentoring org. You'll need a few ideas that have
some details and will get students excited.

David

On 03/05/2012 01:36 PM, Russell Bryant wrote:
 On 03/05/2012 02:17 PM, Anne Gentle wrote:
 Okay, looks like the Mentoring organization application deadline
 is this Friday, March 9th.
 
 What needs to be done in time to make this important deadline and
 how can I help?
 
 We need some minimum number of mentors to volunteer to make the
 overhead of participating in the program worthwhile.  That number
 is subjective ... I'd say it would be nice to have at least 4 or 5
 mentors volunteer.
 
 Aside from that, we need to fill out the application.  Anne, we
 could get together and work on it later this week.
 




[Openstack] Rationale behind using 'rpc:cast' instead of 'rpc:call' for the method run_instance

2012-03-05 Thread Nicolae Paladi
Hi,

this is my first posting in this mailing list, so if it's an RTFM question,
please
point me to the FM :-)

I would like to know what is the rationale behind using an rpc:cast from
scheduler/driver.py when e.g. launching an instance, while rpc.call in
driver.py is used only
for trivial methods, like 'compare_cpu'.

My guesses would be:
a. Launching an instance might take an arbitrarily long time and holding
the process alive
until the instance is launched is unfeasible (since it would consume too
much memory)
b. The call might take too long and it is not possible to specify a timeout
for the rpc.call method

I have noticed that in the trunk version, in the module scheduler/api.py,
there is a fairly recent change from 2012-02-29, where the method
live_migration uses an rpc.call.
I assume that, in this context, a migrating instance can have the same
timeout behavior as a newly launched instance, so the difference between
how launching an instance and migrating an instance are approached is
unclear to me.

The reason I am asking is that for my project (launching instances on
trusted TPM-enabled platforms)
 I would like to receive an acknowledgement from the compute node that the
instance has been launched.

Thank you,
/Nicolae.


Re: [Openstack] Rationale behind using 'rpc:cast' instead of 'rpc:call' for the method run_instance

2012-03-05 Thread Vishvananda Ishaya
The use of cast is simply so we can return to the user more quickly instead of 
blocking waiting for a response.  There are some cases where failure handling 
is a little more complicated and is simplified by using a call.  The live 
migration is an example of this. It is much less frequently used than run 
instance, so the extra time due to using a call is an acceptable tradeoff.

Vish
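(Sketch: the cast vs. call semantics described above, reduced to a toy in-process dispatcher. The names cast/call mirror the discussion, not the actual nova.rpc API, and the queue stands in for the message bus.)

```python
import queue
import threading

# A single worker thread stands in for the remote compute service.
request_q = queue.Queue()

def _worker():
    while True:
        method, args, reply_q = request_q.get()
        result = method(*args)
        if reply_q is not None:          # only a call expects a response
            reply_q.put(result)
        request_q.task_done()

threading.Thread(target=_worker, daemon=True).start()

def cast(method, *args):
    """Fire-and-forget: enqueue the work and return to the caller at once."""
    request_q.put((method, args, None))

def call(method, *args, timeout=5):
    """Block until the worker sends back a result (or the timeout expires)."""
    reply_q = queue.Queue()
    request_q.put((method, args, reply_q))
    return reply_q.get(timeout=timeout)

cast(lambda: "run_instance dispatched")      # returns immediately
assert call(lambda x: x * 2, 21) == 42       # blocks for the result
```

The failure-handling tradeoff is visible even here: after a cast the caller has no channel on which to learn whether the work succeeded, which is why an acknowledgement (as Nicolae wants) needs either a call or a separate notification.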





Re: [Openstack] eventlet weirdness

2012-03-05 Thread Yun Mao
Hi Phil,

My understanding is that (forget Nova for a second), in a perfect
eventlet world, a green thread is either doing CPU-intensive
computing or waiting in IO-related system calls. In the latter
case, the eventlet scheduler will suspend the green thread and switch
to another green thread that is ready to run.

Back to reality, as you mentioned, this is broken - some IO bound
activity won't cause an eventlet switch. To me the only explanation
is the same reason those MySQL calls are blocking - we
are using C-based modules that don't respect the monkey patching and
never yield. I suspect that all libvirt-based calls also belong to this
category.

Now if those blocking calls finish in a very short amount of time (as we
assume for DB calls), then I think inserting a sleep(0) after every
blocking call should be a quick fix to the problem. But if it's a long
blocking call like the snapshot case, we are probably screwed anyway
and need OS-thread-level parallelism or multiprocessing to make it
truly non-blocking. Thanks,

Yun
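(Sketch: the cooperative-scheduling model under discussion, with no eventlet dependency. Generators stand in for green threads, and a bare yield plays the role of an explicit yield point such as eventlet.sleep(0); all names are illustrative.)

```python
from collections import deque

def scheduler(tasks):
    """Round-robin over (name, generator) pairs, advancing each task to
    its next yield point - a crude stand-in for the eventlet hub."""
    order = []
    ready = deque(tasks)
    while ready:
        name, gen = ready.popleft()
        try:
            next(gen)                  # run until the task yields
            order.append(name)
            ready.append((name, gen))  # requeue for another turn
        except StopIteration:
            pass                       # task finished; drop it
    return order

def chatty(n):
    """A task with n units of work and a yield point after each one."""
    for _ in range(n):
        yield                          # cooperative yield, like sleep(0)

# Tasks that yield regularly interleave fairly; a task with no yield
# points would run to completion before anything else got the CPU.
order = scheduler([("a", chatty(2)), ("b", chatty(2))])
assert order == ["a", "b", "a", "b"]
```

A C extension that blocks inside a single "unit of work" is invisible to this loop, which is exactly the monkey-patching problem described above.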

On Mon, Mar 5, 2012 at 10:43 AM, Day, Phil philip@hp.com wrote:
 Hi Yun,

 The point of the sleep(0) is to explicitly yield from a long-running eventlet 
 so that other eventlets aren't blocked for a long period.   Depending on 
 how you look at that, it either means we're making an explicit judgement on 
 priority, or trying to provide a more equal sharing of run-time across 
 eventlets.

 It's not that things are CPU bound as such - more just that eventlets have 
 very few pre-emption points.    Even an IO bound activity like creating a 
 snapshot won't cause an eventlet switch.

 So in terms of priority we're trying to get to the state where:
  - Important periodic events (such as service status) run when expected (if 
 these take a long time we're stuffed anyway)
  - User-initiated actions don't get blocked by background system eventlets 
 (such as refreshing power-state)
 - Slow actions from one user don't block actions from other users (the first 
 user will expect their snapshot to take X seconds, the second one won't 
 expect their VM creation to take X + Y seconds).

 It almost feels like the right level of concurrency would be to have a 
 task/process running for each VM, so that there is concurrency across 
 un-related VMs, but serialisation for each VM.

 Phil

 -Original Message-
 From: Yun Mao [mailto:yun...@gmail.com]
 Sent: 02 March 2012 20:32
 To: Day, Phil
 Cc: Chris Behrens; Joshua Harlow; openstack
 Subject: Re: [Openstack] eventlet weirdness

 Hi Phil, I'm a little confused. To what extent does sleep(0) help?

 It only gives the greenlet scheduler a chance to switch to another green 
 thread. If we are having a CPU bound issue, sleep(0) won't give us access to 
 any more CPU cores. So the total time to finish should be the same no matter 
 what. It may improve the fairness among different green threads but shouldn't 
 help the throughput. I think the only apparent gain to me is situation such 
 that there is 1 green thread with long CPU time and many other green threads 
 with small CPU time.
 The total finish time will be the same with or without sleep(0), but with 
 sleep in the first threads, the others should be much more responsive.

 However, it's unclear to me which part of Nova is very CPU intensive.
 It seems that most work here is IO bound, including the snapshot. Do we have 
 other blocking calls besides mysql access? I feel like I'm missing something 
 but couldn't figure out what.

 Thanks,

 Yun


 On Fri, Mar 2, 2012 at 2:08 PM, Day, Phil philip@hp.com wrote:
 I didn't say it was pretty - given the choice I'd much rather have a 
 threading model that really did concurrency and pre-emption in all the right 
 places, and it would be really cool if something managed the threads that 
 were started, so that if a second conflicting request was received it did 
 some proper tidy-up or blocking rather than just leaving the race condition 
 to work itself out (then we wouldn't have to try and control it by checking 
 vm_state).

 However ...   In the current code base, where we only have user-space 
 eventlets with no pre-emption, and some activities that need to be 
 prioritised, forcing pre-emption with a sleep(0) seems a pretty small 
 bit of untidiness.   And it works now, without a major code refactor.

 Always open to other approaches ...

 Always open to other approaches ...

 Phil


 -Original Message-
 From: openstack-bounces+philip.day=hp@lists.launchpad.net
 [mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On
 Behalf Of Chris Behrens
 Sent: 02 March 2012 19:00
 To: Joshua Harlow
 Cc: openstack; Chris Behrens
 Subject: Re: [Openstack] eventlet weirdness

 It's not just you


 On Mar 2, 2012, at 10:35 AM, Joshua Harlow wrote:

 Does anyone else feel that the following seems really dirty, or is it 
 just me.

 adding a few sleep(0) calls in various places in the Nova codebase
 (as was recently added in the _sync_power_states() 

Re: [Openstack] eventlet weirdness

2012-03-05 Thread Adam Young

On 03/05/2012 05:08 PM, Yun Mao wrote:

Hi Phil,

My understanding is that, (forget Nova for a second) in a perfect
eventlet world, a green thread is either doing CPU intensive
computing, or wait in system calls that are IO related. In the latter
case, the eventlet scheduler will suspend the green thread and switch
to another green thread that is ready to run.

Back to reality, as you mentioned this is broken - some IO bound
activity won't cause an eventlet switch. To me the only possibility
that happens is the same reason those MySQL calls are blocking - we
are using C-based modules that don't respect monkey patch and never
yield. I'm suspecting that all libvirt based calls also belong to this
category.


Agree.  I expect that to be the case for any native library.  Monkey 
patching only changes the Python side of the call; anything in native 
code is too far along for it to be redirected.


Now if those blocking calls can finish in a very short of time (as we
assume for DB calls), then I think inserting a sleep(0) after every
blocking call should be a quick fix to the problem.
Nope.  The blocking call still blocks,  then it returns,  hits the 
sleep, and is scheduled.  The only option is to wrap it with a thread pool.
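(Sketch: the "wrap it with a thread pool" approach, using the stdlib ThreadPoolExecutor in place of eventlet.tpool, which plays this role in an eventlet-based service. The fake blocking call and timings are illustrative only.)

```python
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_native_call():
    """Stands in for a blocking native call (e.g. a libvirt snapshot)
    that no amount of monkey patching can make cooperative."""
    time.sleep(0.1)
    return "snapshot-done"

# Real OS threads do the blocking; the main thread (or green thread,
# via tpool) only waits on the results.
pool = ThreadPoolExecutor(max_workers=4)

start = time.monotonic()
futures = [pool.submit(blocking_native_call) for _ in range(4)]
results = [f.result() for f in futures]
elapsed = time.monotonic() - start
pool.shutdown()

assert results == ["snapshot-done"] * 4
assert elapsed < 0.35   # the four 0.1s blocking calls overlapped
```

Run serially, the same four calls would take ~0.4s; handing them to real pthreads is what makes the long blocking call stop starving everything else.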


From an OS perspective, there are no such things as greenthreads.  The 
same task_struct in the Linux kernel (representing a POSIX thread) that 
manages the body of the web application is used to process the IO.  The 
Linux thread goes into a sleep state until the IO comes back, and the 
kernel scheduler will schedule another OS process or task.  In order to 
get both the IO to complete and the greenthread scheduler to process 
another greenthread, you need to have two POSIX threads.


If the libvirt API (or other native API) has an async mode, what you 
can do is provide a synchronous, Python-based wrapper that does the 
following:

register_request_callback()
async_call()
sleep()

The only time sleep(), as called from Python code, is going to help you 
is if you have a long-running stretch of Python code and you sleep() in 
the middle of it.






But if it's a long
blocking call like the snapshot case, we are probably screwed anyway
and need OS thread level parallelism or multiprocessing to make it
truly non-blocking.. Thanks,


Yep.



Re: [Openstack] eventlet weirdness

2012-03-05 Thread Devin Carlen
 If the libvirt API (or other Native API) has an async mode, what you
 can do is provide a synchronos, python based wrapper that does the 
 following.
 
 register_request_callback()
 async_call()
 sleep()
 
 

This can be set up like a more traditional multi-threaded model as well.  You 
can eventlet.sleep while waiting for the callback handler to notify the 
greenthread.  This of course assumes your i/o and callback are running in a 
different pthread (eventlet.tpool is fine). So it looks more like:

condition = threading.Condition() # or something like it
register_request_callback(condition)
async_call()
condition.wait()


I found this post to be enormously helpful in understanding some of the nuances 
of dealing with green thread and process thread synchronization and 
communication:
 
http://blog.devork.be/2011/03/synchronising-eventlets-and-threads.html 

Devin
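(Sketch: a runnable stdlib version of the pattern above, with a plain threading.Event in place of the Condition for brevity; the callback fires from a worker pthread and wakes the waiting thread. async_call/on_complete are illustrative names, not a real libvirt API.)

```python
import threading

result = {}
done = threading.Event()

def async_call(on_complete):
    """Kick off 'native' work in a separate pthread and invoke the
    callback from that thread when the I/O finishes."""
    def io_worker():
        on_complete("callback-data")
    threading.Thread(target=io_worker).start()

def on_complete(data):
    result["value"] = data
    done.set()                 # signal the waiting thread

async_call(on_complete)
done.wait(timeout=5)           # in eventlet, this wait is the point that
                               # would yield to other green threads
assert result["value"] == "callback-data"
```

The subtlety the linked post covers is that a plain threading primitive blocks the whole eventlet hub, so in a real green-threaded service the wait side needs an eventlet-aware primitive (or tpool) rather than the stdlib one used here.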



Re: [Openstack] OpenStack Projects Development History Visualisation

2012-03-05 Thread Ewan Mellor
A DevStack one would be great too.  Thanks a lot, Syed, these are superb!

Ewan.

From: openstack-bounces+ewan.mellor=citrix@lists.launchpad.net 
[mailto:openstack-bounces+ewan.mellor=citrix@lists.launchpad.net] On Behalf 
Of Armaan
Sent: Monday, March 05, 2012 4:43 AM
To: Jesse Andrews
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] OpenStack Projects Development History Visualisation

Hi Jesse,

As you suggested, I am making one video combining repositories from nova, 
swift, glance, horizon, keystone, tempest, manuals, quantum. Please suggest if 
you wish me to add any other repository.

On Mon, Mar 5, 2012 at 4:49 PM, Jesse Andrews anotherje...@gmail.com wrote:
It would be neat to see one with all the projects together - in HD.

Since many contributors work on all the projects, we would see people
zooming all around the screen.

Is this possible?

On Mon, Mar 5, 2012 at 3:03 AM, Armaan dce3...@gmail.com wrote:


  Thanks everyone, i am delighted that you liked them.
  @Jake Dahn: I used gource  http://code.google.com/p/gource/

 Best Regards
 Syed Armani




 On Mon, Mar 5, 2012 at 8:39 AM, Jake Dahn j...@ansolabs.com wrote:

 These are awesome!

 How did you make them?

 On Mar 4, 2012, at 6:51 PM, Armaan wrote:


 Hello,

 I have created few videos visualising the development history of various
 OpenStack projects. Links for the videos are given below:

 (1)Nova:   http://www.youtube.com/watch?gl=IN&v=l5PrjqezhgI
 (2)Swift:   http://www.youtube.com/watch?v=vFO4ibrgWTs
 (3)Horizon:  http://www.youtube.com/watch?v=5G27xsCrm7A
 (4)Keystone: http://www.youtube.com/watch?v=aRDRNuFfYGo
 (5)Glance:  http://www.youtube.com/watch?v=Gjl6izUjJs0

 Best Regards
 Syed Armani





Re: [Openstack-poc] Meeting tomorrow

2012-03-05 Thread Thierry Carrez
Jonathan Bryce wrote:
 Anything we need to discuss? Read through the logs from last week's meeting 
 and couldn't tell if Monty still needed something from us to move forward 
 with the Satellite CI project.
 
 Also, welcome to Joe Heck, the new Keystone PTL and Brian Waldon the new 
 Glance PTL. Results from the elections are available online if you haven't 
 seen them: http://etherpad.openstack.org/xSWPqf6DWE

Should we now add DanW (as Quantum PTL) as well?

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack

___
Mailing list: https://launchpad.net/~openstack-poc
Post to : openstack-poc@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-poc
More help   : https://help.launchpad.net/ListHelp


[Openstack-poc] [Bug 947718] [NEW] [DEFAULT] group name in config file must be uppercase

2012-03-05 Thread Dean Troyer
Public bug reported:

As handled by openstack.common.cfg, the default group header ([DEFAULT])
is only recognized in uppercase.  [default] is not recognized as a valid
group header for global config options.

** Affects: openstack-common
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of OpenStack
Common Drivers, which is the registrant for openstack-common.
https://bugs.launchpad.net/bugs/947718

Title:
  [DEFAULT] group name in config file must be uppercase

Status in openstack-common:
  New

Bug description:
  As handled by openstack.common.cfg the default group header
  ([DEFAULT]) is only recognized in uppercase.  [default] is not
  recognized as a valid group header for global config options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-common/+bug/947718/+subscriptions
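(Sketch: the reported behaviour is easy to reproduce with the stdlib ConfigParser, on which INI parsing of this style is based - assuming openstack.common.cfg inherits its case handling. Section-name matching is case-sensitive, so [default] is just an ordinary section, not the defaults group.)

```python
import configparser

cfg_text = "[DEFAULT]\nverbose = true\n\n[default]\ndebug = true\n"

parser = configparser.ConfigParser()
parser.read_string(cfg_text)

# Only the uppercase [DEFAULT] section feeds the global defaults...
assert dict(parser.defaults()) == {"verbose": "true"}
# ...while lowercase [default] is parsed as a plain, unrelated section.
assert parser.sections() == ["default"]
assert parser.get("default", "debug") == "true"
```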
