[Openstack] StackTach and Stacky github repos have moved ...

2013-03-25 Thread Sandy Walsh
Hi, how are you? Me? Right as rain, thanks for asking.

Due to a recent reorg of the Rackspace github repo, StackTach and Stacky
have moved under the rackerlabs organization.

The new coordinates are:

https://github.com/rackerlabs/stacktach
https://github.com/rackerlabs/stacky

Please update your .git/config accordingly.

Cheers
-S

PS: A bunch of other projects have moved to this new location; it couldn't
hurt to have a peek and see if any of them affect you.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] StackTach and Stacky github repos have moved ...

2013-03-25 Thread Sandy Walsh


On 03/25/2013 10:47 AM, Jay Pipes wrote:
 Thanks for the heads up, Sandy. There was some talk in the last release
 cycle about possibly merging some or all of the StackTach functionality
 with Ceilometer. Is that still on the horizon or has that idea been
 scuttled?

Nope, we are absolutely working towards that. We just had some internal
tactical stuff we had to deal with. Getting everyone freed up now.

Cheers!
-S





 Best,
 -jay
 


[Openstack] [StackTach][Metering] Nova summary report for the PHB in your life ...

2013-02-14 Thread Sandy Walsh
Hey!

We just added a feature to StackTach for generating 24-hr summary reports.

The output can be seen here:
https://gist.github.com/SandyWalsh/4946226

It will identify requests and failures, with timing information, broken down
by major action (create, resize, rescue, delete, snapshot, etc.) and by image
type. (Note: "aux" covers all the minor actions, like list, show, etc.)

https://github.com/rackspace/stacktach
(.../reports/pretty.py)
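For a rough idea of what such a rollup involves, here is a minimal sketch of grouping notification events by (action, image type) with failure counts and timings. The event field names (action, image_type, duration, error) are illustrative guesses, not the real StackTach notification schema:

```python
import collections

def summarize(events):
    """Roll up request events into per-(action, image_type) counts and timings.

    Field names here are hypothetical stand-ins for the real payload keys."""
    buckets = collections.defaultdict(
        lambda: {"count": 0, "failures": 0, "times": []})
    for e in events:
        b = buckets[(e["action"], e["image_type"])]
        b["count"] += 1
        if e.get("error"):          # any error marker counts as a failure
            b["failures"] += 1
        b["times"].append(e["duration"])
    # Collapse raw timing lists into min/max for the report.
    return {key: {"count": b["count"], "failures": b["failures"],
                  "min": min(b["times"]), "max": max(b["times"])}
            for key, b in buckets.items()}
```

Feeding this 24 hours of events would yield one row per (action, image type) pair, close in spirit to the gist linked above.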

We're in the process of porting all this stuff over to ceilometer, but
that's a longer term effort.

We've also got a more detailed report with causes of errors, cell
breakdown, distros, etc. It's early and kind of thrown together, but
handy all the same.

Cheers!
-Sandy

PS PHB: http://en.wikipedia.org/wiki/Pointy-haired_Boss






Re: [Openstack] [openstack-dev] [metering][ceilometer] Unified Instrumentation, Metering, Monitoring ...

2012-11-08 Thread Sandy Walsh


From: Doug Hellmann [doug.hellm...@dreamhost.com]
Sent: Thursday, November 08, 2012 1:54 PM
To: Sandy Walsh
Cc: Eoghan Glynn; OpenStack Development Mailing List; 
openstack@lists.launchpad.net
Subject: Re: [Openstack] [openstack-dev] [metering][ceilometer] Unified 
Instrumentation, Metering, Monitoring ...



On Wed, Nov 7, 2012 at 10:21 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
[quoted message trimmed; the full text appears in the 2012-11-07 post below]

Yes

 It looks like that is collecting resource data, but not usage data. For 
 example, there's no disk I/O information, just disk space. Is that what you 
 mean by adding extra information to the dictionary?

 Doug



Re: [Openstack] [openstack-dev] [metering][ceilometer] Unified Instrumentation, Metering, Monitoring ...

2012-11-07 Thread Sandy Walsh
Hey!

(sorry for the top-posting, crappy web client)

There is a periodic task already in the compute manager that can handle this:
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L3021

There seem to be some recent (to me) changes in the manager wrt the
resource_tracker.py and stats.py files about how this information gets relayed.
Now it seems it only goes to the db, but previously it was sent to a fanout
queue that the schedulers could use.

Regardless, this is done at a high enough level that it doesn't really care
about the underlying virt layer, so long as the virt layer supports the
get_available_resource() method.

https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L152
https://github.com/openstack/nova/blob/master/nova/virt/xenapi/driver.py#L392
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2209

I'd add a hook in there to do what we want with this data. Write it to the db,
send it over the wire, whatever. If there is additional information required,
it should go in this dictionary (or we should define a format for extensions to
it).

The --periodic_interval value sets the fastest tick, and the individual
methods decide how many multiples of the base tick to use. So you can have
different data reported at different intervals.
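As a rough illustration of the base-tick-with-multiples idea, here is a toy model; it is not Nova's actual periodic task machinery, and the names are made up:

```python
class PeriodicTasks:
    """Toy model: one base tick, per-task multiples of it.

    The service loop calls run_periodic_tasks() once per --periodic_interval
    seconds; each task only fires when its multiple of the base tick is due."""

    def __init__(self, tasks):
        self.tasks = tasks      # task name -> fire every N base ticks
        self._tick = 0

    def run_periodic_tasks(self):
        self._tick += 1
        # Return the names of the tasks that fire on this tick.
        return [name for name, every in self.tasks.items()
                if self._tick % every == 0]
```

With, say, a capacity task on every tick and a usage task on every sixth tick, both would fire together only on multiples of six.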

Now, the question of polling vs. pushing shouldn't really matter if the
sampling rate is predetermined. We can push when the sample is taken, or an
external process can read from some other store ... but the sampling should
only be done in one place, once.

Hope I answered your question? If not, just repeat it in another way and I'll 
try again :)

-S




From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Eoghan Glynn [egl...@redhat.com]
Sent: Wednesday, November 07, 2012 4:32 PM
To: OpenStack Development Mailing List
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [openstack-dev] [metering][ceilometer] Unified 
Instrumentation, Metering, Monitoring ...

 Here's a first pass at a proposal for unifying StackTach/Ceilometer
 and other instrumentation/metering/monitoring efforts.

 It's v1, so bend, spindle, mutilate as needed ... but send feedback!

 http://wiki.openstack.org/UnifiedInstrumentationMetering

Thanks for putting this together Sandy,

We were debating on IRC (#heat) earlier the merits of moving the
ceilometer emission logic into the services, e.g. directly into the
nova-compute node. At first sight, this seemed to be what you were
getting at with the suggestion:

 Remove the Compute service that Ceilometer uses and integrate the
  existing fanout compute notifications into the data collected by the
  workers. There's no need for yet-another-worker.

While this could be feasible for measurements driven directly by
notifications, I'm struggling with the idea of moving say the libvirt
polling out of the ceilometer compute agent, as this seems to leak too
many monitoring-related concerns directly into nova (cadence of polling,
semantics of libvirt stats reported etc.).

So I just wanted to clarify whether the type of low level unification
you're proposing includes both push & pull (i.e. notification & polling),
or whether you mainly had just the former in mind when it comes to ceilometer.

Cheers,
Eoghan



Re: [Openstack] A point in my mind which may be already implemented

2012-11-04 Thread Sandy Walsh
Thanks for the mention ... here is some background and installation info

http://www.sandywalsh.com/2012/10/debugging-openstack-with-stacktach-and.html

-S


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Nathanael Burton [nathanael.i.bur...@gmail.com]
Sent: Sunday, November 04, 2012 10:47 AM
To: Nah, Zhongyue
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] A point in my mind which may be already implemented


On Nov 4, 2012 9:36 AM, Nah, Zhongyue zhongyue@intel.com wrote:

 I use the log files beneath /var/log/project name to do what you've 
 described manually.

 If you want a web interface, you should implement a custom notifier class
 (for Nova) to gather the logs into a specific channel, and implement your own
 web service to display the contents from the channel.

 -zhongyue

 Sent from my iPhone

 On Nov 4, 2012, at 10:18 PM, Hao Wang hao.1.w...@gmail.com wrote:

  Hi stackers,
 
  Today a thought came to mind, described below. I just want to know if it's
  already implemented in the F (Folsom) release.
 
  While I'm using OpenStack, I would like a way to know what is going on in
  the background. For instance, if I click the Launch button, I want to know
  what message is being sent to the message queue, which component(s) the
  message is going to and, most importantly, where the message got stuck.
  This would give us a direct view for troubleshooting and for taking
  specific follow-up steps. Thanks for your time.
 
  Regards,
  Howard

I think the notification system is what you really want.

http://wiki.openstack.org/SystemUsageData

Sandy Walsh made some really useful tools to easily get at that data and make 
sense of everything.
https://github.com/rackspace/stacktach
https://github.com/rackspace/stacky

Nate


Re: [Openstack] [openstack-dev] [metering][ceilometer] Unified Instrumentation, Metering, Monitoring ...

2012-11-02 Thread Sandy Walsh
Agreed!

Code good.

-S


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Jeffrey Budzinski [jeffr...@yahoo-inc.com]
Sent: Friday, November 02, 2012 10:11 PM
To: Angus Salkeld
Cc: openstack-...@lists.openstack.org; openstack@lists.launchpad.net
Subject: Re: [Openstack] [openstack-dev] [metering][ceilometer] Unified 
Instrumentation, Metering, Monitoring ...

Agreed on the point about not getting bogged down. I think Sandy is on the same 
page too from our last exchange (but he can correct me if not).

Tim here is working on some code for us to review. I'll ask him to put it in a 
repo we can all access.



Re: [Openstack] Are there any documents explaining the nova source code in detail?

2012-11-01 Thread Sandy Walsh
I wrote a couple of articles on this:

http://www.sandywalsh.com/2012/04/openstack-nova-internals-pt1-overview.html
http://www.sandywalsh.com/2012/09/openstack-nova-internals-pt2-services.html

and, to a lesser extent:
http://www.sandywalsh.com/2012/10/debugging-openstack-with-stacktach-and.html

More to come, thinking about tackling the HTTP layer next (unless people have 
other suggestions?)

Cheers,
Sandy



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
xiaohua liu [liuxiaohu...@yahoo.com]

Sent: Thursday, November 01, 2012 4:07 AM

To: openstack@lists.launchpad.net

Subject: [Openstack] Are there any documents explaining the nova source code in detail?
I want to read the nova code but don't know where to begin.

Sorry for the disturbance.

Thanks,
lxh.


[Openstack] [metering][ceilometer] Unified Instrumentation, Metering, Monitoring ...

2012-11-01 Thread Sandy Walsh
Hey!

Here's a first pass at a proposal for unifying StackTach/Ceilometer and other 
instrumentation/metering/monitoring efforts. 

It's v1, so bend, spindle, mutilate as needed ... but send feedback!

http://wiki.openstack.org/UnifiedInstrumentationMetering

Thanks,
Sandy



Re: [Openstack] [metering][ceilometer] Unified Instrumentation, Metering, Monitoring ...

2012-11-01 Thread Sandy Walsh
 Thanks for putting that together Sandy. Very nice!

Thanks!

From my perspective, there are two major things that are undesirable for us:

1) Putting this data through the queue is not something that feels right. We'd
like to have the option to use other types of data sinks from the generation
points. Some folks might want to use the queues, but we do not wish to burden
the queueing system with this volume of data. In some cases, we will just drop
these into log files for later collection and aggregation.

Makes sense. You mentioned JSON-structured log files at the summit, which
would help. But additionally, the existing notifier driver can easily be used
to write to
   - a file
   - the logs (I think one exists)
   - a different rabbit queue (something we're considering)

so it need never hit the production rabbit.
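To illustrate the driver idea, here is a minimal sketch of two alternative notifier sinks, one writing to the standard log and one appending JSON lines to a local file. The class and method names are hypothetical, not the actual nova/oslo notifier API:

```python
import json
import logging

class LogNotifier:
    """Send notifications to a logger instead of the message queue."""

    def __init__(self, logger=None):
        self.log = logger or logging.getLogger("notifications")

    def notify(self, event):
        # Serialize the whole event so log parsers get structured data.
        self.log.info(json.dumps(event))

class FileNotifier:
    """Append JSON-structured notifications to a local file, one per line."""

    def __init__(self, path):
        self.path = path

    def notify(self, event):
        with open(self.path, "a") as f:
            f.write(json.dumps(event) + "\n")
```

Either sink keeps notification traffic entirely off the production rabbit; a separate worker can collect and aggregate the files or logs later.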

We'd need a different worker mechanism for parsing the log files (keeping track 
of what's been done, etc). It might be hard to horizontally scale that though. 
Also, I'm not sure if the latency of writing the log over the network or
aggregating the local log files would be any different from writing to another
rabbit (which is largely in memory).

I'll update the doc to reflect this.

(I assume we're talking monitoring here, I would never put instrumentation 
stuff in the queues)

 2) We would like a common mechanism for instrumenting but we would like to be 
 able to route data to any number of places: local file, datagram endpoint, 
 etc.

Yup ... Tach has a driver based notifier mechanism. Easy to do.

Now, getting the lower level measurement library consistent is definitely the 
right approach. I still think we need to support decorators in addition to 
monkey patching. And, we should make the gauges or whatever we call them 
usable with different sinks.

Hmm, really not a fan of the decorator approach. Makes the code really ugly. 
They'd be everywhere. 

Not sure if I can get over it. :D
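For readers following along, the two instrumentation styles being debated can be sketched in miniature: an explicit timing decorator versus monkey patching an existing function. This is illustrative only, not Tach's actual implementation; all names are made up:

```python
import functools
import time
import types

TIMINGS = {}

def timed(fn):
    """Record the wall-clock duration of each call under the function's name."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            TIMINGS[fn.__name__] = time.perf_counter() - start
    return wrapper

# Style 1: explicit decorator. Visible in the code, but it touches
# every instrumented function (the "they'd be everywhere" objection).
@timed
def create_instance():
    return "created"

# Style 2: monkey patching. The instrumented code stays untouched;
# the wiring lives entirely in the instrumentation tool.
def patch(obj, name):
    setattr(obj, name, timed(getattr(obj, name)))
```

Both styles funnel into the same `timed` wrapper, so whichever wins, the sink side (where the timings go) can stay common.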

-S

On Nov 1, 2012, at 1:17 PM, Sandy Walsh wrote:
[quoted message trimmed; the full text appears in the earlier 2012-11-01 post]




[Openstack] Debugging OpenStack - Video on StackTach v2 and Stacky

2012-10-31 Thread Sandy Walsh
Hey!

As promised at the summit the latest changes to StackTach are up on github. 
This is a major change from the original StackTach I introduced earlier this 
year (and left to wither). Also, I'm including Stacky, which is a new command 
line tool for StackTach. 

Here's a video that explains what it's all about and how to install and use it.
http://youtu.be/pZgwDHZ3wm0

The repos:
https://github.com/rackspace/stacktach
https://github.com/rackspace/stacky

What we need:
+ packaging
+ docs

Where we're going:
+ worker and db improvements ... everything breaks at scale and stacktach is
no exception.
+ Key Performance Indicators (KPIs). Useful for SLAs and other shirt stuff.
There is some support in there now (via stacky), but it's busted and
inefficient. I'm going to get this working again immediately.
+ Integration with ceilometer. We're duplicating effort here, and are planning
to fix that.

Please: install, experiment, report bugs, submit pull requests.

Look forward to your feedback!
-S




Re: [Openstack] about nova-schedule queues

2012-10-30 Thread Sandy Walsh
As with most services there are two queues, but the scheduler has one extra:

1. The general round-robin queue. Any worker of that class can process the
event, but only one worker will handle it.
2. The specific worker queue. Used when I want an event to go to a particular
worker, for example: "I want Scheduler #2 to deal with this".
3. The fan-out queue. For sending atomic, non-critical information to all
workers. The compute nodes send periodic capacity updates to all schedulers on
this channel.
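A toy model of the three routing behaviours described above (illustrative only; the real implementation uses AMQP exchanges via the nova RPC layer, not this in-process sketch):

```python
import collections
import itertools

class SchedulerExchange:
    """Route a message to scheduler workers by queue type."""

    def __init__(self, worker_names):
        # One inbox per worker, plus a round-robin cursor for the shared queue.
        self.inboxes = {name: collections.deque() for name in worker_names}
        self._rr = itertools.cycle(worker_names)

    def cast(self, topic, msg):
        if topic == "scheduler":
            # 1. Shared round-robin queue: exactly one worker gets the message.
            self.inboxes[next(self._rr)].append(msg)
        elif topic.startswith("scheduler."):
            # 2. Specific worker queue, e.g. "scheduler.node70".
            self.inboxes[topic.split(".", 1)[1]].append(msg)
        elif topic.endswith("_fanout"):
            # 3. Fan-out queue: every worker gets a copy.
            for inbox in self.inboxes.values():
                inbox.append(msg)
```

This mirrors the three queue names the original question listed: `scheduler`, `scheduler.node70`, and `scheduler_fanout_<uuid>`.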

Hope it helps!
-S


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
韦远科 [weiyuanke...@gmail.com]
Sent: Tuesday, October 30, 2012 7:55 AM
To: openstack mail list
Subject: [Openstack] about nova-schedule queues

hi all,

I read through the source code for nova-scheduler and found there actually
exist three msg queues:
scheduler
scheduler.node70
scheduler_fanout_bd738fedcdf344d9bb3cb580657f54e0

What are the functions of each queue, and how are they connected?


thanks,






Re: [Openstack] Instrumentation Monitoring Next Step - quick meet up

2012-10-29 Thread Sandy Walsh
Ugh, I just realized I have a conflict. Can we push it an hour later? (sorry!)

-S


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Annie Cheng [ann...@yahoo-inc.com]
Sent: Monday, October 29, 2012 11:18 AM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Instrumentation Monitoring Next Step - quick meet up

Apologies .. correction:

Time: Monday (10/29/2012) 2200 UTC – 2300 UTC (3pm PDT if you are in California)
Location: IRC #openstack-meeting


From: Annie Cheng ann...@yahoo-inc.com
Date: Mon, 29 Oct 2012 07:14:08 -0700
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Instrumentation Monitoring Next Step - quick meet up

Reminder:
Time: Monday (10/29/2012) 2200 UTC – 2300 UTC (2pm PDT if you are in California)
Location: IRC #openstack-meeting

Thanks!

Annie

From: Annie Cheng ann...@yahoo-inc.com
Date: Thu, 25 Oct 2012 14:17:09 -0700
To: openstack@lists.launchpad.net
Subject: [Openstack] Instrumentation Monitoring Next Step - quick meet up

Hi all,

A couple of us chatted in the summit design sessions and after the summit on
#openstack IRC regarding the topic of Monitoring. We think it's best to hold a
quick meeting to get everyone on the same page, split up the work, and get at
least a prototype going in Grizzly.

Time: Monday (10/29/2012) 2200 UTC – 2300 UTC
Location: IRC #openstack-meeting
I checked http://wiki.openstack.org/Meetings; this time slot seems to be empty.

Top level agenda would be:

  1.  Get everyone on the same page on the high level direction
  2.  Discuss different design/implementation possibilities
  3.  Split up the work

Before the meeting, if you want to read up, here are some links I know of.
Please jump in with others I missed:
Blueprint:
https://blueprints.launchpad.net/nova/+spec/nova-instrumentation-metrics-monitoring
Etherpad:
https://etherpad.openstack.org/grizzly-common-instrumentation
Different code samples:
https://github.com/asalkeld/statgen

Looking forward, some of those conversations will probably fold into the
regular Metering meeting. I'd just like to do a one-off for now so we can go
deeper on monitoring-specific topics.

Thanks!

Annie


Re: [Openstack] Instrumentation Monitoring Next Step - quick meet up

2012-10-29 Thread Sandy Walsh
np ... I realize it's tricky to change this stuff.

I'll see if I can jump on remote.

-S


From: Annie Cheng [ann...@yahoo-inc.com]
Sent: Monday, October 29, 2012 1:44 PM
To: 'doug.hellm...@dreamhost.com'; Sandy Walsh
Cc: 'openstack@lists.launchpad.net'
Subject: Re: [Openstack] Instrumentation Monitoring Next Step - quick meet up


We also have to stick with the schedule on our end.

Sandy, all discussion will be tracked in IRC log. We can also have follow up 
discussion via mailing list or more meetings.

Annie


From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
Sent: Monday, October 29, 2012 09:37 AM
To: Sandy Walsh sandy.wa...@rackspace.com
Cc: Annie Cheng; openstack@lists.launchpad.net
Subject: Re: [Openstack] Instrumentation Monitoring Next Step - quick meet up

I can't meet later than that tonight.

Doug

On Mon, Oct 29, 2012 at 10:21 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:
[earlier messages in this thread trimmed; see the posts above]




Re: [Openstack] Ceilometer, StackTach, Tach / Scrutinize, CloudWatch integration ... Summit followup

2012-10-25 Thread Sandy Walsh
grizzly-common-instrumentation seems to be the best choice ... hopefully the
other groups will use this etherpad too.

We need a proper blueprint to nail down the approach. IRC is great, but it
doesn't retain history for other groups. I think we need a plan for
translating the etherpad into something concise and nailed down.

statgen should really just be a new notifier in Tach (or Scrutinize), vs
copy-pasting the code into yet another repo. Hopefully that's the plan? Tach
should remain a generic tool, not pegged to OpenStack.

-S

From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Angus Salkeld [asalk...@redhat.com]
Sent: Thursday, October 25, 2012 1:00 AM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Ceilometer, StackTach, Tach / Scrutinize, CloudWatch 
integration ... Summit followup

On 24/10/12 23:35, Sandy Walsh wrote:
Hey y'all,

Great to chat during the summit last week, but it's been a crazy few days of 
catch-up since then.

The main takeaway for me was the urgent need to get some common libraries 
under these efforts.

Yip.


So, to that end ...

1. To those that asked, I'm going to get my slides / video presentation made 
available via the list. Stay tuned.

2. I'm having a hard time following all the links to various efforts going on 
(seems every time I turn around there's a new metric/instrumentation effort, 
which is good I guess :)

Here is some fun I have been having with a bit of tach+ceilometer code.
https://github.com/asalkeld/statgen


Is there a single location I can place my feedback? If not, should we create 
one? I've got lots of suggestions/ideas and would hate to have to duplicate 
the threads or leave other groups out.

I'll add some links here that I am aware of:
https://bugs.launchpad.net/ceilometer/+bug/1071061
https://etherpad.openstack.org/grizzly-common-instrumentation
https://etherpad.openstack.org/grizzly-ceilometer-actions
https://blueprints.launchpad.net/nova/+spec/nova-instrumentation-metrics-monitoring



3. I'm wrapping up the packaging / cleanup of StackTach v2 with Stacky and 
hope to make a more formal announcement on this by the end of the week. Lots 
of great changes to make it easier to use/deploy based on the Summit feedback!

Unifying the stacktach worker (consumer of events) into ceilometer should be a 
first step to integration (or agree upon a common YAGI-based consumer?)

4. If you're looking at Tach, you should also consider looking at Scrutinize 
(my replacement effort) https://github.com/SandyWalsh/scrutinize (needs 
packaging/docs and some notifier tweaks on the cprofiler to be called done 
for now)

Looks great! I like the monkey patching for performance as you have
done here, but we also need a nice clean way of manually inserting
instrumentation (that is what I have been experimenting with in statgen).

Can we chat in #openstack-metering so we are a bit more aware what we are all 
up to?


-Angus


Looking forward to moving ahead on this ...

Cheers,
-S







[Openstack] Ceilometer, StackTach, Tach / Scrutinize, CloudWatch integration ... Summit followup

2012-10-24 Thread Sandy Walsh
Hey y'all,

Great to chat during the summit last week, but it's been a crazy few days of 
catch-up since then. 

The main takeaway for me was the urgent need to get some common libraries under 
these efforts. 

So, to that end ...

1. To those that asked, I'm going to get my slides / video presentation made 
available via the list. Stay tuned.

2. I'm having a hard time following all the links to various efforts going on 
(seems every time I turn around there's a new metric/instrumentation effort, 
which is good I guess :) 

Is there a single location I can place my feedback? If not, should we create 
one? I've got lots of suggestions/ideas and would hate to have to duplicate the 
threads or leave other groups out. 

3. I'm wrapping up the packaging / cleanup of StackTach v2 with Stacky and hope 
to make a more formal announcement on this by the end of the week. Lots of 
great changes to make it easier to use/deploy based on the Summit feedback!

Unifying the StackTach worker (the consumer of events) into Ceilometer should
be a first step toward integration (or should we agree upon a common
YAGI-based consumer?)

4. If you're looking at Tach, you should also consider looking at Scrutinize 
(my replacement effort) https://github.com/SandyWalsh/scrutinize (needs 
packaging/docs and some notifier tweaks on the cprofiler to be called done for 
now)

Looking forward to moving ahead on this ...

Cheers,
-S







Re: [Openstack] (no subject)

2012-10-16 Thread Sandy Walsh
+1


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Joe Savak [joe.sa...@rackspace.com]
Sent: Tuesday, October 16, 2012 5:50 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] (no subject)














[Openstack] StackTach unconference talk

2012-10-15 Thread Sandy Walsh
... is Wednesday 11:50 in 'Maggie' room.

If you're interested in debugging tools and performance monitoring ...  see you 
there!

-Sandy



Re: [Openstack] Versioning for notification messages

2012-10-10 Thread Sandy Walsh
Hey Phil!

The notifications have been quite stable for some time now. I'm adding some 
basic protocol tweaks (supporting the compute.instance.update messages and 
fixing some confusion in the instance_uuid naming), but also making larger 
changes:

1. StackTach can now be used across cells in a single large deployment (very 
cool to watch)
2. Matt Sherbourne added some great performance/stability improvements to the 
worker
3. A new cmdline tool stacky that is more approachable to operators/admins 
(allows sed/grep/less/tail support)
4. New Timing tables for pre-computing important operations like 
compute.run_instance
5. A new KPI option that computes operation time all the way from API request 
to Service fulfillment (slick)
6. Integration with error notifications (api 500's and service errors) so we 
can easily spot failed operations. (I recently added a middleware hook to 
notify on api faults)
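
Point 4 above (timing tables) boils down to pairing .start/.end notifications;
a rough sketch, using an illustrative event shape rather than the real
notification schema:

```python
def pair_timings(events):
    """Pair *.start / *.end events per instance and return durations."""
    starts = {}
    timings = []
    for event in events:
        # Key on (instance, operation) so concurrent instances don't collide.
        key = (event["instance"], event["event_type"].rsplit(".", 1)[0])
        if event["event_type"].endswith(".start"):
            starts[key] = event["when"]
        elif event["event_type"].endswith(".end") and key in starts:
            timings.append((key[1], event["when"] - starts.pop(key)))
    return timings


events = [
    {"instance": "uuid-1", "event_type": "compute.run_instance.start", "when": 10.0},
    {"instance": "uuid-1", "event_type": "compute.run_instance.end", "when": 42.5},
]
print(pair_timings(events))  # [('compute.run_instance', 32.5)]
```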

It's turning into a pretty powerful debugging tool, but I can see it evolving 
into something larger. 

I'll be doing a lightning talk on all the changes at the summit.

-S


From: Day, Phil [philip@hp.com]
Sent: Wednesday, October 10, 2012 4:47 AM
To: Sandy Walsh
Subject: RE: Versioning for notification messages

Hi Sandy,

Are you making changes to StackTach to keep up with the notification system, or 
changes to the notification system to make StackTach even better ?

Cheers,
Phil

-Original Message-
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
Sent: 09 October 2012 18:37
To: Day, Phil; openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
(openstack@lists.launchpad.net)
Subject: RE: Versioning for notification messages

While I think the idea has merit, it's going to be a tough thing to version. 
(I'm neck deep in the notifications right now making a major revision to 
StackTach)

We'd have to version the context, the instance state dictionary, the cpu info 
and the related payloads for each type of message since each component comes 
from different, already-existing, systems.  (as opposed to the notification 
being responsible for the entire payload itself)

Lots of ramifications.

-S
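
For what it's worth, the versioned-envelope idea Phil raises below could look
something like this (field names are hypothetical, not an agreed format):

```python
def make_message(event_type, payload, version="1.1"):
    """Wrap a notification payload in a versioned envelope (illustrative)."""
    return {"message_version": version,
            "event_type": event_type,
            "payload": payload}


def can_consume(message, supported_major=1):
    """A consumer tolerates minor-version bumps but rejects major ones."""
    major = int(message["message_version"].split(".")[0])
    return major == supported_major


msg = make_message("compute.instance.update", {"state": "active"})
assert can_consume(msg)
assert not can_consume(make_message("x", {}, version="2.0"))
```

The hard part stays exactly as described above: the payloads come from
different existing systems, so a single envelope version doesn't capture
changes inside the context, instance state, or cpu info sub-payloads.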



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Day, Phil [philip@hp.com]

Sent: Tuesday, October 09, 2012 2:07 PM
To: openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
(openstack@lists.launchpad.net)
Subject: [Openstack] Versioning for notification messages

Hi Folks,

What do people think about adding a version number to the notification systems, 
so that consumers of notification messages are protected to some extent from 
changes in the message contents ?

For example, would it be enough to add a version number to the messages - or 
should we have the version number as part of the topic itself (so that the 
notification system can provide both a 1.0 and 1.1 feed), etc ?

Phil







Re: [Openstack] Bottleneck of message queue

2012-10-10 Thread Sandy Walsh
Hey Howard,

Queues are generally in memory, but you may turn on persistent (disk) queues in 
your environment. So that's your limitation. Having rabbitmq on a different 
server is a good idea. 

Also, queues are only used for control, not user data, so they shouldn't be
that big of a burden. Having a queue-based architecture adds some complexity
for synchronization, but the benefit of giving us burst-handling capabilities
far outweighs that (imho).

If your queues are filling up, you may:
1. need beefier machines processing the offending queues (or rabbit server)
2. need to add more worker nodes (more network, more scheduler, though more 
compute isn't appropriate)
3. think about clustering rabbit

Notifications are perhaps the chattiest queues in the system, so make sure you 
have suitable workers there (if you have notifications turned on)

This might help you understand the flow through the queues a little more?
http://www.sandywalsh.com/2012/04/openstack-nova-internals-pt1-overview.html
http://www.sandywalsh.com/2012/09/openstack-nova-internals-pt2-services.html

Cheers,
Sandy


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Hao Wang [hao.1.w...@gmail.com]
Sent: Tuesday, October 09, 2012 11:49 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] Bottleneck of message queue


Hi guys,

I am trying to figure out how the internal interactions flow between the
different modules of OpenStack. Frankly speaking, while reading the source
code I lost myself and had to jump out again to look at OpenStack from outside
the box. I don't know if anybody has a similar feeling. Is there any picture I
can follow to see the message flows?

OpenStack is based on message queues to make expansion easy. Here come my
questions. Does anybody know the capacity of the message queue? Could that
capacity become a bottleneck of the platform?

Thanks,
Howard






Re: [Openstack] Versioning for notification messages

2012-10-09 Thread Sandy Walsh
While I think the idea has merit, it's going to be a tough thing to version. 
(I'm neck deep in the notifications right now making a major revision to 
StackTach)

We'd have to version the context, the instance state dictionary, the cpu info 
and the related payloads for each type of message since each component comes 
from different, already-existing, systems.  (as opposed to the notification 
being responsible for the entire payload itself)

Lots of ramifications.

-S



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Day, Phil [philip@hp.com]

Sent: Tuesday, October 09, 2012 2:07 PM
To: openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
(openstack@lists.launchpad.net)
Subject: [Openstack] Versioning for notification messages

Hi Folks,
 
What do people think about adding a version number to the notification systems, 
so that consumers of notification messages are protected to some extent from 
changes in the message contents ?
 
For example, would it be enough to add a version number to the messages – or 
should we have the version number as part of the topic itself (so that the 
notification system can provide both a 1.0 and 1.1 feed), etc ?
 
Phil







[Openstack] Blog post: OpenStack Nova Internals - Pt2 - Services

2012-09-27 Thread Sandy Walsh
Went a little crazy on this one ... please let me know about the many blatant 
mistakes so I can edit quickly :)

http://www.sandywalsh.com/2012/09/openstack-nova-internals-pt2-services.html

Thanks!
-S


[Openstack] Blueprint Proposal: Inflight Monitoring Service ...

2012-08-28 Thread Sandy Walsh
Not sure if an email like this gets sent automagically by LP anymore, so here 
goes ...

I'd love to get some feedback on it. 

Thanks!
-S

The Blueprint
https://blueprints.launchpad.net/nova/+spec/monitoring-service

The Spec
http://wiki.openstack.org/PerformanceMonitoring

The Branch:
https://review.openstack.org/#/c/11179/




Re: [Openstack] OpenStack Summit Tracks & Topics

2012-08-14 Thread Sandy Walsh
Perhaps off topic, but ...

One of the things I've noticed at the last couple of summits is the number of
new attendees that could really use an OpenStack 101 session. Many of them are
on fact-finding missions and their understanding of the architecture is at
10,000 feet or higher.

Usually when conferences get to this size there's a day beforehand for
workshops/tutorials/getting-started material. I'm sure it's too late for this
coming summit, but perhaps something to consider for later ones?

Hands-on, code-level, devstack, configuration, debug. I'd be happy to help out 
with this.

Thoughts?
-S


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Thierry Carrez [thie...@openstack.org]
Sent: Tuesday, August 14, 2012 12:19 PM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] OpenStack Summit Tracks & Topics

Lauren Sell wrote:
 Speaking submissions for the conference-style content are
 live http://www.openstack.org/summit/san-diego-2012/call-for-speakers/ 
 (basically
 everything except the Design Summit working sessions which will open for
 submissions in the next few weeks), and the deadline is August 30.

A bit of explanation on the contents for the Design Summit track:

The Design Summit track is for developers and contributors to the next
release cycle of OpenStack (codenamed Grizzly). Each session is an
open discussion on a given technical theme or specific feature
to-be-developed in one of the OpenStack core projects.

Compared to previous editions, we'll run parallel to the rest of the
OpenStack Summit and just be one of the tracks for the general event.
We'll run over 4 days, but there will be no session scheduled during the
general session of the OpenStack Summit (first hours in the morning on
Tuesday/Wednesday). Finally, all sessions will be 40-min long, to align
with the rest of the event.

Within the Design Summit we also used to have classic presentations
around Devops, ecosystem and related projects: those will now have their
own tracks in the OpenStack Summit (Operations Summit, Related OSS
Projects, Ecosystem, Security...), so they are no longer a subpart
of the Design Summit track. The Design Summit will be entirely
focused on the Grizzly cycle of official OpenStack projects, and
entirely made of open discussions. We'll also have some breakout rooms
available for extra workgroups and incubated projects.

The sessions within the design summit are now organized around Topics.
The topics for the Design Summit are the core projects,
openstack-common, Documentation and a common Process track to cover
the release cycle and infrastructure. Each topic content is coordinated
by the corresponding team lead(s).

Since most developers are focused on Folsom right now, we traditionally
open our call for sessions a bit later (should be opened first week of
September). Contributors will be invited to suggest a topic for design
summit sessions. After the Folsom release, each topic lead will review
the suggestions, merge some of them and come up with an agenda for
his/her topic. You can already see the proposed topic layout on the
Design Summit topics tab in the document linked in Lauren's email.

Comments/Feedback welcome !

--
Thierry Carrez (ttx)
Release Manager, OpenStack



[Openstack] Removing support for KVM Hypervisor ...

2012-08-10 Thread Sandy Walsh

Sorry George, couldn't resist. :)


Re: [Openstack] Removing support for KVM Hypervisor ...

2012-08-10 Thread Sandy Walsh
haha ... oh no.

It's a joke. Long story.

Not going to happen :)

From: chaohua wang [chwang...@gmail.com]
Sent: Friday, August 10, 2012 5:31 PM
To: Sandy Walsh
Subject: Re: [Openstack] Removing support for KVM Hypervisor ...

Hi, I posted my question to openstack@lists.launchpad.net but I can't see it.
Could you tell me why?

thank you,

Chaohua

On Fri, Aug 10, 2012 at 2:23 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:

Sorry George, couldn't resist. :)



Re: [Openstack] Cannot pass hint to Nova Scheduler

2012-08-07 Thread Sandy Walsh
Hi Heng,

I think Joseph has the best suggestion for tracking the HostState data 
currently. Either that or use nova-manage to open a python shell and query the 
database yourself. 

Another possibility (more work) would be to create an API extension that could
query the scheduler for this information. A plug-in is recommended since it
wouldn't have to be deployed to production.

But, in the short term, I would just add some debug statements in there to see
what it actually has access to. Sorry I can't be more specific, but I haven't
looked at the scheduler code in a long time and I'm not really sure how it has
changed. It's nice to see that you're looking into the JSON filter, though. If
you're still having problems let me know and I'll dig deeper.

Cheers,
Sandy


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Joseph Suh [j...@isi.edu]
Sent: Tuesday, August 07, 2012 8:01 AM
To: Heng Xu
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Cannot pass hint to Nova Scheduler

Heng,

You can print the values in the HostState class. If you want to monitor
changes to the values, you can print them (either directly or using
LOG.debug()) in the code you want to monitor, for example in
consume_from_instance().

Thanks,

Joseph


(w) 703-248-6160
(c) 571-340-2434
(f) 703-812-3712
http://www.east.isi.edu/~jsuh

Information Sciences Institute
University of Southern California
3811 N. Fairfax Drive Suite 200
Arlington, VA, 22203, USA


- Original Message -
From: Jay Pipes jaypi...@gmail.com
To: Heng Xu shouhengzhang...@mail.utoronto.ca
Cc: openstack@lists.launchpad.net
Sent: Monday, August 6, 2012 12:11:58 PM
Subject: Re: [Openstack] Cannot pass hint to Nova Scheduler

On 08/04/2012 07:48 AM, Heng Xu wrote:
 But I tried a few things in HostState class, I ran into error, because I 
 could not monitor the stats in hoststates class as opposed to a database, is 
 there a way to check the stats in HostState class as exists in memory?

cc'ing Sandy Walsh, who is vastly more familiar with the scheduler than
I am :) Sandy, see Heng's question above... seems like a great question
to me -- and also a possible mini-project for someone to work on that
would add a scheduler-diagnostics extension if such functionality isn't
readily available.

Best,
-jay

 Heng
 
 From: Jay Pipes [jaypi...@gmail.com]
 Sent: Friday, August 03, 2012 4:38 PM
 To: Heng Xu
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Cannot pass hint to Nova Scheduler

 On 08/03/2012 09:28 AM, Heng Xu wrote:
 Another questions is, I can get all the status of a computing node in the 
 mysql nova database, and select * from compute_node, but now I am using json 
 filter, the only field I have success with now is the free_ram_db, if my 
 hint uses free_disk_gb, then I always get error, but the database is showing 
 my compute node has $free_disk_gb equal 17, so I was wondering, where to 
 find exactly what kind of json field can use in json filter, thanks in 
 advance

 The nova.scheduler.host_manager.HostState class is what is checked for
 attributes, not the ComputeNode model. So, you need to use
 $free_disk_mb, not free_disk_gb.

 Best,
 -jay
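
The JsonFilter hints discussed here use a prefix-notation query; a toy
evaluator (not nova's actual implementation) shows how such a hint is matched
against HostState-style attributes:

```python
import operator

# Map the query's operator strings to Python comparisons.
OPS = {">=": operator.ge, "<=": operator.le, "=": operator.eq,
       ">": operator.gt, "<": operator.lt}


def evaluate(query, host_state):
    """Evaluate a prefix-notation query like ['>=', '$free_ram_mb', 1024]."""
    op = query[0]
    if op == "and":
        return all(evaluate(q, host_state) for q in query[1:])
    if op == "or":
        return any(evaluate(q, host_state) for q in query[1:])
    left, right = query[1], query[2]
    # '$name' references an attribute of the host being considered.
    if isinstance(left, str) and left.startswith("$"):
        left = host_state[left[1:]]
    return OPS[op](left, right)


host = {"free_ram_mb": 2048, "free_disk_mb": 17 * 1024}
assert evaluate([">=", "$free_ram_mb", 1024], host)
assert evaluate(["and", [">=", "$free_ram_mb", 1024],
                 [">=", "$free_disk_mb", 16 * 1024]], host)
```

This also illustrates Jay's point: the names available are the host-state
attributes (free_disk_mb), not the database columns (free_disk_gb).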





Re: [Openstack] Keyring support in openstack

2012-07-31 Thread Sandy Walsh
I added similar functionality to novaclient. There is a --nocache option to 
ignore the cache and the 'import keyring' check is just to keep it from 
crashing on non-supported systems. 

I key off the following: auth url + username + service name + region to prevent 
conflicts with other users.

Hope it helps!

-S
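
The cache key described above (auth url + username + service name + region)
might be built along these lines (a sketch with made-up names, not
novaclient's actual code):

```python
import hashlib


def cache_key(auth_url, username, service_name, region):
    """Derive a stable per-user key so cached tokens don't collide."""
    raw = ":".join((auth_url, username, service_name, region))
    return hashlib.sha1(raw.encode("utf-8")).hexdigest()


cache = {}
key = cache_key("https://identity.example.com/v2.0", "alice", "compute", "DFW")
cache[key] = "cached-auth-token"

# Same coordinates -> same key; a different user gets a different key.
assert cache[cache_key("https://identity.example.com/v2.0", "alice",
                       "compute", "DFW")] == "cached-auth-token"
assert cache_key("https://identity.example.com/v2.0", "bob",
                 "compute", "DFW") not in cache
```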



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Bhuvaneswaran A [bhu...@apache.org]
Sent: Monday, July 30, 2012 5:50 PM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Keyring support in openstack

On Mon, Jul 30, 2012 at 6:31 AM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:

 You've already answered several of my questions on the ticket, but I still
 have some usability concerns.

 How does the keyring system support a single person logging in using
 multiple user accounts? For example, if I have an admin account and a
 regular user, how do I switch between them based on the operations I need
 to perform?

The password is stored in the keyring for a given user. It also supports
multiple users. The password is stored against the user specified on the
command line (--os-username) or in the environment variable OS_USERNAME.

The sample content of the keyring file ~/.openstack-keyring.cfg is as follows:
[openstack]
bhuvan = dG4wN2FjxA==
test = xYwN2FjxA==
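
Since the file is a plain INI section with base64-encoded values, it can be
read back with the standard library alone (an illustration of the format,
using a made-up entry):

```python
import base64
import configparser
import io

# An UncryptedFileKeyring-style file: the password is only base64-obscured,
# not encrypted, so filesystem permissions are the real protection.
sample = """\
[openstack]
alice = c2VjcmV0
"""

config = configparser.ConfigParser()
config.read_file(io.StringIO(sample))
password = base64.b64decode(config["openstack"]["alice"]).decode("utf-8")
print(password)  # secret
```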

 Is there a way to disable the behavior of having a password saved to a
 keyring for a particular user, without uninstalling the python-keyring
 package (and therefore disabling keyring support for all users)?

The simplest alternative is to specify the password using another mechanism:
the command line or an environment variable. It's not possible to prevent the
keyring from being used if the password is not specified via either of these
two mechanisms. The purpose of this patch is to prevent the password prompt.

 The wiki mentions the password being saved using
 keyring.backend.UncryptedFileKeyring. Does that mean the password is saved
 in cleartext? Is the file protected in some way besides filesystem
 permissions?

As mentioned in wiki page, the password is stored in base64 format.

 The mention of one backend implies that there are others. Should we give
 users a way to choose the backend, in case they have a preference?

python-keyring also supports several other backends:
  1. CryptedFileKeyring
  2. GnomeKeyring
  3. KDEKWallet
  4. OSXKeychain
  5. Win32CryptoKeyring
  6. ... and more.

The behaviour of these backends varies for each desktop. For instance,
GnomeKeyring may prompt for the keyring password once per login session.
CryptedFileKeyring may prompt for the keyring password every time, which is
as good as not using a keyring at all.

 How does the use of the keyring affect scripting using the command line
 tool? Can a script access the keyring, or does it need to use the other
 options?

Yes. A script can access the keyring using the same methods exposed by the
keyring Python module:
  -- get_password() -- to get the password for a given user.
  -- set_password() -- to set the password in the keyring.

 In one review comment you mention a few desktop apps that know how to
 manipulate the keyring to manage its contents. What about remote access via
 ssh, where a desktop environment is not available? Does the keyring library
 include tools for manipulating the file, or do we need to build our own? If
 so, what tools would be needed?

This was applicable to the older patch, wherein we relied on a
desktop/environment-specific backend. With the older patch, if a GNOME desktop
was used, the GnomeKeyring backend was used; if no desktop was used, the
CryptedFileKeyring backend was used. With the new patch, irrespective of
whether a desktop is enabled, the UncryptedFileKeyring backend is used. With
this patch, the keyring behaviour is uniform across all systems on which we
deploy OpenStack.

In summary, the primary goal of this patch is to reuse the password entered at
the prompt once and prevent the user from having to enter it again.
Ultimately, the password is not exposed in the environment or on the command
line (ps). It also facilitates automated scripts in which the openstack client
might be used. In that case, the password is not read from a prompt, but from
the keyring.
--
Regards,
Bhuvaneswaran A
www.livecipher.com



Re: [Openstack] [nova] Proposal to add Yun Mao to nova-core

2012-07-18 Thread Sandy Walsh
Ab-so-lutely! +1



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Vishvananda Ishaya [vishvana...@gmail.com]
Sent: Wednesday, July 18, 2012 8:10 PM
To: Openstack (openstack@lists.launchpad.net) (openstack@lists.launchpad.net)
Subject: [Openstack] [nova] Proposal to add Yun Mao to nova-core

Hello Everyone!

Yun has been putting a lot of effort into cleaning up our state management, and 
has been contributing a lot to reviews[1]. I think he would make a great 
addition to nova-core.


[1] https://review.openstack.org/#/dashboard/1711


Vish


Re: [Openstack] Happy Birthday OpenStack!

2012-07-17 Thread Sandy Walsh
US Only ... booo!



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
John Purrier [j...@openstack.org]
Sent: Tuesday, July 17, 2012 11:34 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] Happy Birthday OpenStack!

Take a little break today and celebrate the OpenStack project's second 
birthday! AppFog and Rackspace have teamed up to present a fun birthday 
programming contest where you can win the presents :) Go to 
http://openstack.appfog.com for details.

Full disclosure, I am part of the AppFog team and also one of the contest 
judges.

John

John Purrier
j...@openstack.com
(206) 930-0788
http://www.linkedin.com/in/johnpur


Re: [Openstack] Performance metrics

2012-06-20 Thread Sandy Walsh
Hi

That's been my focus for a while now. We're using Tach to instrument openstack, 
quantum, glance and a bunch of other components.

https://github.com/ohthree/tach

Also, there's StackTach which will consume the notifications and give you a 
real-time display of what's happening in the system (as well as giving you a db 
of events you can query vs. parsing logfiles).

https://github.com/rackspace/stacktach

And, I recently submitted a patch to novaclient which adds a --timings option 
to see how long each API request takes.

(and this review https://review.openstack.org/#/c/8672/ will add novaclient 
token caching for a slight speedup)

Hope it helps!
-Sandy



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Neelakantam Gaddam [neelugad...@gmail.com]
Sent: Wednesday, June 20, 2012 9:56 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] Performance metrics

Hi All,

I want to do performance analysis on top of an [openstack, Quantum,
openvswitch] setup. I am interested in the following metrics.

VM life cycle (creation, deletion, boot..,etc)
VM Migration
Quantum (network, port creation/deletion..,etc)

Are there any performance metric tools/scripts available in openstack ?
If not, how can I do the performance analysis of the above metrics on openstack 
quantum setup ? Please help me regarding performance metrics.

I want to know details of the biggest deployment with 
[openstack,Qauntum,openvswitch] setup interms of number of tenant networks, 
number of compute nodes, number of VMs per tenant.


Thanks in advance.

--
Thanks & Regards
Neelakantam Gaddam


Re: [Openstack] Where to add the performance increasing method in openstack

2012-05-24 Thread Sandy Walsh
Hi!

You would want to report this information to the Scheduler (probably via the 
db) so it can make more informed decisions. A new Weight Function in the 
scheduler would be the place to add it specifically.

We currently track the number of VM I/O operations being performed on each 
Compute node for this (build, resize, migration, etc) but nothing within the 
guest itself. Could be handy for certain verticals.
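
A weight function of the kind described could be sketched as follows (names
and structure are illustrative, not nova's scheduler API):

```python
def io_ops_weight(host_states):
    """Order candidate hosts best-first: fewer in-flight I/O ops wins."""
    return sorted(host_states, key=lambda h: h["num_io_ops"])


hosts = [
    {"host": "compute1", "num_io_ops": 4},
    {"host": "compute2", "num_io_ops": 1},
    {"host": "compute3", "num_io_ops": 2},
]
print([h["host"] for h in io_ops_weight(hosts)])
# ['compute2', 'compute3', 'compute1']
```

A guest-level metric (e.g. disk queue depth reported from inside the VM)
would slot in the same way, as another key in the host state.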

Hope it helps,
Sandy


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
sarath zacharia [sarathzacha...@gmail.com]
Sent: Thursday, May 24, 2012 1:48 AM
To: Nagaraju Bingi
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Where to add the performance increasing method in 
openstack

Hi,
  Yes, we are monitoring the load on the instances in each node, and the
next instance will be created on the most suitable node, i.e., the node with
the least load.

Regards ,

Sarath Zacharia


On Wed, May 23, 2012 at 6:51 PM, Nagaraju Bingi 
nagaraju_bi...@persistent.co.inmailto:nagaraju_bi...@persistent.co.in wrote:
Hi,

Do you mean that your program would monitor the OpenStack instances and check
the performance and load of the nodes according to the number of different
applications running in each instance?
Your program will tell you that which Instance is the best suitable to deploy 
your new application.

Please let me know if I understood correctly.

Regards,
Nagaraju B


From: openstack-bounces+nagaraju_bingi=persistent.co...@lists.launchpad.net
[mailto:openstack-bounces+nagaraju_bingi=persistent.co...@lists.launchpad.net]
On Behalf Of sarath zacharia
Sent: Wednesday, May 23, 2012 10:57 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] Where to add the performance increasing method in openstack


Hi,
  We created a performance-increasing program for cloud applications.
The program checks the performance and load of the nodes according to the
number of different applications running on each node of our cloud
environment, and it reports which node is most suitable for deploying the
next application in the cloud. Which service in openstack do we have to
modify to implement this algorithm?

Note: our code is currently in Java; how can we connect it to the nova
development environment?

Yours Sincerly

Sarath Zacharia






--
with regards

Sarath !


Re: [Openstack] Where to add the performance increasing method in openstack

2012-05-24 Thread Sandy Walsh
Have a look at the nova/notifications/capacity_notifier and the related 
capacity operations in nova.db.api

The notifier listens for Compute node operations (*.start/*.end) and keeps the 
ComputeNode table fresh with the latest state. The Scheduler uses the 
ComputeNode table for picking candidates.
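
The listen-for-*.start/*.end pattern can be sketched in a few lines (a toy
illustration, not the actual capacity_notifier code; the record shape is made
up):

```python
# Per-host capacity records, standing in for the ComputeNode table.
compute_nodes = {"compute1": {"pending_operations": 0}}


def on_notification(event_type, host):
    """Track in-flight operations by pairing *.start with *.end events."""
    node = compute_nodes[host]
    if event_type.endswith(".start"):
        node["pending_operations"] += 1
    elif event_type.endswith(".end"):
        node["pending_operations"] -= 1


on_notification("compute.instance.create.start", "compute1")
on_notification("compute.instance.create.end", "compute1")
print(compute_nodes["compute1"]["pending_operations"])  # 0
```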

Cheers,
Sandy




From: Diego Parrilla [diego.parrilla.santama...@gmail.com]

Sent: Thursday, May 24, 2012 2:01 PM

To: Sandy Walsh
Cc: sarath zacharia; Nagaraju Bingi; openstack@lists.launchpad.net
Subject: Re: [Openstack] Where to add the performance increasing method in 
openstack

Hi Sandy,

Where is your VM I/O based scheduler published? We are also working on smart 
schedulers.

Cheers
Diego

Sent from my iPhone; please excuse the brevity.



Re: [Openstack] nova state machine simplification and clarification

2012-05-23 Thread Sandy Walsh
Hi Yun,

I like the direction you're going with this. Unifying these three enums would 
be a great change. Honestly, it's really just combining two enums (vm & task) and 
using power state as a tool for reconciliation (actual != reality).

Might I suggest using Graphviz instead of a spreadsheet? That way we can keep 
it under version control, have funky pictures, and there are libraries for 
parsing .dot files in Python. Also, we can use the Graphviz doc to actually 
drive the state machine (via attributes on nodes/edges).
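The "drive the state machine from the .dot document" idea could look something like this. A real implementation would use a Graphviz parsing library such as pydot; this is a stdlib-only sketch, and the state and event names are illustrative, not Nova's actual enums.

```python
import re

DOT = """
digraph vm_states {
    building  -> active    [label="spawn_ok"];
    building  -> error     [label="spawn_failed"];
    active    -> rebooting [label="reboot"];
    rebooting -> active    [label="reboot_ok"];
}
"""

# Build a {(state, event): next_state} map straight from the .dot edges,
# so the diagram under version control IS the machine definition.
EDGE = re.compile(r'(\w+)\s*->\s*(\w+)\s*\[label="(\w+)"\]')
TRANSITIONS = {(src, event): dst for src, dst, event in EDGE.findall(DOT)}

def advance(state, event):
    """Apply an event; refuse any transition the .dot graph does not allow."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError('illegal transition: %s --%s-->' % (state, event))

print(advance('building', 'spawn_ok'))  # active
```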

I'd like to see more discussion on how reconciliation will be handled in the 
event of a conflict.

Cheers!
-S
 

From:  Yun Mao [yun...@gmail.com]
Sent: Thursday, May 17, 2012 10:16 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] nova state machine simplification and clarification

...


Re: [Openstack] Nova idear, thoughts wanted.

2012-05-03 Thread Sandy Walsh
Agreed. That's largely the effort of the Orchestration group.


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Joshua Harlow [harlo...@yahoo-inc.com]
Sent: Thursday, May 03, 2012 12:35 AM
To: openstack
Subject: [Openstack] Nova idear, thoughts wanted.

Hi all,

I was thinking today about how nova-compute could become more pluggable.
I was wondering if there had been any thought into how each method, say in 
the compute manager, could almost become a set of stages in a pipeline.
For example, the run_instance method is really doing the following steps:

run_instance:
  steps:
    - check_instance_not_already_created
    - check_image_size
    - notify_about_instance_usage
    - instance_update(BUILDING)
    - allocate_network
    - prep_block_device
    - spawn:
        - instance_update(BUILD)
        - driver_spawn
        - instance_update(ACTIVE)
  on_failure:
    - deallocate_network

This reminds me slightly of what devstackpy (to be renamed soon) does, but 
instead of via code, actions are partially defined via config/persona/. Now, 
say, if the above steps are plugins (similar to a paste pipeline), then it 
becomes easy for company Y to add special sauce Z before or after stage W. I 
was just wondering what people thought about this. It sort of makes nova more 
of an orchestrator that loads plugins that perform various pipelines, where in 
nova’s case those pipelines are VM related.
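The staged pipeline Josh sketches above could be expressed as plain callables with compensation hooks. This is only a sketch of the pattern; the Pipeline class and stage names are made up for illustration, not Nova code.

```python
class Pipeline:
    """Run stages in order; on failure, run compensation steps and re-raise."""

    def __init__(self, steps, on_failure=()):
        self.steps = list(steps)
        self.on_failure = list(on_failure)

    def run(self, context):
        try:
            for step in self.steps:
                step(context)
        except Exception:
            # Undo hooks fire before the original error propagates.
            for undo in self.on_failure:
                undo(context)
            raise
        return context

# Stages are plain callables, so "company Y" can splice in special sauce
# before or after any stage without touching the manager code.
trace = []
run_instance = Pipeline(
    steps=[lambda ctx: trace.append('check_image_size'),
           lambda ctx: trace.append('allocate_network'),
           lambda ctx: trace.append('spawn')],
    on_failure=[lambda ctx: trace.append('deallocate_network')])
run_instance.run({})
print(trace)  # ['check_image_size', 'allocate_network', 'spawn']
```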

Comments welcome. This might already have been thought of, but if so, that’s ok 
also :-)

-Josh


Re: [Openstack] Compute State Machine diagram ... (orchestration? docs?)

2012-05-03 Thread Sandy Walsh
Even better, here's the Open/LibreOffice Impress original. Have at it!

http://dl.dropbox.com/u/166877/PowerStates.odp

(Added a walk-thru of run_instance() as well)

Cheers,
Sandy


From: Lorin Hochstein [lo...@nimbisservices.com]
Sent: Thursday, May 03, 2012 1:08 PM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Compute State Machine diagram ... (orchestration? 
docs?)

Hi Sandy:




On May 2, 2012, at 12:10 PM, Sandy Walsh wrote:

Here's a little diagram I did up this morning for the required vm_state / 
task_state transitions for compute api operations.

http://dl.dropbox.com/u/166877/PowerStates.pdf

Might be useful to the orchestration effort (or debugging in general)


Nice!

I'd like to add those diagrams to the Nova developer documentation that lives 
at nova.openstack.org. Can you export them as two 
png files?


Take care,

Lorin
--
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com




[Openstack] Compute State Machine diagram ... (orchestration? docs?)

2012-05-02 Thread Sandy Walsh
Here's a little diagram I did up this morning for the required vm_state / 
task_state transitions for compute api operations. 

http://dl.dropbox.com/u/166877/PowerStates.pdf

Might be useful to the orchestration effort (or debugging in general)

Cheers,
Sandy



Re: [Openstack] Question on notifications

2012-04-26 Thread Sandy Walsh
Yes, correct, I thought you wanted the info as soon as the scheduler decided on 
a host. create.end will only fire when the instance has been created. 

And you're correct about the scheduler, but all schedulers will likely be a 
derivation of FilterScheduler or simply have custom filters/weights. Simple and 
Chance will turn into filters/weights soon. Depends on your installation. 

-Sandy





From: Joshua Harlow [harlo...@yahoo-inc.com]
Sent: Thursday, April 26, 2012 5:07 PM
To: Sandy Walsh; openstack
Subject: Re: [Openstack] Question on notifications

Thx.

With these messages, instead of the “compute.instance.create.end” it can’t be 
guaranteed that the instance actually got created right?

If I listen for the “compute.instance.create.end” and use the hostname (which 
is part of the publisher id) then I can know that it actually got created?

Is the “weighted_host” also dependent on which type of scheduler is used? (I 
would assume that not all schedulers do weighting?)

On 4/25/12 5:29 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:




Re: [Openstack] Question on notifications

2012-04-25 Thread Sandy Walsh
You want these events:

scheduler.run_instance.start (generated when scheduling begins)
scheduler.run_instance.scheduled (when a host is selected. one per instance)
scheduler.run_instance.end (all instances placed)

The .scheduled event will have the target hostname in it in the
weighted_host key ...

For example ...

[u'monitor.info',
 {u'_context_auth_token': None,
  u'_context_is_admin': True,
  u'_context_project_id': None,
  u'_context_quota_class': None,
  u'_context_read_deleted': u'no',
  u'_context_remote_address': None,
  u'_context_request_id': u'req-...ac',
  u'_context_roles': [u'admin', u'identity:admin'],
  u'_context_timestamp': u'2012-04-25T20:32:44.506538',
  u'_context_user_id': None,
  u'event_type': u'scheduler.run_instance.scheduled',
  u'message_id': u'2df8...fc',
  u'payload': {u'instance_id': u'7c21...960',
 u'request_spec': {u'block_device_mapping': [],
   u'image': {u'checksum': u'ee0e...cfcc',
  u'container_format': u'ovf',
  u'created_at': u'2012-02-29 23:12:16',
  u'deleted': False,
  u'deleted_at': None,
  u'disk_format': u'vhd',
  u'id': u'079...b5fb',
  u'is_public': True,
  u'min_disk': u'10',
  u'min_ram': u'256',
  u'name': u'CentOS 6.0',
  u'properties': {u'arch': u'x86-64',
 u'auto_disk_config': u'True',
 u'os_distro': u'centos',
 u'os_type': u'linux',
 u'os_version': u'6.0',
 u'rax_managed': u'false',
 u'rax_options': u'0'},
 u'size': 390243020,
 u'status': u'active',
 u'updated_at': u'2012-02-29 23:12:32'},
  u'instance_properties': {u'access_ip_v4': None,
   u'access_ip_v6': None,
   u'architecture': u'x86-64',
   u'auto_disk_config': True,
   u'availability_zone': None,
   u'config_drive': u'',
  u'config_drive_id': u'',
  u'display_description': u'testserver...9870',
  u'display_name': u'testserver...9870',
  u'ephemeral_gb': 0,
  u'image_ref': u'0790...b5fb',
  u'instance_type_id': 1,
  u'kernel_id': u'',
  u'key_data': None,
  u'key_name': None,
  u'launch_index': 0,
  u'launch_time': u'2012-04-25T20:32:10Z',
  u'locked': False,
  u'memory_mb': 256,
  u'metadata': {},
  u'os_type': u'linux',
  u'power_state': 0,
  u'progress': 0,
  u'project_id': u'5820792',
  u'ramdisk_id': u'',
  u'reservation_id': u'r-j...mm',
  u'root_device_name': None,
  u'root_gb': 10,
  u'user_data': u'',
  u'user_id': u'162201',
  u'uuid': u'7c210...ed8960',
  u'vcpus': 4,
  u'vm_mode': None,
  u'vm_state': u'building'},
   u'instance_type': {u'created_at': None,
  u'deleted': False,
  u'deleted_at': None,
  u'ephemeral_gb': 0,
  u'extra_specs': {},
  u'flavorid': u'1',
  u'id': 1,
  u'memory_mb': 256,
  u'name': u'256MB instance',
  u'root_gb': 10,
  u'rxtx_factor': 1.0,
  u'swap': 512,
  u'updated_at': None,
  u'vcpu_weight': 10,
  u'vcpus': 4},
   u'num_instances': 1,
   u'security_group': [u'default']},

  u'weighted_host': {u'host': u'compute-xx-yy-zz-20',
 u'weight': 4945.0}},

  u'priority': u'INFO',
  u'publisher_id': 
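Given a decoded `.scheduled` notification like the one above, pulling the target host out is a small dict walk. A sketch, with key names taken from the payload above and the payload values abbreviated:

```python
def scheduled_host(notification):
    """Return the target host of a scheduler.run_instance.scheduled
    event, or None for any other event type."""
    if notification.get('event_type') != 'scheduler.run_instance.scheduled':
        return None
    return notification['payload']['weighted_host']['host']

# Abbreviated version of the notification body shown above.
event = {'event_type': 'scheduler.run_instance.scheduled',
         'payload': {'instance_id': '7c21...960',
                     'weighted_host': {'host': 'compute-xx-yy-zz-20',
                                       'weight': 4945.0}}}
print(scheduled_host(event))  # compute-xx-yy-zz-20
```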

Re: [Openstack] Monitoring / Billing Architecture proposed

2012-04-24 Thread Sandy Walsh
I think we have support for this currently in some fashion, Dragon?

-S



On 04/24/2012 12:55 AM, Loic Dachary wrote:
 Metering needs to account for the volume of data sent to external network 
 destinations  ( i.e. n4 in http://wiki.openstack.org/EfficientMetering ) or 
 the disk I/O etc. This kind of resource is billable.
 
 The information described at http://wiki.openstack.org/SystemUsageData will 
 be used by metering but other data sources need to be harvested as well.



Re: [Openstack] Using Nova APIs from Javascript: possible?

2012-04-24 Thread Sandy Walsh
Due to the redirect nature of the auth system we may need JSONP support
for this to work.





Re: [Openstack] Using Nova APIs from Javascript: possible?

2012-04-24 Thread Sandy Walsh


On 04/24/2012 11:19 AM, Nick Lothian wrote:
 JSONP is great, but won't work with POST requests.

Hmm, good point.

 I don't quite understand what "due to the redirect nature of the auth
 system" means, though. 
 
 If I use a custom WebKit browser & allow cross-domain XMLHttpRequests it
 works fine - I do a POST to /v2.0/tokens, get the token and then use
 that. What am I missing?

The Auth system will give you a token and then a new management url
where the actual commands are issued (the real Nova API endpoint). These
are often two different systems (domains), so cross-site requests are
mandatory.
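The two-step flow Sandy describes, sketched against a Keystone-v2-style token response. The field names follow the v2.0 token format, but the values and endpoint URL are made up for illustration:

```python
def management_url(token_response, service_type='compute'):
    """Step 2: the real API endpoint comes from the service catalog in the
    auth response, and is frequently on a different domain than the auth
    server itself -- which is exactly why cross-site requests are needed."""
    access = token_response['access']
    token = access['token']['id']
    for service in access['serviceCatalog']:
        if service['type'] == service_type:
            return token, service['endpoints'][0]['publicURL']
    raise KeyError(service_type)

# Shape of a response to POST /v2.0/tokens (values are illustrative).
response = {'access': {
    'token': {'id': 'abc123'},
    'serviceCatalog': [
        {'type': 'compute',
         'endpoints': [{'publicURL': 'https://nova.example.com/v2/tenant'}]}]}}
print(management_url(response))
```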

-S



 Nick
 
 On Tue, Apr 24, 2012 at 8:57 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 
 Due to the redirect nature of the auth system we may need JSONP support
 for this to work.
 
 
 


Re: [Openstack] Monitoring / Billing Architecture proposed

2012-04-23 Thread Sandy Walsh
Flavor information is copied to the Instance table on creation so the
Flavors can change and still be tracked in the Instance. It may just
need to be sent in the notification payload.
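The copy-on-creation behaviour Sandy mentions is the standard fix for the "flavors change over time" billing problem discussed below. A minimal sketch of the idea, with an invented in-memory flavor table rather than the real Nova schema:

```python
import copy

# Stand-in for the flavor/instance-type table (values illustrative).
FLAVORS = {'m1.tiny': {'memory_mb': 512, 'vcpus': 1, 'root_gb': 0}}

def create_instance(name, flavor_id):
    """Snapshot the flavor at creation time, so later edits to the flavor
    table cannot change what an already-running instance is billed as."""
    return {'name': name,
            'instance_type': copy.deepcopy(FLAVORS[flavor_id])}

instance = create_instance('demo', 'm1.tiny')
FLAVORS['m1.tiny']['memory_mb'] = 1024       # flavor edited after creation
print(instance['instance_type']['memory_mb'])  # 512
```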

The current events in the system are documented here:
http://wiki.openstack.org/SystemUsageData

-Sandy


On 04/23/2012 02:50 PM, Brian Schott wrote:
 So, we could build on this. No reason to reinvent, but we might want to
 expand the number of events.  I'm concerned about things like what
 happens when flavors change over time.  Maybe the answer is, always
 append to the flavor/instance-type table.  The code I remember and the
 admin interface that Ken wrote allowed you to modify flavors.  That
 would break billing unless you also track flavor modifications.
 
 -
 Brian Schott, CTO
 Nimbis Services, Inc.
 brian.sch...@nimbisservices.com
 ph: 443-274-6064  fx: 443-274-6060
 
 
 
 On Apr 23, 2012, at 1:40 PM, Luis Gervaso wrote:
 
 I have been looking at : http://wiki.openstack.org/SystemUsageData

 On Mon, Apr 23, 2012 at 7:35 PM, Brian Schott
 brian.sch...@nimbisservices.com wrote:

 Is there a document somewhere on what events the services emit?  

 -
 Brian Schott, CTO
 Nimbis Services, Inc.
 brian.sch...@nimbisservices.com
 ph: 443-274-6064  fx: 443-274-6060



 On Apr 23, 2012, at 12:39 PM, Monsyne Dragon wrote:

 This already exists in trunk.  The Notification system was
 designed specifically to feed billing and monitoring systems. 

 Basically, we don't want Nova/Glance/etc to be in the business of
 trying to determine billing logic, since it is different for
 pretty much everyone, so we just emit notifications to a queue
 and the interested parties pull what they want, and aggregate according
 to their own rules. 

 On Apr 22, 2012, at 1:50 PM, Luis Gervaso wrote:

 Hi,

 I want to share the architecture i am developing in order to
 perform the monitorig / billing OpenStack support:

 1. AMQP Client which listen to RabbitMQ / QPid (this should be
 interchangeable) (Own Stuff or ServiceMix / Camel)
  
 2. Events should be stored on a NoSQL document oriented database
 (I think mongodb is perfect, since we can query in a super easy
 fashion)

 We have an existing system called Yagi
 (https://github.com/Cerberus98/yagi/) that listens to the
 notification queues and persists events to a Redis database.  It
 then provides feeds as ATOM formatted documents that a billing
 system can pull to aggregate data. It also can support PubSub
 notification of clients thru the PubSubHubbub protocol, and push
 events to a long-term archiving store thru the AtomPub protocol. 

 That said, the notification system outputs its events as JSON, so
 it should be easy to pipe into a json document-oriented db if
 that's what you need. (we only use ATOM because we have a
 atom-based archiving/search/aggregation engine (it's open
 source: http://atomhopper.org/ ) our in-house systems already
 plug into. )




 3a. The monitoring system can pull/push MongoDB

 3b. The billing system can pull to create invoices 

 4. A mediation EIP should be necessary to integrate a
 billing/monitoring product. (ServiceMix / Camel)

 This is to receive your feedback. So please, critics are welcome!

 Cheers!

 -- 
 ---
 Luis Alberto Gervaso Martin
 Woorea Solutions, S.L
 CEO & CTO
 mobile: (+34) 627983344
 luis@woorea.es



 --
 Monsyne M. Dragon
 OpenStack/Nova 
 cell 210-441-0965
 work x 5014190





 -- 
 ---
 Luis Alberto Gervaso Martin
 Woorea Solutions, S.L
 CEO & CTO
 mobile: (+34) 627983344
 luis@woorea.es

 
 
 

Re: [Openstack] Monitoring / Billing Architecture proposed

2012-04-23 Thread Sandy Walsh
StackTach is a Django-based web interface for capturing, displaying and
navigating OpenStack notifications

https://github.com/rackspace/stacktach

-S


On 04/23/2012 04:26 PM, Luis Gervaso wrote:
 Joshua,
 
 I have performed a create instance operation and here is an example data
 obtained from stable/essex rabbitmq nova catch all exchange.
 
 [*] Waiting for messages. To exit press CTRL+C
 
  [x] Received '{_context_roles: [admin], _msg_id:
 a2d13735baad4613b89c6132e0fa8302, _context_read_deleted: no,
 _context_request_id: req-d7ffbe78-7a9c-4d20-9ac5-3e56951526fe,
 args: {instance_id: 6, instance_uuid:
 e3ad17e6-dd59-4b67-a7d0-e3812f96c2d7, host: ubuntu, project_id:
 c290118b14564257be26a2cb901721a2, rxtx_factor: 1.0},
 _context_auth_token: null, _context_is_admin: true,
 _context_project_id: null, _context_timestamp:
 2012-03-24T01:36:48.774891, _context_user_id: null, method:
 get_instance_nw_info, _context_remote_address: null}'
 
  [x] Received '{_context_roles: [admin], _msg_id:
 a1cb1cf61e5441c2a772b29d3cd54202, _context_read_deleted: no,
 _context_request_id: req-db34ba32-8bd9-4cd5-b7b5-43705a9e258e,
 args: {instance_id: 6, instance_uuid:
 e3ad17e6-dd59-4b67-a7d0-e3812f96c2d7, host: ubuntu, project_id:
 c290118b14564257be26a2cb901721a2, rxtx_factor: 1.0},
 _context_auth_token: null, _context_is_admin: true,
 _context_project_id: null, _context_timestamp:
 2012-03-24T01:37:50.463586, _context_user_id: null, method:
 get_instance_nw_info, _context_remote_address: null}'
 
  [x] Received '{_context_roles: [admin], _msg_id:
 ebb0b1c340de4024a22eafec9d0a2d66, _context_read_deleted: no,
 _context_request_id: req-ddb51b2b-a29f-4aad-909d-3f7f79f053c4,
 args: {instance_id: 6, instance_uuid:
 e3ad17e6-dd59-4b67-a7d0-e3812f96c2d7, host: ubuntu, project_id:
 c290118b14564257be26a2cb901721a2, rxtx_factor: 1.0},
 _context_auth_token: null, _context_is_admin: true,
 _context_project_id: null, _context_timestamp:
 2012-03-24T01:38:59.217333, _context_user_id: null, method:
 get_instance_nw_info, _context_remote_address: null}'
 
  [x] Received '{_context_roles: [Member], _msg_id:
 729535c00d224414a98286e9ce3475a9, _context_read_deleted: no,
 _context_request_id: req-b056a8cc-3542-41a9-9e58-8fb592086264,
 _context_auth_token: deb477655fba448e85199f7e559da77a,
 _context_is_admin: false, _context_project_id:
 df3827f76f714b1e8f31675caf84ae9d, _context_timestamp:
 2012-03-24T01:39:19.813393, _context_user_id:
 abe21eb7e6884547810f0a43c216e6a6, method:
 get_floating_ips_by_project, _context_remote_address: 192.168.1.41}'
 
  [x] Received '{_context_roles: [Member, admin],
 _context_request_id: req-45e6c2af-52c7-4de3-af6c-6b2f7520cfd5,
 _context_read_deleted: no, args: {request_spec:
 {num_instances: 1, block_device_mapping: [], image: {status:
 active, name: cirros-0.3.0-x86_64-uec, deleted: false,
 container_format: ami, created_at: 2012-03-20 17:37:08,
 disk_format: ami, updated_at: 2012-03-20 17:37:08, properties:
 {kernel_id: 6b700d25-3293-420a-82e4-8247d4b0da2a, ramdisk_id:
 22b10c35-c868-4470-84ef-54ae9f17a977}, min_ram: 0, checksum:
 2f81976cae15c16ef0010c51e3a6c163, min_disk: 0, is_public: true,
 deleted_at: null, id: f7d4bea2-2aed-4bf3-a5cb-db6a34c4a525,
 size: 25165824}, instance_type: {root_gb: 0, name: m1.tiny,
 deleted: false, created_at: null, ephemeral_gb: 0, updated_at:
 null, memory_mb: 512, vcpus: 1, flavorid: 1, swap: 0,
 rxtx_factor: 1.0, extra_specs: {}, deleted_at: null,
 vcpu_weight: null, id: 2}, instance_properties: {vm_state:
 building, ephemeral_gb: 0, access_ip_v6: null, access_ip_v4:
 null, kernel_id: 6b700d25-3293-420a-82e4-8247d4b0da2a, key_name:
 testssh, ramdisk_id: 22b10c35-c868-4470-84ef-54ae9f17a977,
 instance_type_id: 2, user_data: dGhpcyBpcyBteSB1c2VyIGRhdGE=,
 vm_mode: null, display_name: eureka, config_drive_id: ,
 reservation_id: r-xtzjx50j, key_data: ssh-rsa
 B3NzaC1yc2EDAQABgQDJ31tdayh1xnAY+JO/ZVdg5L83CsIU7qaOmFubdH7zlg2jjS9JmkPNANj99zx+UHg5F5JKGMef9M8VP/V89D5g0oIjIJtBdFpKOScBo3yJ1vteW5ItImH8h9TldymHf+CWNVY1oNNqzXqAb41xwUUDNvgeXHRZNnE6tmwZO0oC1Q==
 stack@ubuntu\n, root_gb: 0, user_id:
 abe21eb7e6884547810f0a43c216e6a6, uuid:
 40b5a1c5-bd4f-40ee-ae0a-73e0bc927431, root_device_name: null,
 availability_zone: null, launch_time: 2012-03-24T01:39:52Z,
 metadata: {}, display_description: eureka, memory_mb: 512,
 launch_index: 0, vcpus: 1, locked: false, image_ref:
 f7d4bea2-2aed-4bf3-a5cb-db6a34c4a525, architecture: null,
 power_state: 0, auto_disk_config: null, progress: 0, os_type:
 null, project_id: df3827f76f714b1e8f31675caf84ae9d, config_drive:
 }, security_group: [default]}, is_first_time: true,
 filter_properties: {scheduler_hints: {}}, topic: compute,
 admin_password: SKohh79r956J, injected_files: [],
 requested_networks: null}, _context_auth_token:
 deb477655fba448e85199f7e559da77a, _context_is_admin: false,
 _context_project_id: df3827f76f714b1e8f31675caf84ae9d,
 _context_timestamp: 2012-03-24T01:39:52.089383, _context_user_id:
 

Re: [Openstack] New Gerrit version (and server)

2012-04-13 Thread Sandy Walsh
Sounds awesome! Looking forward to Draft Changes, much needed.

-S



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
James E. Blair [cor...@inaugust.com]
Sent: Thursday, April 12, 2012 8:23 PM
To: OpenStack Mailing List
Subject: [Openstack] New Gerrit version (and server)

Hi,

We've just upgraded Gerrit to version 2.3.  There are a lot of changes
behind the scenes that we've been looking forward to (like being able to
store data in innodb rather than myisam tables for extra data
longevity).  And there are a few visible changes that may be of interest
to OpenStack developers.

One new addition in 2.3 is draft changes.  The idea behind a draft
change in Gerrit is that it is a change that is not ready for merging,
or even general code review, but you would like to share it with some
people to get early comments.  If you upload a change as a draft, by
default, no one else can see it.  You must explicitly add each person
you would like to share it with as a reviewer.  Reviewers you add can
leave comments, but can not vote at this stage.  You can continue to
upload new patchsets to the change as it evolves, and once it is ready
for general review, you can click the Publish button.  It will then
become a normal change in Gerrit that everyone can see, including the
earlier reviews from the draft stage.  This is a one way transition;
once a draft is published, it can't be made a draft again.

If you're using git-review from source or the latest version from PyPI
(version 1.16, released today), you can easily upload a draft change by
adding the -D option (eg, git review -D).  Earlier versions of
git-review also have the -D option, but the git ref that Gerrit uses
to indicate a change should be a draft was changed between the 2.3
release candidate and the final release; so if using -D results in an
error, you may need to upgrade.

You may notice some changes to the diff view.  Notably, the header which
contained all of the possible viewing options has been split up into
several parts; you can switch between them by selecting options that
show up under the menu at the top.  I recommend setting "Retain Header
On File Switch" under the "Preferences" section, as it is a nicer
experience when changing files.

Another notable new feature is the ability to add a group to the list of
reviewers for a change.  Just type in the name of the group and click
"Add Reviewer" and all of the individuals in the group will be added to
the list of reviewers (and will see the change on their review
requests list).

Finally, we've modified some of our local OpenStack style changes so
that it is easier for us to track upstream changes in layout.  It should
mean a little more consistency throughout the interface, though we
weren't able to keep the alternating row colors on the main table
without a disproportionate amount of effort.  Do note that you can click
on a line in a table, and it will be highlighted to improve legibility.

We've tried to give this as much testing as possible before moving it
into production.  If you encounter any issues, please let us know on IRC
(mtaylor, jeblair, LinuxJedi), via email at
openstack-ci-adm...@lists.launchpad.net, or you can file a bug at:

  https://bugs.launchpad.net/openstack-ci/

Thanks,

Jim



Re: [Openstack] [Nova] removing nova-direct-api

2012-04-09 Thread Sandy Walsh
/me wears black armband.

Would love to see DirectApi v2 be the de facto OS API implementation.
The spec could be the same, it's just a (better) implementation detail.

-S

On 04/09/2012 03:58 PM, Vishvananda Ishaya wrote:
 +1 to removal.  I just tested to see if it still works, and due to our
 policy checking and loading objects before sending them into
 compute.api, it no longer functions. Probably wouldn't be too hard to
 fix it, but clearly no one is using it so lets axe it.
 
 Vish
 
 On Apr 9, 2012, at 11:19 AM, Joe Gordon wrote:
 
 Hi All,

 The other day I noticed that in addition to EC2 and OpenStack APIs
 there is a third API type: nova-direct-api.  As best I can tell,
 this was used early on for development/testing before the EC2 and
 OpenStack APIs were mature.

 My question is, since most of the code hasn't been touched in over a
 year and we have two mature documented APIs, is anyone using this?  If
 not, I propose to remove it.


 Proposed Change:  https://review.openstack.org/6375


 best,
 Joe
 
 
 



Re: [Openstack] [Nova-orchestration] Preliminary analysis of SpiffWorkflow

2012-04-06 Thread Sandy Walsh
That's great Ziad ... nice work!

Having written one of these libraries before I know the challenges are mostly 
conceptual, but not terribly technical (fortunately).

Generally the separation between WorkflowSpec and Workflow or TaskSpec and Task 
is the same as Class and Instance. You define the spec (class) and apply it to 
many running workflows (instances). 
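The class/instance split described above, in plain Python. This mirrors the concept only, not SpiffWorkflow's actual API; the class and task names are illustrative.

```python
class TaskSpec:
    """The 'class' side: a static description shared by every run."""
    def __init__(self, name):
        self.name = name

class WorkflowSpec:
    """Ordered collection of task specs; defined once."""
    def __init__(self, task_specs):
        self.task_specs = task_specs

class Workflow:
    """The 'instance' side: per-run state layered over a shared spec."""
    def __init__(self, spec):
        self.spec = spec
        self.completed = []

    def complete_next(self):
        task = self.spec.task_specs[len(self.completed)]
        self.completed.append(task.name)

spec = WorkflowSpec([TaskSpec('allocate'), TaskSpec('spawn')])
run1, run2 = Workflow(spec), Workflow(spec)  # many runs, one spec
run1.complete_next()
print(run1.completed, run2.completed)  # ['allocate'] []
```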

Side note: this can get you into the problem of versioning too. You define a 
workflow, spawn 10 instances of it and then change the spec ... do the existing 
instances change or continue running the old spec? Fun stuff.

Look forward to seeing that larger project!

-S


From: Ziad Sawalha
Sent: Friday, April 06, 2012 4:53 PM
To: Sriram Subramanian; Dugger, Donald D; Sandy Walsh
Cc: nova-orchestrat...@lists.launchpad.net; openstack@lists.launchpad.net
Subject: Re: [Openstack] [Nova-orchestration] Preliminary analysis of 
SpiffWorkflow

Here's a link to my analysis so far:
http://wiki.openstack.org/NovaOrchestration/WorkflowEngines/SpiffWorkflow

It looks good, but I won't pass a final verdict until I have completed a
working project in it. I have one in progress and will let ya know when
it's done.

Z

Re: [Openstack] [Nova-orchestration] Preliminary analysis of SpiffWorkflow

2012-04-06 Thread Sandy Walsh
From what I've seen, Spiff doesn't specify ... the containing application has
to deal with persistence.

-S


From: Yun Mao [yun...@gmail.com]
Sent: Friday, April 06, 2012 5:38 PM
To: Ziad Sawalha
Cc: Sriram Subramanian; Dugger, Donald D; Sandy Walsh; 
nova-orchestrat...@lists.launchpad.net; openstack@lists.launchpad.net
Subject: Re: [Nova-orchestration] [Openstack] Preliminary analysis of 
SpiffWorkflow

Hi Ziad,

thanks for the great work. Do we know how the states are persisted in
Spiff? Thanks,

Yun

On Fri, Apr 6, 2012 at 3:53 PM, Ziad Sawalha ziad.sawa...@rackspace.com wrote:
 Here's a link to my analysis so far:
 http://wiki.openstack.org/NovaOrchestration/WorkflowEngines/SpiffWorkflow

 It looks good, but I won't pass a final verdict until I have completed a
 working project in it. I have one in progress and will let ya know when
 it's done.

 Z

 On 4/3/12 4:56 PM, Ziad Sawalha ziad.sawa...@rackspace.com wrote:

Just confirming what Sandy said; I am playing around with SpiffWorkflow.
I'll post my findings when I'm done on the wiki under the Nova
Orchestration page.

So far I've found some of the documentation lacking and concepts
confusing, which has resulted in a steep learning curve and made it
difficult to integrate into something like RabbitMQ (for long-running
tasks). But the thinking behind it (http://www.workflowpatterns.com/)
seems sound and I will continue to investigate it.

Z

On 3/29/12 5:56 PM, Sriram Subramanian sri...@computenext.com wrote:

Guys,

Sorry for missing the meeting today. Thanks for the detailed summary/
 logs. I am cool with the action item: #action sriram to update the
 Orchestration session proposal. This is my understanding, from the logs,
 of the things to be updated in the blueprint:

1) orchestration service provides state management with client side APIs
2) add API design and state storage as topics for the orchestration
session at the Summit
3) add implementation plan as session topic

Please correct me if I missed anything.

 Just to bring everyone to the same page, here are the new links:

Folsom BluePrint:
https://blueprints.launchpad.net/nova/+spec/nova-orchestration
Folsom Session proposal:
https://blueprints.launchpad.net/nova/+spec/nova-orchestration
Wiki: http://wiki.openstack.org/NovaOrchestration (I will clean this up
tonight)

Maoy: Sandy's pointers are in this email thread (which n0ano meant to fwd
you)
Mikeyp: Moving the conversation to the main mailing list per your
suggestion

Thanks,
_Sriram

-Original Message-
From: Dugger, Donald D [mailto:donald.d.dug...@intel.com]
Sent: Thursday, March 29, 2012 12:52 PM
To: Sriram Subramanian; Sandy Walsh
Cc: Michael Pittaro (mik...@lahondaresearch.org)
Subject: RE: [Nova-orchestration] Thoughts on Orchestration (was Re:
Documentation on Caching)

NP, I'll be on the IRC for whoever wants to talk.  Maybe we can try and
do the sync you want via email, that's always been my favorite way to
communicate (it allows you to focus thoughts and deals with timezones
nicely).

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786


-Original Message-
From: Sriram Subramanian [mailto:sri...@computenext.com]
Sent: Thursday, March 29, 2012 1:45 PM
To: Sriram Subramanian; Sandy Walsh
Cc: Dugger, Donald D; Michael Pittaro (mik...@lahondaresearch.org)
Subject: RE: [Nova-orchestration] Thoughts on Orchestration (was Re:
Documentation on Caching)

 I will most likely be running a little late from my 12 - 1 meeting, which
 doesn't seem to be ending anytime soon :(

 I haven't gotten a chance to submit a branch yet. Hopefully by this
 weekend (at least a bare-bones version).

If you are available for offline sync later this week - I would
appreciate that. Apologies for possibly missing the sync.

Thanks,
-Sriram

-Original Message-
From:
nova-orchestration-bounces+sriram=computenext@lists.launchpad.net
 [mailto:nova-orchestration-bounces+sriram=computenext.com@lists.launchpad.net]
 On Behalf Of Sriram Subramanian
Sent: Wednesday, March 28, 2012 2:44 PM
To: Sandy Walsh
Cc: nova-orchestrat...@lists.launchpad.net
Subject: Re: [Nova-orchestration] Thoughts on Orchestration (was Re:
Documentation on Caching)

Thanks for the pointers Sandy. I will try to spend some cycles on the
branch per your suggestion; we will also discuss more tomorrow.

 Yes, the BP is not far off from last summit's, and I would like to flesh out
 more for this summit.

Thanks,
-Sriram

-Original Message-
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
Sent: Wednesday, March 28, 2012 11:31 AM
To: Sriram Subramanian
Cc: Michael Pittaro; Dugger, Donald D (donald.d.dug...@intel.com);
nova-orchestrat...@lists.launchpad.net
Subject: Thoughts on Orchestration (was Re: Documentation on Caching)

Ah, gotcha.

I don't think the caching stuff will really affect the Orchestration
layer all that much. Certainly the Cells stuff that comstud is working on
should be considered.

The BP isn't really too far off

Re: [Openstack] [Nova-orchestration] Thoughts on Orchestration (was Re: Documentation on Caching)

2012-04-03 Thread Sandy Walsh
Can't wait to hear about it Ziad!

Very cool!

-S

From: Ziad Sawalha
Sent: Tuesday, April 03, 2012 6:56 PM
To: Sriram Subramanian; Dugger, Donald D; Sandy Walsh
Cc: nova-orchestrat...@lists.launchpad.net; openstack@lists.launchpad.net
Subject: Re: [Openstack] [Nova-orchestration] Thoughts on Orchestration (was 
Re: Documentation on Caching)

Just confirming what Sandy said; I am playing around with SpiffWorkflow.
I'll post my findings when I'm done on the wiki under the Nova
Orchestration page.

So far I've found some of the documentation lacking and concepts
confusing, which has resulted in a steep learning curve and made it
difficult to integrate into something like RabbitMQ (for long-running
tasks). But the thinking behind it (http://www.workflowpatterns.com/)
seems sound and I will continue to investigate it.

Z

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Running code on instance start/terminate

2012-03-28 Thread Sandy Walsh
Look at
https://github.com/rackspace/stacktach/blob/master/worker.py
(ignore the _process() call, just look at how the queue listeners are
set up)

my worker_conf.py looks something like
DEPLOYMENTS = [
    dict(
        tenant_id=1,
        url='http://stacktach.example.com',
        rabbit_host='10.0.0.1',
        rabbit_port=5672,
        rabbit_userid='nova-staging',
        rabbit_password='password',
        rabbit_virtual_host='staging'),
    dict(
        tenant_id=2,
        url='http://stacktach.example.com',
        rabbit_host='10.99.0.1',
        rabbit_port=5672,
        rabbit_userid='nova',
        rabbit_password='password',
        rabbit_virtual_host='production'),
]


Or the queue listeners in
https://github.com/Cerberus98/yagi

Hope it helps!
-Sandy


On 03/28/2012 05:27 PM, Leander Bessa wrote:
From what I have figured out so far, the exchange is "nova" and
 the routing key in your case is "notifications.info".
 
 On Wed, Mar 28, 2012 at 9:05 PM, Rogério Vinhal Nunes
  roge...@dcc.ufmg.br wrote:
 
 I'm trying to find out information to make an application that
 consumes the compute info, but I'm having some trouble. Is there a
 better documentation I could follow and the precise details of the
 queue/exchange/routing_key needed to take information at the end of
 each run-instance and terminate-instance?
 
 Also, I'm using amqplib and I saw that Openstack uses an internal
 library named Carrot that uses amqplib. Would it be better to use it
 instead?
 
  On 27 March 2012 12:42, Sandy Walsh sandy.wa...@rackspace.com wrote:
 
 I believe '.exists' is sent via a periodic update in
 compute.manager.
 
 
 
 On 03/27/2012 12:08 PM, Leander Bessa wrote:
  Hello,
 
  I've been following this topic and I've been trying to receive the
  notifications directly from RabbitMQ with python+pika. So far
  I've managed to receive the start/terminate events and
 everything in
  between. What i am unable to find though is the topic regarding
  _compute.instance.exists_
  from http://wiki.openstack.org/SystemUsageData. Does this even
 exist, or
  is there some extra configuration required with nova?
 
  Regards,
 
  Leander
 
   On Mon, Mar 26, 2012 at 4:54 PM, Russell Bryant rbry...@redhat.com wrote:
 
 
  On 03/26/2012 11:48 AM, Russell Bryant wrote:
   On 03/26/2012 10:15 AM, Rogério Vinhal Nunes wrote:
   Hello,
  
   I'm developing a application to work along with
 openstack. My
   application needs to keep track of all instances being
 started
   or terminated such as feeding it information about the
 location,
   status and other information about launched and terminated
   instances. The current version makes timed queries to
 OpenStack
    database, but this is proving to be a little time-consuming and
   inefficient, so I would like to add a portion of code
 to make
   OpenStack actively feed my application information
 whenever an
   instance changes its status or location.
  
   What is the least intrusive way to do that? It would be
 very nice
   if OpenStack provided a way to run code on these situations
   without actually changing any code, such as defining a
 directory
   of scripts to run in every instance status change.
  
   Check out the notifications system:
  
   http://wiki.openstack.org/NotificationSystem
  
 
  That wasn't the page I thought it was ... I meant:
 
 http://wiki.openstack.org/SystemUsageData
 
  You can consume these events via AMQP if you configure
 nova to use the
  rabbit notifier.
 
   --
   Russell Bryant
 

Re: [Openstack] Caching strategies in Nova ...

2012-03-23 Thread Sandy Walsh
Thanks ... that's good feedback and we were discussing cache
invalidation issues today.

Any tips or suggestions?

-S



On 03/22/2012 09:28 PM, Joshua Harlow wrote:
 Just from experience.
 
 They do a great job. But the killer thing about caching is how u do the
 cache invalidation.
 
 Just caching stuff is easy-peasy, making sure it is invalidated on all
 servers in all conditions, not so easy...
 
 On 3/22/12 4:26 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 
 We're doing tests to find out where the bottlenecks are, caching is the
 most obvious solution, but there may be others. Tools like memcache do a
 really good job of sharing memory across servers so we don't have to
 reinvent the wheel or hit the db at all.
 
 In addition to looking into caching technologies/approaches we're gluing
 together some tools for finding those bottlenecks. Our first step will
 be finding them, then squashing them ... however.
 
 -S
 
 On 03/22/2012 06:25 PM, Mark Washenberger wrote:
  What problems are caching strategies supposed to solve?
 
  On the nova compute side, it seems like streamlining db access and
  api-view tables would solve any performance problems caching would
  address, while keeping the stale data management problem small.
 
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Caching strategies in Nova ...

2012-03-23 Thread Sandy Walsh
Yup, makes sense. Thanks for the feedback. I agree that the external
caches are troublesome and we'll likely be focusing on the internal
ones. Whether that manifests itself as a memcache-like implementation or
another db view is unknown.

The other thing about in-process caching I like is the ability to have
it in a common (nova-common?) library where we can easily compute
hit/miss ratios and adjust accordingly.
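Something like this toy sketch is what I mean by a shared helper that tracks hit/miss ratios (names are illustrative, not an actual nova-common API):

```python
class CountingCache(object):
    """In-process dict-backed cache that tracks hit/miss counts.

    A toy sketch of the shared helper described above -- not a real
    OpenStack API.
    """
    def __init__(self):
        self._data = {}
        self.hits = 0
        self.misses = 0

    def get_or_compute(self, key, compute_fn):
        # Count a hit when the key is present; otherwise compute and store.
        if key in self._data:
            self.hits += 1
        else:
            self.misses += 1
            self._data[key] = compute_fn()
        return self._data[key]

    @property
    def hit_ratio(self):
        total = self.hits + self.misses
        return float(self.hits) / total if total else 0.0


cache = CountingCache()
cache.get_or_compute('flavors', lambda: ['m1.tiny', 'm1.small'])  # miss
cache.get_or_compute('flavors', lambda: ['m1.tiny', 'm1.small'])  # hit
print(cache.hit_ratio)  # 0.5
```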

-S


On 03/23/2012 12:02 AM, Mark Washenberger wrote:
 This is precisely my concern.
 
 It must be brought up that with Rackspace Cloud Servers, nearly
 all client code routinely submits requests with a query parameter
 (cache-busting=some random string) just to get around problems with
 cache invalidation. And woe to the client that does not.
 
 I get the feeling that once trust like this is lost, a project has
 a hard time regaining it. I'm not saying that we can avoid
 inconsistency entirely. Rather, I believe we will have to embrace
 some eventual-consistency models to enable the performance and
 scale we will ultimately attain. But I just get the feeling that
 generic caches are really only appropriate for write-once or at
 least write-rarely data. So personally I would rule out external
 caches entirely and try to be very judicious in selecting internal
 caches as well.
 
 Joshua Harlow harlo...@yahoo-inc.com said:
 
 Just from experience.

 They do a great job. But the killer thing about caching is how u do the cache
 invalidation.

 Just caching stuff is easy-peasy, making sure it is invalidated on all 
 servers in
 all conditions, not so easy...

 On 3/22/12 4:26 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:

 We're doing tests to find out where the bottlenecks are, caching is the
 most obvious solution, but there may be others. Tools like memcache do a
 really good job of sharing memory across servers so we don't have to
 reinvent the wheel or hit the db at all.

 In addition to looking into caching technologies/approaches we're gluing
 together some tools for finding those bottlenecks. Our first step will
 be finding them, then squashing them ... however.

 -S

 On 03/22/2012 06:25 PM, Mark Washenberger wrote:
 What problems are caching strategies supposed to solve?

 On the nova compute side, it seems like streamlining db access and
 api-view tables would solve any performance problems caching would
 address, while keeping the stale data management problem small.



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Caching strategies in Nova ...

2012-03-23 Thread Sandy Walsh
(resent to list as I realized I just did a Reply)

Cool! This is great stuff. Look forward to seeing the branch.

I started working on a similar tool that takes the data collected from
Tach and fetches the data from Graphite to look at the performance
issues (no changes to nova trunk required, since Tach is awesome).

It's a shell of an idea yet, but the basics work:
https://github.com/ohthree/novaprof

But if there is something already existing, I'm happy to kill it off.

I don't doubt for a second the db is the culprit for many of our woes.

The thing I like about internal caching using established tools is that
it works for db issues too without having to resort to custom tables.
SQL query optimization, I'm sure, will go equally far.

Thanks again for the great feedback ... keep it comin'!

-S


On 03/22/2012 11:53 PM, Mark Washenberger wrote:
 Working on this independently, I created a branch with some simple
 performance logging around the nova-api, and individually around 
 glance, nova.db, and nova.rpc calls. (Sorry, I only have a local
 copy and it's on a different computer right now, and it probably needs
 a rebase. I will rebase and publish it on GitHub tomorrow.)
 
 With this logging, I could get some simple profiling that I found
 very useful. Here is a GH project with the analysis code as well
 as some nova-api logs I was using as input. 
 
 https://github.com/markwash/nova-perflog
 
 With these tools, you can get a wall-time profile for individual
 requests. For example, looking at one server create request (and
 you can run this directly from the checkout as the logs are saved
 there):
 
 markw@poledra:perflogs$ cat nova-api.vanilla.1.5.10.log | \
     python profile-request.py req-3cc0fe84-e736-4441-a8d6-ef605558f37f
 key                                        count  avg
 nova.api.openstack.wsgi.POST   1  0.657
 nova.db.api.instance_update1  0.191
 nova.image.show1  0.179
 nova.db.api.instance_add_security_group1  0.082
 nova.rpc.cast  1  0.059
 nova.db.api.instance_get_all_by_filters1  0.034
 nova.db.api.security_group_get_by_name 2  0.029
 nova.db.api.instance_create1  0.011
 nova.db.api.quota_get_all_by_project   3  0.003
 nova.db.api.instance_data_get_for_project  1  0.003
 
 key  count  total
 nova.api.openstack.wsgi  1  0.657
 nova.db.api 10  0.388
 nova.image   1  0.179
 nova.rpc 1  0.059
 
 All times are in seconds. The nova.rpc time is probably high
 since this was the first call since server restart, so the
 connection handshake is probably included. This is also probably
 1.5 months stale.
 
 The conclusion I reached from this profiling is that we just plain
 overuse the db (and we might do the same in glance). For example,
 whenever we do updates, we actually re-retrieve the item from the
 database, update its dictionary, and save it. This is double the
 cost it needs to be. We also handle updates for data across tables
 inefficiently, where they could be handled in single database round
 trip.
 
 In particular, in the case of server listings, extensions are just
 rough on performance. Most extensions hit the database again
 at least once. This isn't really so bad, but it clearly is an area
 where we should improve, since these are the most frequent api
 queries.
 
 I just see a ton of specific performance problems that are easier
 to address one by one, rather than diving into a general (albeit
 obvious) solution such as caching.
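The per-key count/avg aggregation above is simple enough to sketch. The `record`/`profile` helpers below are illustrative stand-ins; the real numbers come from parsing the nova-api logs:

```python
from collections import defaultdict

# Illustrative stand-in for the per-call timing behind the profile above.
_timings = defaultdict(list)

def record(key, seconds):
    # One wall-time sample for a named call site.
    _timings[key].append(seconds)

def profile():
    """Return {key: (count, avg_seconds)}, like the tables above."""
    return dict((key, (len(vals), sum(vals) / len(vals)))
                for key, vals in _timings.items())

record('nova.db.api.instance_update', 0.191)
record('nova.db.api.quota_get_all_by_project', 0.003)
record('nova.db.api.quota_get_all_by_project', 0.003)
print(profile()['nova.db.api.quota_get_all_by_project'])  # (2, 0.003)
```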
 
 
 Sandy Walsh sandy.wa...@rackspace.com said:
 
 We're doing tests to find out where the bottlenecks are, caching is the
 most obvious solution, but there may be others. Tools like memcache do a
 really good job of sharing memory across servers so we don't have to
 reinvent the wheel or hit the db at all.

 In addition to looking into caching technologies/approaches we're gluing
 together some tools for finding those bottlenecks. Our first step will
 be finding them, then squashing them ... however.

 -S

 On 03/22/2012 06:25 PM, Mark Washenberger wrote:
 What problems are caching strategies supposed to solve?

 On the nova compute side, it seems like streamlining db access and
 api-view tables would solve any performance problems caching would
 address, while keeping the stale data management problem small.


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

Re: [Openstack] Caching strategies in Nova ...

2012-03-23 Thread Sandy Walsh
Was reading up some more on cache invalidation schemes last night. The
best practice approach seems to be using a sequence ID in the key. When
you want to invalidate a large set of keys, just bump the sequence id.

This could easily be handled with a notifier that listens to instance
state changes.

Thoughts?
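Roughly like this toy, dict-backed sketch (in practice the store would be memcache and the bump would be driven by the notification consumer):

```python
class SequenceCache(object):
    """Cache keyed by (namespace, seq, key): bumping a namespace's
    sequence id implicitly invalidates every key under it.

    A toy sketch of the scheme described above -- with memcache as the
    store, a notifier watching instance state changes would do the bump.
    """
    def __init__(self):
        self._store = {}
        self._seq = {}

    def _full_key(self, namespace, key):
        return (namespace, self._seq.get(namespace, 0), key)

    def set(self, namespace, key, value):
        self._store[self._full_key(namespace, key)] = value

    def get(self, namespace, key):
        return self._store.get(self._full_key(namespace, key))

    def invalidate(self, namespace):
        # Old entries become unreachable; memcache would evict them via LRU.
        self._seq[namespace] = self._seq.get(namespace, 0) + 1


cache = SequenceCache()
cache.set('instance-42', 'status', 'BUILD')
cache.invalidate('instance-42')  # e.g. on a compute.instance.update event
print(cache.get('instance-42', 'status'))  # None
```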


On 03/22/2012 09:28 PM, Joshua Harlow wrote:
 Just from experience.
 
 They do a great job. But the killer thing about caching is how u do the
 cache invalidation.
 
 Just caching stuff is easy-peasy, making sure it is invalidated on all
 servers in all conditions, not so easy...
 
 On 3/22/12 4:26 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 
 We're doing tests to find out where the bottlenecks are, caching is the
 most obvious solution, but there may be others. Tools like memcache do a
 really good job of sharing memory across servers so we don't have to
 reinvent the wheel or hit the db at all.
 
 In addition to looking into caching technologies/approaches we're gluing
 together some tools for finding those bottlenecks. Our first step will
 be finding them, then squashing them ... however.
 
 -S
 
 On 03/22/2012 06:25 PM, Mark Washenberger wrote:
  What problems are caching strategies supposed to solve?
 
  On the nova compute side, it seems like streamlining db access and
  api-view tables would solve any performance problems caching would
  address, while keeping the stale data management problem small.
 
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Caching strategies in Nova ...

2012-03-23 Thread Sandy Walsh


On 03/23/2012 09:44 AM, Gabe Westmaas wrote:
 I'd prefer to just set a different expectation for the user.  Rather than 
 worrying about state change and invalidation, lets just set the expectation 
 that the system as a whole is eventually consistent.  I would love to prevent 
 any cache busting strategies or expectations as well as anything that 
 requires something other than time based data refreshing.  We can all agree, 
 I hope, that there is some level of eventual consistency even without caching 
 in our current system.  The fact is that db updates are not instantaneous 
 with other changes in the system; see snapshotting, instance creation, etc.  

I think that's completely valid. The in-process caching schemes are
really just implementation techniques. The end-result (of view tables vs
key/value in-memory dicts vs whatever) is the same.


 What I'd like to see is additional fields included in the API response that
 indicate how old this particular piece of data is.  This way the consumer can decide
 if they need to be concerned about the fact that this state hasn't changed, 
 and it allows operators to tune their system to whatever their deployments 
 can handle.  If we are exploring caching, I think that gives us the advantage 
 of not a lot of extra code that worries about invalidation, allowing 
 deployers to not use caching at all if its unneeded, and paves the way for 
 view tables in large deployments which I think is important when we are 
 thinking about this on a large scale.

My fear is clients will simply start to poll the system until new data
magically appears. An alternative might be, rather than say how old the
data is, how long until the cache expires?
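Something like this, say (field names are purely illustrative, not a real API extension):

```python
import time

def with_freshness(payload, cached_at, ttl=60, now=None):
    """Annotate a cached API payload with both hints discussed here:
    how old the data is, and how long until the cache expires.

    Field names are illustrative only.
    """
    now = time.time() if now is None else now
    age = now - cached_at
    body = dict(payload)
    body['data_age_seconds'] = age
    body['cache_expires_in_seconds'] = max(0, ttl - age)
    return body

resp = with_freshness({'status': 'ACTIVE'}, cached_at=100.0, ttl=60, now=110.0)
print(resp['data_age_seconds'], resp['cache_expires_in_seconds'])  # 10.0 50.0
```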


 
 Gabe
 
 -Original Message-
 From: openstack-
 bounces+gabe.westmaas=rackspace@lists.launchpad.net
 [mailto:openstack-
 bounces+gabe.westmaas=rackspace@lists.launchpad.net] On Behalf Of
 Sandy Walsh
 Sent: Friday, March 23, 2012 7:58 AM
 To: Joshua Harlow
 Cc: openstack
 Subject: Re: [Openstack] Caching strategies in Nova ...

 Was reading up some more on cache invalidation schemes last night. The
 best practice approach seems to be using a sequence ID in the key. When
 you want to invalidate a large set of keys, just bump the sequence id.

 This could easily be handled with a notifier that listens to instance state
 changes.

 Thoughts?


 On 03/22/2012 09:28 PM, Joshua Harlow wrote:
 Just from experience.

 They do a great job. But the killer thing about caching is how u do
 the cache invalidation.

 Just caching stuff is easy-peasy, making sure it is invalidated on all
 servers in all conditions, not so easy...

 On 3/22/12 4:26 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:

 We're doing tests to find out where the bottlenecks are, caching is the
 most obvious solution, but there may be others. Tools like memcache do
 a
 really good job of sharing memory across servers so we don't have to
 reinvent the wheel or hit the db at all.

 In addition to looking into caching technologies/approaches we're gluing
 together some tools for finding those bottlenecks. Our first step will
 be finding them, then squashing them ... however.

 -S

 On 03/22/2012 06:25 PM, Mark Washenberger wrote:
  What problems are caching strategies supposed to solve?
 
  On the nova compute side, it seems like streamlining db access and
  api-view tables would solve any performance problems caching would
  address, while keeping the stale data management problem small.
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Caching strategies in Nova ...

2012-03-23 Thread Sandy Walsh
You can. The sanctioned approach is to use Yagi with a feed into
something like PubSubHubbub that lives on the public interweb.

It's just an optional component.

-S

On 03/23/2012 12:20 PM, Kevin L. Mitchell wrote:
 On Fri, 2012-03-23 at 13:43 +, Gabe Westmaas wrote:
 However, I kind of expect that many users
 will still poll even if they know they won't get new data until X
 time. 
 
 I wish there was some kind of way for us to issue push notifications to
 the client, i.e., have the client register some sort of callback and
 what piece of data / state change they're interested in, then nova would
 call that callback when the condition occurred.  It probably wouldn't
 stop polling, but we could ratchet down rate limits to encourage users
 to use the callback mechanism.
 
 Of course, then there's the problem of, what if the user is behind a
 firewall or some sort of NAT... :/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Caching strategies in Nova ...

2012-03-23 Thread Sandy Walsh
Ugh (reply vs reply-all again)

On 03/23/2012 02:58 PM, Joshua Harlow wrote:
 Right,
 
 Lets fix the problem, not add a patch that hides the problem.
 
 U can’t put lipstick on a pig, haha. Its still a pig...

When stuff is expensive to compute, caching is the only option (yes?).
Whether that lives in memcache, a db or in a dict. Tuning sql queries
will only get us so far. I think creating custom view tables is a
laborious and error-prone tack ... additionally you get developers that
start to depend on the view tables as gospel.

Or am I missing something here?

-S


 On 3/22/12 8:02 PM, Mark Washenberger
 mark.washenber...@rackspace.com wrote:
 
 This is precisely my concern.
 
 It must be brought up that with Rackspace Cloud Servers, nearly
  all client code routinely submits requests with a query parameter
  (cache-busting=some random string) just to get around problems with
 cache invalidation. And woe to the client that does not.
 
 I get the feeling that once trust like this is lost, a project has
 a hard time regaining it. I'm not saying that we can avoid
 inconsistency entirely. Rather, I believe we will have to embrace
 some eventual-consistency models to enable the performance and
 scale we will ultimately attain. But I just get the feeling that
 generic caches are really only appropriate for write-once or at
 least write-rarely data. So personally I would rule out external
 caches entirely and try to be very judicious in selecting internal
 caches as well.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Caching strategies in Nova ...

2012-03-23 Thread Sandy Walsh
Was the db on a separate server or loopback?

On 03/23/2012 05:26 PM, Mark Washenberger wrote:
 
 
 Johannes Erdfelt johan...@erdfelt.com said:
 

 MySQL isn't exactly slow and Nova doesn't have particularly large
 tables. It looks like the slowness is coming from the network and how
 many queries are being made.

 Avoiding joins would mean even more queries, which looks like it would
 slow it down even further.

 
 This is exactly what I saw in my profiling. More complex queries did
 still seem to take longer than less complex ones, but it was a second
 order effect compared to the overall volume of queries. 
 
 I'm not sure that network was the culprit though, since my ping
 roundtrip time was small relative to the wall time I measured for each
 nova.db.api call.
 
 



Re: [Openstack] Caching strategies in Nova ...

2012-03-23 Thread Sandy Walsh
 is an area
 where we should improve, since these are the most frequent api
 queries.
 
 I just see a ton of specific performance problems that are easier
 to address one by one, rather than diving into a general (albeit
 obvious) solution such as caching.
 
 
 Sandy Walsh sandy.wa...@rackspace.com
 mailto:sandy.wa...@rackspace.com said:
 
  We're doing tests to find out where the bottlenecks are,
 caching is the
  most obvious solution, but there may be others. Tools like
 memcache do a
  really good job of sharing memory across servers so we don't
 have to
  reinvent the wheel or hit the db at all.
 
  In addition to looking into caching technologies/approaches
 we're gluing
  together some tools for finding those bottlenecks. Our first
 step will
  be finding them, then squashing them ... however.
 
  -S
 
  On 03/22/2012 06:25 PM, Mark Washenberger wrote:
  What problems are caching strategies supposed to solve?
 
  On the nova compute side, it seems like streamlining db
 access and
  api-view tables would solve any performance problems caching
 would
  address, while keeping the stale data management problem small.
 
 
 
 
 
 
 
 
 
 
 
 
 



[Openstack] Caching strategies in Nova ...

2012-03-22 Thread Sandy Walsh
o/

Vek and myself are looking into caching strategies in and around Nova.

There are essentially two approaches: in-process and external (proxy).
The in-process schemes sit in with the python code while the external
ones basically proxy the HTTP requests.

There are some obvious pro's and con's to each approach. The external is
easier for operations to manage, but in-process allows us greater
control over the caching (for things like caching db calls and not just
HTTP calls). But in-process also means more code, more memory usage on
the servers, monolithic services, limited to python based solutions,
etc. In-process also gives us access to tools like Tach
https://github.com/ohthree/tach for profiling performance.

I see Jesse recently landed a branch that touches on the in-process
approach:
https://github.com/openstack/nova/commit/1bcf5f5431d3c9620596f5329d7654872235c7ee#nova/common/memorycache.py

I don't know if people think putting caching code inside nova is a good
or bad idea. If we do continue down this road, it would be nice to make
it a little more modular/plug-in-based (YAPI .. yet another plug-in).
Perhaps a hybrid solution is required?

We're looking at tools like memcache, beaker, varnish, etc.

Has anyone already started down this road? Any insights to
share? Opinions? (summit talk?)

What are Glance, Swift, Keystone (lite?) doing?

-S
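
For reference, the in-process approach can start as small as a dict with per-key expiry, swapped later for a shared backend behind the same get/set interface; a hedged sketch in the spirit of the memorycache module linked above (class and key names are illustrative, not the actual nova code):

```python
import time

class SimpleMemoryCache(object):
    """Minimal in-process cache with per-key expiry; a stand-in that a
    memcached-backed client could replace behind the same interface."""

    def __init__(self):
        self._store = {}  # key -> (expiry_timestamp, value)

    def set(self, key, value, ttl=60):
        self._store[key] = (time.time() + ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires, value = entry
        if time.time() > expires:     # lazily evict stale entries
            del self._store[key]
            return None
        return value

cache = SimpleMemoryCache()
cache.set("instance:42", {"state": "active"}, ttl=0.05)
print(cache.get("instance:42"))   # fresh: returns the cached dict
time.sleep(0.1)
print(cache.get("instance:42"))   # expired: returns None
```

The trade-off the email describes shows up directly: this lives in each process's memory, so every API node caches (and invalidates) independently unless a shared backend replaces the dict.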



Re: [Openstack] Caching strategies in Nova ...

2012-03-22 Thread Sandy Walsh
We're doing tests to find out where the bottlenecks are, caching is the
most obvious solution, but there may be others. Tools like memcache do a
really good job of sharing memory across servers so we don't have to
reinvent the wheel or hit the db at all.

In addition to looking into caching technologies/approaches we're gluing
together some tools for finding those bottlenecks. Our first step will
be finding them, then squashing them ... however.

-S
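
The "share memory across servers" usage above is typically the cache-aside pattern: check memcached first, fall back to the db on a miss, then populate the cache. A hedged sketch — a dict stands in for the memcached client so it runs anywhere, and the db function is invented for illustration:

```python
class FakeMemcacheClient(object):
    """Dict-backed stand-in exposing memcached-style get/set calls."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value, time=0):
        self._data[key] = value

DB_CALLS = []

def db_instance_get(instance_id):
    # stand-in for the real (slow) database query
    DB_CALLS.append(instance_id)
    return {"id": instance_id, "state": "active"}

def cached_instance_get(cache, instance_id):
    key = "instance:%s" % instance_id
    value = cache.get(key)
    if value is None:                    # miss: hit the db once...
        value = db_instance_get(instance_id)
        cache.set(key, value, time=60)   # ...and share the result
    return value

cache = FakeMemcacheClient()
cached_instance_get(cache, 42)   # miss -> one db query
cached_instance_get(cache, 42)   # hit  -> served from cache
print("db queries: %d" % len(DB_CALLS))  # prints: db queries: 1
```

The catch, as the thread notes, is invalidation: every write path must delete or refresh the key, or readers see stale data.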

On 03/22/2012 06:25 PM, Mark Washenberger wrote:
 What problems are caching strategies supposed to solve?
 
 On the nova compute side, it seems like streamlining db access and
 api-view tables would solve any performance problems caching would
 address, while keeping the stale data management problem small.
 



Re: [Openstack] nova zone and availability_zone

2012-03-20 Thread Sandy Walsh
Availability Zone is an EC2 concept. Zones were a sharding scheme for Nova. 
Zones are being renamed to Cells to avoid further confusion. Availability Zones 
will remain the same.

Hope it helps!
-S


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Nicolae Paladi [n.pal...@gmail.com]
Sent: Tuesday, March 20, 2012 6:55 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] nova zone and availability_zone

Hi all,

What is the difference between nova zone(s) and availability_zone?
In a new deployment, the *services* table in the nova db contains an 
availability_zone
column (which is 'nova' by default).

If that is not the same as nova zones  (which are logical deployments, as far 
as I understood), where is information
about zones stored?

The only documentation about zones in openstack that I could find is here:
http://nova.openstack.org/devref/zone.html


is there anything on availability zones?

Cheers,
/Nicolae.






Re: [Openstack] [OpenStack] using xenapi hypervisor

2012-03-20 Thread Sandy Walsh
http://wiki.openstack.org/XenServer/Development#Legacy_way_to_Prepare_XenServer



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Eduardo Nunes [eduardo.ke...@gmail.com]
Sent: Monday, March 19, 2012 3:19 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] [OpenStack] using xenapi hypervisor

I want to use xenapi as a hypervisor. I see there are many tutorials, but 
almost all of them use devstack, and I don't want to use devstack. Is 
there a tutorial about how to create a domU, what image I should use on the 
domU, and the configuration of Xen?


Re: [Openstack] Announcing StackTach ...

2012-02-21 Thread Sandy Walsh
Thanks ... I'll have a look!

-S


On 02/21/2012 04:57 PM, Cole wrote:
 very cool.  If there is any interest in extending the tool and making it
 pluggable to work with other wire protocols I'd think the openmama
 http://www.openmama.org/ project would be an interesting possibility.
 
 nice work!
 
 
 
 
 
 



Re: [Openstack] Announcing StackTach ...

2012-02-21 Thread Sandy Walsh
Thanks y'all ... I'll chat with The Powers to see where that fits. But
I agree it's a good idea.

One nice thing about StackTach though, is that it doesn't have to run
inside the firewall or have direct db access. The server is multi-tenant
and can host multiple openstack deployments. Would this ability be lost
with Horizon integration?

-S


On 02/20/2012 07:40 PM, Devin Carlen wrote:
 Sandy, this is great work!  I think it would be worth integrating this
 into a view in Horizon for Folsom timeframe.
 
 
 Devin
 
 On Monday, February 20, 2012 at 12:15 PM, Sandy Walsh wrote:
 
 Hey!

 Last week I started on a little debugging tool for OpenStack based on
 AMQP events that I've been calling StackTach. It's really handy for
 watching the flow of an operation through the various parts of OpenStack.

 It consists of two parts:

 1. The Worker.

 Sits somewhere on your OpenStack network. It listens to AMQP monitor.*
 notifications and sends them to the StackTach server.

 (I need this branch to land for it to work ... hint hint)
 https://review.openstack.org/#change,4194

 2. The Web Interface

 Collects events via REST calls (poor man's multi-tenant) and presents these
 events in a funky little web interface.

 You can play around with the UI here:
 http://darksecretsoftware.com/stacktach/1/
 (this is with data coming from my personal OpenStack Dev env)

 What do I do?

 Click on anything and you'll see the particulars in the Details window.
 Click on [+] to see the JSON for the event.
 Hosts shows the last 20 events that have a Host defined.
 Instances shows the last 20 events that have the Instance field
 populated.
 Hosts and Instances windows are resize-able.
 You may see duplication between both windows.
 Click on Time to see any events around that time (+/- 1 minute I think)

 Where is the code?

 The code is hosted below. There's LOTS of work to do to make it ready
 for prime-time ... but please, contribute.

 https://github.com/rackspace/stacktach

 How do I install it?

 I need to make this process cleaner. Right now you need to know how to
 create a Django Project and stick StackTach in there.

 Look forward to the feedback.

 Cheers,
 Sandy


 



[Openstack] Announcing StackTach ...

2012-02-20 Thread Sandy Walsh
Hey!

Last week I started on a little debugging tool for OpenStack based on
AMQP events that I've been calling StackTach. It's really handy for
watching the flow of an operation through the various parts of OpenStack.

It consists of two parts:

1. The Worker.

Sits somewhere on your OpenStack network. It listens to AMQP monitor.*
notifications and sends them to the StackTach server.

(I need this branch to land for it to work ... hint hint)
https://review.openstack.org/#change,4194

2. The Web Interface

Collects events via REST calls (poor man's multi-tenant) and presents these
events in a funky little web interface.

You can play around with the UI here:
http://darksecretsoftware.com/stacktach/1/
(this is with data coming from my personal OpenStack Dev env)

What do I do?

Click on anything and you'll see the particulars in the Details window.
Click on [+] to see the JSON for the event.
Hosts shows the last 20 events that have a Host defined.
Instances shows the last 20 events that have the Instance field
populated.
Hosts and Instances windows are resize-able.
You may see duplication between both windows.
Click on Time to see any events around that time (+/- 1 minute I think)

Where is the code?

The code is hosted below. There's LOTS of work to do to make it ready
for prime-time ... but please, contribute.

https://github.com/rackspace/stacktach

How do I install it?

I need to make this process cleaner. Right now you need to know how to
create a Django Project and stick StackTach in there.

Look forward to the feedback.

Cheers,
Sandy
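
The worker described above boils down to a topic subscription plus a relay; a hedged sketch with no live broker — fnmatch only loosely approximates AMQP's dot-separated topic wildcards, and relay_to_stacktach stands in for the REST POST to the StackTach server:

```python
import fnmatch
import json

RELAYED = []

def relay_to_stacktach(event):
    # stand-in for the HTTP POST to the StackTach web interface
    RELAYED.append(json.dumps(event))

def on_message(routing_key, event, pattern="monitor.*"):
    """Mimic an AMQP topic binding: only notifications whose routing
    key matches the monitor.* pattern are forwarded."""
    if fnmatch.fnmatch(routing_key, pattern):
        relay_to_stacktach(event)

# a notification on the monitored topic is relayed...
on_message("monitor.info", {"event_type": "compute.instance.create.start"})
# ...while ordinary RPC traffic on other keys is ignored
on_message("scheduler.run_instance", {"event_type": "ignored"})
print("relayed: %d" % len(RELAYED))  # prints: relayed: 1
```

A real worker would hold a broker connection (e.g. via carrot/kombu) and call on_message from its consume loop; the filtering and relay shape stay the same.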




Re: [Openstack] Remove Zones code - FFE

2012-02-12 Thread Sandy Walsh
+1 on shards / partitions / anything to avoid the naming confusion with 
availability zones.

(congrats on the branch! Looking forward to trying it!)

From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Chris Behrens [cbehr...@codestud.com]
Sent: Sunday, February 12, 2012 6:50 PM
To: Leandro Reox
Cc: openstack@lists.launchpad.net; Chris Behrens
Subject: Re: [Openstack] Remove Zones code - FFE

Sorry, I'm late.  Really getting down to the wire here. :)

I've thrown up a version here: https://review.openstack.org/#change,4062

I've not functionally tested it yet, but there's really good test coverage for 
the zones service itself.   I also have added a test_compute_zones which tests 
that all of the compute tests pass while using the new ComputeZonesAPI class.

There's a couple bugs I note in the review and then I think I'm missing pushing 
some instance updates to the top in libvirt code.  And missing an update for 
instance deletes in the compute manager.  Going to hit those up today and 
finish this off.

One other comment:  It's been suggested we not call this stuff 'Zones' anymore. 
 It gets confused with availability zones and so forth.  Since this is really a 
way to shard nova, it has been suggested to call this 'Shards'. :)   Not sure I 
dig that name completely, although it makes sense.  Thoughts?

- Chris


On Feb 9, 2012, at 10:29 AM, Leandro Reox wrote:

 Awesome Chris !!!

 Lean

 On Thu, Feb 9, 2012 at 3:26 PM, Alejandro Comisario 
 alejandro.comisa...@mercadolibre.com wrote:
 Niceee !!

 Alejandro.

 On 02/09/2012 02:02 PM, Chris Behrens wrote:
 I should be pushing something up by end of day...  Even if it's not granted 
 an FFE, I'll have a need to keep my branch updated and working, so I should 
 at least always have a branch pushed up to a github account somewhere until 
 F1 opens up.  So, I guess worst case... there'll be a branch somewhere for 
 you to play with. :)

 - Chris


 On Feb 8, 2012, at 3:21 PM, Tom Fifield wrote:


 Just raising another deployment waiting on this new Zone implementation - 
 we currently have 2000 cores sitting idle in another datacentre that we can 
 use better if this is done.

 How can we help? ;)

 Regards,

 Tom

 On 02/08/2012 07:30 PM, Ziad Sawalha wrote:

 We were working on providing the necessary functionality in Keystone but
 stopped when we heard of the alternative solution. We could resume the
 conversation about what is needed on the Keystone side and implement if
 needed.

 Z

 From: Sandy Walsh sandy.wa...@rackspace.com
 Date: Thu, 2 Feb 2012 01:49:58 +
 To: Joshua McKenty jos...@pistoncloud.com, Vishvananda Ishaya vishvana...@gmail.com
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Remove Zones code - FFE

 Understood, timing is everything. I'll let Chris talk about expected
 timing for the replacement. From a deployers side, nothing would really
 change, just some configuration options ... but a replacement should be
 available.

 I'm sure we could get it working pretty easily. The Keystone integration
 was the biggest pita.

 I can keep this branch fresh with trunk for when we're ready to pull the
 trigger.

 -S

 
 *From:* Joshua McKenty [jos...@pistoncloud.com]
 *Sent:* Wednesday, February 01, 2012 4:45 PM
 *To:* Vishvananda Ishaya
 *Cc:* Sandy Walsh; openstack@lists.launchpad.net

 *Subject:* Re: [Openstack] Remove Zones code - FFE

 +1 to Vish's points. I know there are some folks coming online in the
 Folsom timeline that can help out with the new stuff, but this feels a
 bit like going backwards.

 --
 Joshua McKenty, CEO
 Piston Cloud Computing, Inc.
 w: (650) 24-CLOUD
 m: (650) 283-6846

 http://www.pistoncloud.com


 Oh, Westley, we'll never survive!
 Nonsense. You're only saying that because no one ever has.

 On Wednesday, February 1, 2012 at 12:41 PM, Vishvananda Ishaya wrote:


 I am all for pulling this out, but I'm a bit concerned with the fact
 that we have nothing to replace it with. There are some groups still
 trying to use it. MercadoLibre is trying to use it for example. I know
 you guys are trying to replace this with something better, but it
 would be nice not to break people for 7+ months


 So I guess I have some questions:
 1.a) is the current implementation completely broken?

 1.b) if yes, is it fixable

 2) If we do remove this, what can we tell people that need something
 like zones between now and the Folsom release?

 Vish
 On Feb 1, 2012, at 12:16 PM, Sandy Walsh wrote:


 As part of the new

Re: [Openstack] Propose to make Monsyne Dragon a nova core developer

2012-02-07 Thread Sandy Walsh
Yessir! +1



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Matt Dietz [matt.di...@rackspace.com]
Sent: Monday, February 06, 2012 6:48 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] Propose to make Monsyne Dragon a nova core developer

Hey guys,

Dragon has really stepped up lately on reviewing patches into Nova, and has a 
ton of knowledge around Nova proper, so I propose he be added to Nova core. I 
think he'd be a great addition to the team.

Matt


[Openstack] Remove Zones code - FFE

2012-02-01 Thread Sandy Walsh
As part of the new (and optional) Zones code coming down the pipe, the first 
step is to remove the old Zones implementation. 

More info in the merge prop:
https://review.openstack.org/#change,3629

So, can I? can I? Huh? 


Re: [Openstack] Remove Zones code - FFE

2012-02-01 Thread Sandy Walsh
Understood, timing is everything. I'll let Chris talk about expected timing for 
the replacement.  From a deployers side, nothing would really change, just some 
configuration options ... but a replacement should be available.

I'm sure we could get it working pretty easily. The Keystone integration was 
the biggest pita.

I can keep this branch fresh with trunk for when we're ready to pull the 
trigger.

-S


From: Joshua McKenty [jos...@pistoncloud.com]
Sent: Wednesday, February 01, 2012 4:45 PM
To: Vishvananda Ishaya
Cc: Sandy Walsh; openstack@lists.launchpad.net
Subject: Re: [Openstack] Remove Zones code - FFE

+1 to Vish's points. I know there are some folks coming online in the Folsom 
timeline that can help out with the new stuff, but this feels a bit like going 
backwards.

--
Joshua McKenty, CEO
Piston Cloud Computing, Inc.
w: (650) 24-CLOUD
m: (650) 283-6846
http://www.pistoncloud.com

Oh, Westley, we'll never survive!
Nonsense. You're only saying that because no one ever has.


On Wednesday, February 1, 2012 at 12:41 PM, Vishvananda Ishaya wrote:

I am all for pulling this out, but I'm a bit concerned with the fact that we 
have nothing to replace it with. There are some groups still trying to use it. 
MercadoLibre is trying to use it for example. I know you guys are trying to 
replace this with something better, but it would be nice not to break people 
for 7+ months


So I guess I have some questions:
1.a) is the current implementation completely broken?

1.b) if yes, is it fixable

2) If we do remove this, what can we tell people that need something like zones 
between now and the Folsom release?

Vish
On Feb 1, 2012, at 12:16 PM, Sandy Walsh wrote:

As part of the new (and optional) Zones code coming down the pipe, the first 
step is to remove the old Zones implementation.

More info in the merge prop:
https://review.openstack.org/#change,3629

So, can I? can I? Huh?





Re: [Openstack] [Nova] Essex dead wood cutting

2012-01-27 Thread Sandy Walsh
I'll be taking the existing Zones code out of API and Distributed Scheduler. 
The new Zones infrastructure is an optional component.

-S

From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Thierry Carrez [thie...@openstack.org]
Sent: Friday, January 27, 2012 11:23 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] [Nova] Essex dead wood cutting

Just as Nova enters feature freeze, it sounds like a good moment to
consider removing deprecated, known-buggy-and-unmaintained or useless
feature code from the Essex tree.

Here are my suggestions for removal:

- Ajaxterm (unmaintained, security issues, replaced by VNC console)
- Hyper-V support (known broken and unmaintained)

I'm sure that everyone has suggestions on other dead wood that we should
cut now rather than ship in Essex... please comment.

--
Thierry Carrez (ttx)
Release Manager, OpenStack




[Openstack] [Scaling][Orchestration] Zone changes. WAS: [Question #185840]: Multi-Zone finally working on ESSEX but cant nova list (KeyError: 'uuid') + doubts

2012-01-26 Thread Sandy Walsh
Zones is going through some radical changes currently.

Specifically, we're planning to use direct Rabbit-to-Rabbit communication 
between trusted Zones to avoid the complication of changes to OS API, Keystone 
and novaclient.

To the user deploying Nova not much will change, there may be a new service to 
deploy (a Zones service), but that would be all. To a developer, the code in OS 
API will greatly simplify and the Distributed Scheduler will be able to focus 
on single zone scheduling (vs doing both zone and host scheduling as it does 
today). 

We'll have more details soon, but we aren't planning on introducing the new 
stuff until we have a working replacement in place. The default Essex Scheduler 
now will largely be the same and the filters/weight functions will still carry 
forward, so any investments there won't be lost. 

Stay tuned, we're hoping to get all this in a new blueprint soon.

Hope it helps,
Sandy


From: boun...@canonical.com [boun...@canonical.com] on behalf of Alejandro 
Comisario [question185...@answers.launchpad.net]
Sent: Thursday, January 26, 2012 8:50 AM
To: Sandy Walsh
Subject: Re: [Question #185840]: Multi-Zone finally working on ESSEX but cant   
nova list (KeyError: 'uuid') + doubts

Question #185840 on OpenStack Compute (nova) changed:
https://answers.launchpad.net/nova/+question/185840

Status: Answered => Open

Alejandro Comisario is still having a problem:
Sandy, Vish !

Thanks for the replies ! let me get to the relevant points.

#1 I totally agree with you guys; the policy for spawning instances
may be very specific to each company's strategy. But since you can pass
from Fill First to Spread First just by adding reverse=True on
nova.scheduler.least_cost.weighted_sum and
nova.scheduler.distributed_scheduler._schedule, maybe it's a harmless
addition to expose (since we are going to have a lot of zones across
datacenters, and many different departments are going to create many
instances to load-balance their applications, we really prefer
Spread First to ensure high availability of the pools)

#2 As we are going to test essex-3, I would like to know whether the
zones code from Chris Behrens is going to be added in final Essex /
milestone 4, so we can keep testing other features, or whether you prefer
us to file this as a bug to be fixed, since the code that broke may not
be going to have major changes.

Kindest regards !

--
You received this question notification because you are a member of Nova
Core, which is an answer contact for OpenStack Compute (nova).
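
The fill-first/spread-first flip described in #1 is just the sort direction over weighted host costs; a hedged sketch (the free-RAM cost function and host records are invented for illustration, not the actual least_cost code):

```python
def weigh_hosts(hosts, spread_first=False):
    """Order candidate hosts by cost. With fill-first (the default
    sort direction), the fullest viable host sorts to the front;
    reversing the comparison spreads new instances across the
    emptiest hosts instead."""
    # cost proxy: free RAM. Fill-first picks the host with the least
    # free RAM that still fits; spread-first picks the most free RAM.
    return sorted(hosts, key=lambda h: h["free_ram_mb"],
                  reverse=spread_first)

hosts = [{"name": "h1", "free_ram_mb": 512},
         {"name": "h2", "free_ram_mb": 8192},
         {"name": "h3", "free_ram_mb": 2048}]

print(weigh_hosts(hosts)[0]["name"])                     # fill-first: h1
print(weigh_hosts(hosts, spread_first=True)[0]["name"])  # spread-first: h2
```

This is why a single reverse flag suffices: the filtering stage is unchanged, only the ordering of the surviving hosts differs.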



Re: [Openstack] [Scaling][Orchestration] Zone changes. WAS: [Question #185840]: Multi-Zone finally working on ESSEX but cant nova list (KeyError: 'uuid') + doubts

2012-01-26 Thread Sandy Walsh
Thanks Blake ... all very valid points.

Based on our discussions yesterday (the ink is still wet on the whiteboard) 
we've been kicking around numbers in the following ranges:

500-1000 hosts per zone (zone = single nova deployment. 1 db, 1 rabbit)
25-100 instances per host (minimum flavor)
3s api response time fully loaded (over that would be considered a failure). 
'nova list' being the command that can bring down the house. But also 'nova 
boot' is another concern. We're always trying to get more async operations in 
there.

Hosts per zone is a tricky one because we run into so many issues around 
network architecture, so your mileage may vary. Network is the limiting factor 
in this regard.

All of our design decisions are being made with these metrics in mind.

That said, we'd love to get more feedback on realistic metric expectations to 
ensure we're in the right church.

Hope this is what you're looking for?

-S
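
Multiplying those ranges out gives the instance envelope a single zone's database and rabbit must absorb (illustrative arithmetic only):

```python
# Zone capacity envelope from the figures above:
# 500-1000 hosts per zone, 25-100 (minimum-flavor) instances per host.
hosts_per_zone = (500, 1000)
instances_per_host = (25, 100)

low = hosts_per_zone[0] * instances_per_host[0]
high = hosts_per_zone[1] * instances_per_host[1]
print("instances per zone: %d to %d" % (low, high))  # 12500 to 100000
```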



From: Blake Yeager [blake.yea...@gmail.com]
Sent: Thursday, January 26, 2012 12:13 PM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [Scaling][Orchestration] Zone changes. WAS: [Question 
#185840]: Multi-Zone finally working on ESSEX but cant nova list (KeyError: 
'uuid') + doubts

Sandy,

I am excited to hear about the work that is going on around communication 
between trusted zones and look forward to seeing what you have created.

In general, the scalability of Nova is an area where I think we need to put 
additional emphasis.  Rackspace has done a lot of work on zones, but they don't 
seem to be receiving a lot of support from the rest of the community.

The OpenStack mission statement indicates the mission of the project is: To 
produce the ubiquitous Open Source cloud computing platform that will meet the 
needs of public and private cloud providers regardless of size, by being simple 
to implement and massively scalable.

I would challenge the community to ensure that scale is being given the 
appropriate focus in upcoming releases, especially Nova.  Perhaps we need to 
start by setting very specific scale targets for a single Nova zone in terms of 
nodes, instances, volumes, etc.  I did a quick search of the wiki but I didn't 
find anything about scale targets.  Does anyone know if something exists and I 
am just missing it?  Obviously scale will depend a lot on your specific 
hardware and configuration but we could start by saying with this minimum 
hardware spec and this configuration we want to be able to hit this scale.  
Likewise it would be nice to publish some statistics about the scale that we 
believe a given release can operate at safely.  This would tie into some of the 
QA/Testing work that Jay & team are working on.

Does anyone have other thoughts about how we ensure we are all working toward 
building a massively scalable system?

-Blake

On Thu, Jan 26, 2012 at 9:20 AM, Sandy Walsh 
sandy.wa...@rackspace.com wrote:
Zones is going through some radical changes currently.

Specifically, we're planning to use direct Rabbit-to-Rabbit communication 
between trusted Zones to avoid the complication of changes to OS API, Keystone 
and novaclient.

To the user deploying Nova not much will change, there may be a new service to 
deploy (a Zones service), but that would be all. To a developer, the code in OS 
API will greatly simplify and the Distributed Scheduler will be able to focus 
on single zone scheduling (vs doing both zone and host scheduling as it does 
today).

We'll have more details soon, but we aren't planning on introducing the new 
stuff until we have a working replacement in place. The default Essex Scheduler 
now will largely be the same and the filters/weight functions will still carry 
forward, so any investments there won't be lost.

Stay tuned, we're hoping to get all this in a new blueprint soon.

Hope it helps,
Sandy


From: boun...@canonical.com [boun...@canonical.com] on behalf of Alejandro 
Comisario [question185...@answers.launchpad.net]
Sent: Thursday, January 26, 2012 8:50 AM
To: Sandy Walsh
Subject: Re: [Question #185840]: Multi-Zone finally working on ESSEX but cant   
nova list (KeyError: 'uuid') + doubts

Question #185840 on OpenStack Compute (nova) changed:
https://answers.launchpad.net/nova/+question/185840

   Status: Answered => Open

Alejandro Comisario is still having a problem:
Sandy, Vish !

Thanks for the replies ! let me get to the relevant points.

#1 I totally agree with you guys; the policy for spawning instances
may be very specific to each company's strategy. But since you can pass
from Fill First to Spread First just by adding reverse=True on
nova.scheduler.least_cost.weighted_sum and
nova.scheduler.distributed_scheduler._schedule, maybe it's a harmless
addition to expose (since we are going to have

Re: [Openstack] Proposal to limit decorator usage

2012-01-17 Thread Sandy Walsh
Seems kind of arbitrary doesn't it?

Perhaps something about not using decorators with arguments instead? (since 
they're the biggest wart on the ass of Python)

-S


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Lorin Hochstein [lo...@nimbisservices.com]
Sent: Tuesday, January 17, 2012 12:09 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] Proposal to limit decorator usage

While going through merge proposal, I ran across this one from Mark 
Washenberger about limiting the number of decorators to 2 in nova code: 
https://review.openstack.org/2966

There's some good discussion on this in the proposal comments, but I thought it 
should hit the mailing list as well, in case folks wanted to weigh in but 
hadn't seen the proposal (I don't have a strong opinion pro or con here).

Here's the proposed addition for HACKING:

Decorators
----------
A function or method should not have more than two decorators applied to it
where it is defined.

Decorators are a powerful feature of Python that can eliminate some repetitive
code. However, decorator usage can be more difficult to debug or maintain than
alternative approaches to reduce repetition. These difficulties multiply when
decorators are stacked on top of each other. To ensure judicious use, we
therefore limit decorator depth to no more than two.
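
As an illustrative sketch of what the proposed rule would and would not allow
(hypothetical decorator names, not actual nova code):

```python
import functools

def logged(func):
    """Print the wrapped function's name before calling it."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("calling %s" % func.__name__)
        return func(*args, **kwargs)
    return wrapper

def validated(func):
    """Reject calls with a negative instance id."""
    @functools.wraps(func)
    def wrapper(instance_id):
        if instance_id < 0:
            raise ValueError("bad instance id")
        return func(instance_id)
    return wrapper

# Allowed under the proposal: at most two decorators where the
# function is defined.
@logged
@validated
def reboot(instance_id):
    return "rebooted %d" % instance_id

# Stacking a third decorator here would exceed the proposed limit;
# the alternative would be to fold the extra behaviour into the
# function body or into one of the existing wrappers.
print(reboot(7))
```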



Take care,

Lorin
--
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com







Re: [Openstack] Using fake drivers for testing

2012-01-16 Thread Sandy Walsh
I used the fake virt driver when I was testing zones. Very handy.



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Brebner, Gavin [gavin.breb...@hp.com]
Sent: Monday, January 16, 2012 11:18 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] Using fake drivers for testing

I’m interested in running some “white-box” tests that check scalability and 
limits of parts of a Nova system.  I want to start from a
full working configuration however, as I get that from my dev team and it 
includes all sorts of settings that would be tedious
and error prone to reproduce in other ways. My thought was that good use could 
be made of some of the “fake” drivers
that are present – I’m hoping to be able to make some minimal code/config 
changes to swap the driver on my host node(s)
for a fake one, restart the services and run tests.

Any caveats on this that I should be aware of ? For that matter has anyone 
tried this kind of thing before ? I’d be interested in
hearing of your experiences if so.

Thanks,

Gavin




Re: [Openstack] Nova Subteam Changes

2011-12-08 Thread Sandy Walsh
+1


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Soren Hansen [so...@linux2go.dk]
Sent: Thursday, December 08, 2011 6:09 AM
To: Vishvananda Ishaya
Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
Subject: Re: [Openstack] Nova Subteam Changes

2011/12/7 Vishvananda Ishaya <vishvana...@gmail.com>:
 1) Weekly meeting for team leads. This is a time for us to discuss blueprint
 progress, multiple-team-related issues, etc. Going to shoot for Mondays at
 2100 for this one.  I really need the subteam leads to commit to making this
 meeting. We can discuss at the first meeting and decide if there is a better
 time for this to occur.

I have a conflict in that time slot every other week, and frankly, I'd
really like to avoid more meetings. I'd by far prefer keeping to
e-mail, and if we (over e-mail) find something that warrants more
real-time discussion, we can put it on the agenda for our regular
openstack team meeting on Tuesdays.

--
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



[Openstack] [Orchestration] Handling error events ... explicit vs. implicit

2011-12-07 Thread Sandy Walsh
For orchestration (and now the scheduler improvements) we need to know when an 
operation fails ... and specifically, which resource was involved. In the 
majority of the cases it's an instance_uuid we're looking for, but it could be 
a security group id or a reservation id.

With most of the compute.manager calls the resource id is the third parameter 
in the call (after self  context), but there are some oddities. And sometimes 
we need to know the additional parameters (like a migration id related to an 
instance uuid). So simply enforcing parameter orders may be insufficient and 
impossible to enforce programmatically.

A little background:

In nova, exceptions are generally handled in the RPC or middleware layers as a 
logged event and life goes on. In an attempt to tie this into the notification 
system, a while ago I added stuff to the wrap_exception decorator. I'm sure 
you've seen this nightmare scattered around the code:
@exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())

What started as a simple decorator now takes parameters and the code has become 
nasty. 

But it works ... no matter where the exception was generated, the notifier gets:
*   compute.host_id
*   method name
*   and whatever arguments the method takes.

So, we know what operation failed and the host it failed on, but someone needs 
to crack the argument nut to get the goodies. It's a fragile coupling from 
publisher to receiver.

One, less fragile, alternative is to put a try/except block inside every 
top-level nova.compute.manager method and send meaningful exceptions right from 
the source. More fidelity, but messier code. Although "explicit is better than
implicit" keeps ringing in my head.

Or, we make a general event parser that anyone can use ... but again, the link 
between the actual method and the parser is fragile. The developers have to 
remember to update both. 
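
A minimal sketch of the decorator approach described above (simplified; the
notifier class and event payload here are stand-ins, not nova's actual API):

```python
import functools

class FakeNotifier:
    """Stand-in for nova's notifier; just records published events."""
    def __init__(self):
        self.events = []

    def notify(self, publisher_id, event_type, payload):
        self.events.append((publisher_id, event_type, payload))

def wrap_exception(notifier, publisher_id):
    """On failure, publish the method name and its raw arguments,
    then re-raise -- roughly the behaviour described above."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                payload = {'method': func.__name__,
                           'args': args,
                           'kwargs': kwargs}
                notifier.notify(publisher_id, 'error', payload)
                raise
        return wrapper
    return decorator

notifier = FakeNotifier()

@wrap_exception(notifier, 'compute.host1')
def run_instance(context, instance_uuid):
    raise RuntimeError("boom")

try:
    run_instance('ctx', 'abc-123')
except RuntimeError:
    pass

# The receiver still has to dig the resource id out of the raw
# argument tuple -- the fragile publisher/receiver coupling.
print(notifier.events[0][2]['args'])
```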

Opinions?

-S










Re: [Openstack] [Orchestration] Handling error events ... explicit vs. implicit

2011-12-07 Thread Sandy Walsh
Sure, the problem I'm immediately facing is reclaiming resources from the 
Capacity table when something fails. (we claim them immediately in the 
scheduler when the host is selected to lessen the latency).

The other situation is Orchestration needs it for retries, rescheduling, 
rollbacks and cross-service timeouts.

I think it's needed core functionality. I like Fail-Fast for the same reasons, 
but it can get in the way.

-S


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Mark Washenberger [mark.washenber...@rackspace.com]
Sent: Wednesday, December 07, 2011 11:53 AM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] [Orchestration] Handling error events ... explicit 
vs. implicit

Can you talk a little more about how you want to apply this failure 
notification? That is, what is the case where you are going to use the 
information that an operation failed? In my head I have an idea of getting code 
simplicity dividends from an everything succeeds approach to some of our 
operations. But it might not really apply to the case you're working on.



Re: [Openstack] [Orchestration] Handling error events ... explicit vs. implicit

2011-12-07 Thread Sandy Walsh
Exactly! ... or it could be handled in the notifier itself.


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Mark Washenberger [mark.washenber...@rackspace.com]
Sent: Wednesday, December 07, 2011 12:36 PM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] [Orchestration] Handling error events ... explicit 
vs. implicit

Gotcha.

So the way this might work is, for example, when a run_instance fails on a 
compute node, it would publish a "run_instance for uuid=blah failed" event. 
There would be a subscriber associated with the scheduler listening for such 
events--when it receives one it would go check the capacity table and update it 
to reflect the failure. Does that sound about right?
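
Sketched minimally (invented names; a toy stand-in for whatever queue and
subscriber machinery would actually carry the events):

```python
class CapacityTable:
    """Toy stand-in for the scheduler's capacity table."""
    def __init__(self):
        self.claimed = {}

    def claim(self, host, instance_uuid):
        self.claimed[instance_uuid] = host

    def release(self, instance_uuid):
        self.claimed.pop(instance_uuid, None)

def on_event(table, event):
    """Subscriber: roll back the claim when a run_instance fails."""
    if event['event_type'] == 'run_instance.failed':
        table.release(event['instance_uuid'])

table = CapacityTable()
table.claim('host1', 'uuid-blah')   # claimed eagerly at schedule time

# The compute node publishes a failure event; the subscriber
# reclaims the capacity.
on_event(table, {'event_type': 'run_instance.failed',
                 'instance_uuid': 'uuid-blah'})
print(table.claimed)   # -> {}
```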




Re: [Openstack] [Orchestration] Handling error events ... explicit vs. implicit

2011-12-07 Thread Sandy Walsh
True ... this idea has come up before (and is still being kicked around). My 
biggest concern is what happens if that scheduler dies? We need a mechanism 
that can live outside of a single scheduler service. 

The more of these long-running processes we leave in a service the greater the 
impact when something fails. Shouldn't we let the queue provide the resiliency 
and not depend on the worker staying alive? Personally I'm not a fan of 
removing our synchronous nature.


From: Yun Mao [yun...@gmail.com]
Sent: Wednesday, December 07, 2011 1:03 PM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [Orchestration] Handling error events ... explicit vs. 
implicit

Hi Sandy,

I'm wondering if it is possible to change the scheduler's rpc cast to
rpc call. This way the exceptions should be magically propagated back
to the scheduler, right? Naturally the scheduler can find another node
to retry or decide to give up and report failure. If we need to
provision many instances, we can spawn a few green threads for that.

Yun



Re: [Openstack] [Orchestration] Handling error events ... explicit vs. implicit

2011-12-07 Thread Sandy Walsh
*removing our Asynchronous nature.

(heh, such a key point to typo on)





[Openstack] HPC with Openstack?

2011-12-02 Thread Sandy Walsh
I've recently had inquiries about High Performance Computing (HPC) on 
Openstack. As opposed to the Service Provider (SP) model, HPC is interested in 
fast provisioning, potentially short lifetime instances with precision metrics 
and scheduling. Real-time vs. Eventually.

Anyone planning on using Openstack in that way?

If so, I'll direct those inquires to this thread.

Thanks in advance,
Sandy



Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Sandy Walsh
Good point ... thanks for the clarification.

-S


From: Lorin Hochstein [lo...@isi.edu]
Sent: Friday, December 02, 2011 9:47 AM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] HPC with Openstack?

As a side note, HPC means very different things to different people. In the 
circles I move in, HPC is interested in running compute jobs that are 
CPU-intensive, require large amounts of memory, and need 
low-latency/high-bandwidth interconnects to allow the user to break up a 
tightly coupled compute job across multiple nodes. A particular compute job 
will run for hours to days, so fast provisioning isn't necessarily critical 
(the traditional HPC model is to have your job wait in a batch queue until the 
resources are available).

Lorin
--
Lorin Hochstein, Computer Scientist
USC Information Sciences Institute
703.812.3710
http://www.east.isi.edu/~lorin




On Dec 2, 2011, at 7:17 AM, Sandy Walsh wrote:

I've recently had inquiries about High Performance Computing (HPC) on 
Openstack. As opposed to the Service Provider (SP) model, HPC is interested in 
fast provisioning, potentially short lifetime instances with precision metrics 
and scheduling. Real-time vs. Eventually.

Anyone planning on using Openstack in that way?

If so, I'll direct those inquires to this thread.

Thanks in advance,
Sandy



Re: [Openstack] Proposal for Lorin Hochstein to join nova-core

2011-11-30 Thread Sandy Walsh
+1 ... good call!

From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Vishvananda Ishaya [vishvana...@gmail.com]
Sent: Tuesday, November 29, 2011 2:03 PM
To: openstack (openstack@lists.launchpad.net)
Subject: [Openstack] Proposal for Lorin Hochstein to join nova-core

Lorin has been a great contributor to Nova for a long time and has been 
participating heavily in reviews over the past couple of months.  I think he 
would be a great addition to nova-core.

Vish


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-23 Thread Sandy Walsh
Thanks Soren, I see what you're doing now and it makes perfect sense. It'll be 
a nice helper class.

My only snipe would be that mox is generic to any library and this fake only 
gives the benefit to db operations. We have to remember "It's a db operation, 
so I have to do this. It's another method call, so I need to do that..."

How much effort would it be to make it into a better/more generic mox library?

-S


From: Soren Hansen [so...@linux2go.dk]
Sent: Tuesday, November 22, 2011 7:38 PM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [nova-testing] Efforts for Essex

2011/11/22 Sandy Walsh <sandy.wa...@rackspace.com>:
 I suspect the problem is coming in with our definition of unit
 tests. I don't think a unit test should be calling out of the method
 being tested at all. So anything beyond stubbing out the methods
 within the method being tested seems like noise to me. What you're
 describing sounds more like integration tests.

If I'm testing a method that includes a call to the db api, the strategy
with which I choose to replace that call with a double does not change
whether the test is a unit test or not.

I'm simply replacing this:

def test_something(self):
self.mox.StubOutWithMock(db, 'instance_get')
db.instance_get(mox.IgnoreArg(), mox.IgnoreArg()
).AndReturn({'name': 'this or that',
 'instance_type_id': 42})

exercise_the_routine_that_will_eventually_do_an_instance_get()
verify_that_the_system_is_now_in_the_desired_state()

or this:

def test_something(self):
def fake_instance_get(context, instance_uuid):
return {'name': 'this or that',
'instance_type_id': 42}

self.stubs.Set(nova.db, 'instance_get_by_uuid', fake_instance_get)

exercise_the_routine_that_will_eventually_do_an_instance_get()
verify_that_the_system_is_now_in_the_desired_state()

with this:

def test_something(self):
ctxt = _get_context()
db.instance_create(ctxt, {'name': 'this or that',
  'instance_type_id': 42})

exercise_the_routine_that_will_eventually_do_an_instance_get()
verify_that_the_system_is_now_in_the_desired_state()

Not only is this -- to my eye -- much more readable, but because the
fake db driver has been proven (by the db test suite) to give responses
that are exactly like what the real db driver would return, we have
better confidence in the output of the test. E.g. if the real db driver
always sets a particular attribute to a particular default value, it's
remarkably easy to forget to follow suit in an ad-hoc mock, and it's
even easier to forget to update the countless ad-hoc mocks later on, if
such a new attribute is added. This may or may not affect the tested
code's behaviour, but if that was easy to see/predict, we wouldn't need
tests to begin with :)

Over the course of this thread, I've heard many people raise concerns
about whether we'd really be testing the fake or testing the thing that
depends on the fake. I just don't get that at all. Surely a fake DB
driver that is proven to be true to its real counterpart should make us
*more* sure that we're testing our code correctly than an ad-hoc mock
whose correctness is very difficult to verify?
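
The idea can be sketched with a toy in-memory double (this is an illustration
of the pattern, not nova's actual fake driver):

```python
import itertools

class FakeDB:
    """Toy in-memory double for the db API: writes through
    instance_create are visible to instance_get, with the same
    defaults a real driver would apply."""
    _ids = itertools.count(1)

    def __init__(self):
        self.instances = {}

    def instance_create(self, context, values):
        instance = {'id': next(self._ids),
                    'instance_type_id': 1}   # driver-applied default
        instance.update(values)
        self.instances[instance['id']] = instance
        return instance

    def instance_get(self, context, instance_id):
        return self.instances[instance_id]

db = FakeDB()
inst = db.instance_create(None, {'name': 'this or that'})

# Code under test calls instance_get and sees exactly what the fake
# driver stored -- including defaults -- with no per-test ad-hoc
# mock to keep in sync when new attributes are added.
print(db.instance_get(None, inst['id'])['name'])
```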

 I thought the motive of your thread was to create
 fast/small/readable/non-brittle/maintainable tests.

The motive was to gather testing related goals, action items, thoughts,
complaints, whatever. It just so happens that a lot of people (myself
included) think that speeding up the test suite and categorising tests
into true unit tests and everything else are important things to
look at.

 Integration tests, while important, make this goal difficult.

I agree. I'm very happy that there's a lot of people doing a lot of work
on the integration test suite so that I can focus more on unit tests. As
I think I've mentioned before, unit tests are really all we can expect
people to run.

 So, if we're both talking about real unit tests, I don't seen the
 benefit of the fake.

Please elaborate (with my above comments in mind).

 As for my example of 123 vs abc, that was a bad example. Let me
 rephrase ... in one test I may want to have an environment that has no
 pre-existing instances in the db. In another test I may want to have
 an environment with a hundred instances.

 I'd like to understand how configuring the fake for both of these
 scenarios will be any easier than just having a stub. It seems like an
 unnecessary abstraction.

First of all, the DB is blown away between each individual test, so we
don't have to worry about its initial state.

In the first scenario, I'd do nothing. I have a clean slate, so I'm good
to go. In the second scenario, I'd just do 100 calls to
db.instance_create.  With the mock approach, I'd write a custom
instance_get with a 100 if/elif clauses, returning whatever makes sense
for the given instance_id. Mind you, the objects that I return from

Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-23 Thread Sandy Walsh
:) yeah, you're completely misunderstanding me.

So, you've made a much better StubOutWithMock() and slightly better stubs.Set() 
by (essentially) ignoring the method parameter checks and just focusing on the 
return type. 

Using your example:

def test_something(self):
    def fake_instance_get(context, instance_uuid):
        return {'name': 'this or that',
                'instance_type_id': 42}

    self.stubs.Set(nova.db, 'instance_get_by_uuid', fake_instance_get)

    exercise_the_routine_that_will_eventually_do_an_instance_get()
    verify_that_the_system_is_now_in_the_desired_state()

Could your library be expanded to allow:

def test_something(self):
    self.sorens_mox.Set(nova.db, 'instance_get_by_uuid',
                        {'name': 'this or that', 'instance_type_id': 42})
    self.sorens_mox.Set(nova.my_module, 'get_list_of_things', range(10))

    exercise_the_routine_that_will_eventually_do_an_instance_get_and_get_list()
    verify_that_the_system_is_now_in_the_desired_state()

See what I mean?
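
The proposed helper could be roughed out like this (entirely hypothetical;
`sorens_mox` does not exist, this is just the canned-return-value idea):

```python
class CannedStubs:
    """Hypothetical helper: replace any attribute with a callable
    that ignores its arguments and returns a canned value."""
    def __init__(self):
        self._originals = []

    def Set(self, obj, name, return_value):
        self._originals.append((obj, name, getattr(obj, name)))
        setattr(obj, name, lambda *args, **kwargs: return_value)

    def UnsetAll(self):
        for obj, name, original in reversed(self._originals):
            setattr(obj, name, original)
        self._originals = []

# Usage against a toy module-like object (stand-in for nova.db):
class FakeModule:
    def instance_get_by_uuid(self, context, uuid):
        raise RuntimeError("real call -- should not happen in a unit test")

nova_db = FakeModule()
stubs = CannedStubs()
stubs.Set(nova_db, 'instance_get_by_uuid',
          {'name': 'this or that', 'instance_type_id': 42})

# The stubbed call now returns the canned value for any arguments.
print(nova_db.instance_get_by_uuid('ctx', 'abc'))
stubs.UnsetAll()
```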

Side note: 
I don't view tests that permit 
exercise_the_routine_that_will_eventually_do_an_instance_get()
calls to be unit tests ... they're integration tests and the source of all this 
headache in the first place.

A unit test should be
exercise_the_routine_that_will_directly_call_instance_get()

Hopefully we're saying the same thing on this last point?

-S

From: Soren Hansen [so...@linux2go.dk]

Am I completely misunderstanding you?



Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-23 Thread Sandy Walsh
I understand what you're proposing, but I'm backtracking a little. 
(my kingdom for you and a whiteboard in the same room :)

I think that you could have a hybrid of your
db.do_something(desired_return_value)
and 
self.stubs.Set(nova.db, 'instance_get_by_uuid', fake_instance_get)
(which I don't think is terrible other than requiring a nested method)

to produce: 
self.sorens_mox.Set(nova.db, 'instance_get_by_uuid',
                    {'name': 'this or that', 'instance_type_id': 42})

which would work with things other than just the db.



---

 So, you've made a much better StubOutWithMock() and slightly better 
 stubs.Set() by (essentially) ignoring the method parameter checks and just 
 focusing on the return type.

No, no. Read my e-mail again. I don't want to do it that way either. I
showed two examples of what I'd like to get rid of, followed by what I'd
like to do instead.



Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-23 Thread Sandy Walsh
haha ... worst email thread ever. 

I'll catch you on IRC ... we've diverged too far to make sense.

-S

From: Soren Hansen [so...@linux2go.dk]
Sent: Wednesday, November 23, 2011 6:30 PM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [nova-testing] Efforts for Essex

2011/11/23 Sandy Walsh <sandy.wa...@rackspace.com>:
 I understand what you're proposing, but I'm backtracking a little.
 (my kingdom for you and a whiteboard in the same room :)

Well, IRC would be a good start. :) I haven't seen you on IRC for days?

 I think that you could have a hybrid of your
 db.do_something(desired_return_value)

I may be reading too much into this, but this example suggests you're
not following me, to be honest.

db.instance_create is not a method I'm adding. It is an existing
method. It's the method you use to add an instance to the data store,
so it's not so much about passing it desired return values. It's
about adding an instance to the database in the exactly same fashion
as production could would have done it, thus allowing subsequent calls
to instance_get (or any of the ~30 other methods that return one or
more Instance objects) to return it appropriately. And by
appropriately, I mean: with all the same attributes as one of the real
db drivers would have returned.

 to produce:
 self.sorens_mox.Set(nova.db, 'instance_get_by_uuid',
                     {'name': 'this or that', 'instance_type_id': 42})

We have this functionality today. My example:

   self.stubs.Set(nova.db, 'instance_get_by_uuid', fake_instance_get)

was copied from one of our existing tests.

--
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Sandy Walsh
Excellent!

I wrote a few blog posts recently, mostly based on my experience with openstack 
automated tests:

http://www.sandywalsh.com/2011/06/effective-units-tests-and-integration.html
http://www.sandywalsh.com/2011/08/pain-of-unit-tests-and-dynamically.html

Would love to see some of those changes make it in.

-Sandy


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Soren Hansen [so...@linux2go.dk]
Sent: Monday, November 21, 2011 8:24 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] [nova-testing] Efforts for Essex

Hi, guys.

We're scattered across enough different timezones to make real-time
communication really awkward, so let's see if we can get by using e-mail
instead.

A good test suite will let you keep up the pace of development. It will
offer confidence that your change didn't break any expectations, and
will help you understand how things are supposed to work, etc. A bad
test suite, on the other hand, will actually do the opposite, so the
quality of the unit test suite is incredibly important to the overall
health of the project. Integration tests are important as well, but unit
tests are all we can expect people to run on a regular basis.

I'd like to start a bit of discussion around efforts around unit testing
for Essex. Think of it as brainstorming. Input can be anything from
small, actionable items, to broad ideas, to measurable goals, random
thoughts etc. Anything goes. At some point, we can distill this input to
a set of common themes, set goals, define action items, etc.

A few things from the back of my mind to get the discussion going:

= Speed up the test suite =

A slow test suite gets run much less frequently than a fast one.
Currently, when wrapped in eatmydata, a complete test run takes more
than 6 minutes.

Goal: We should get that down to less than one minute.

= Review of existing tests =
Our current tests have a lot of problems:

 * They overlap (multiple tests effectively testing the same code),
 * They're hard to understand. Not only is their intent not always
   clear, but it's often hard to tell how they're doing it.
 * They're slow.
 * They're interdependent. The failure of one test often cascades and
   makes others fail, too.
 * They're riddled with duplicated code.

I think it would be great if we could come up with some guidelines for
good tests and then go through the existing tests and highlight where
they violate these guidelines.
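To make those guidelines concrete, here's a minimal sketch of an independent, intention-revealing unit test. The `FlavorStore` class and test names are hypothetical stand-ins, not Nova code:

```python
import unittest

class FlavorStore:
    """Toy stand-in for the production code under test (hypothetical)."""
    def __init__(self):
        self._flavors = {}

    def add(self, name, ram_mb):
        self._flavors[name] = ram_mb

    def get(self, name):
        return self._flavors[name]

class TestFlavorStore(unittest.TestCase):
    def setUp(self):
        # Fresh fixture per test: no shared state, so one failing
        # test can't cascade into others.
        self.store = FlavorStore()

    def test_add_then_get_returns_ram(self):
        # The name states the intent; the body tests exactly one thing.
        self.store.add("m1.tiny", 512)
        self.assertEqual(self.store.get("m1.tiny"), 512)
```

Tests like this run in microseconds, read top to bottom without jumping to other files, and fail independently of each other.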

= Test coverage =
We should increase test coverage.

Adding tests to legacy code is hard. Generally, it's much easier to
write tests for new production code as you go along. The two primary
reasons are:

 * If you're writing tests and production code at the same time (or
   perhaps even writing the tests first), the code will almost
   automatically be designed to be easily tested.
 * You (hopefully) know how the code is supposed to work. This is not
   always obvious for existing code (often written by someone else).

Therefore, the most approachable strategy for increasing test coverage
is simply to ensure that any new code added is accompanied by tests, but
of course new tests for currently untested code are fantastically
welcome.

--
Soren Hansen| http://linux2go.dk/
Ubuntu Developer| http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Sandy Walsh
I'm not a big fan of faking a database, not only for the reasons outlined 
already, but because it makes the tests harder to understand.

I much prefer to mock the db call on a per-unit-test basis so you can see 
everything you need in a single file. Yes, this could mean some duplication 
across test suites. But that is better than changes to the fake busting some 
other test that has different assumptions. 

Are we testing the code or are we testing the fake?

(real) Unit tests are our documentation and having to jump around to find out 
how the plumbing works doesn't make for good documentation.
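The per-test-stub style might look like the following sketch. It uses the stdlib's `unittest.mock` rather than Nova's `stubs.Set`, and the `db` module and `describe_instance` function are hypothetical, but the point stands: everything the test depends on is visible in one place:

```python
import unittest
from unittest import mock

class db:
    """Hypothetical data-access layer; in Nova this would be nova.db."""
    @staticmethod
    def instance_get(instance_id):
        raise RuntimeError("real DB should never be hit from a unit test")

def describe_instance(instance_id):
    # Code under test: one call out to the db layer.
    inst = db.instance_get(instance_id)
    return "%s (%s)" % (inst["name"], inst["state"])

class TestDescribeInstance(unittest.TestCase):
    def test_describe_formats_name_and_state(self):
        # The stub lives in the test itself, so every assumption is
        # visible in this file -- no hunting through a shared fake.
        fake_row = {"name": "vm-1", "state": "running"}
        with mock.patch.object(db, "instance_get", return_value=fake_row):
            self.assertEqual(describe_instance(42), "vm-1 (running)")
```

Each test declares exactly the data it expects, so one test's assumptions can never break another's.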

-S

From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Soren Hansen [so...@linux2go.dk]
Sent: Tuesday, November 22, 2011 3:09 PM
To: Jay Pipes
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [nova-testing] Efforts for Essex

Ok, this seems like a good time to repeat what I posted to nova-database
the other day.

tl;dr: I'm adding a fake DB driver as well as a DB test suite that we
can run against any of the backends to verify that they act the same.
This should address all the concerns I've heard so far.


Hi.

I just want to let you know that I'm working on a fake DB driver. The
two primary goals are to reduce the time it takes to run the test
suite (my results so far are very impressive) and simply to have
another, independent DB implementation. Once I'm done, I'll start
adding tests for it all, and finally, I'll take a stab at adding an
alternative, real DB backend.

In case you're wondering why I don't write the tests first, it's
simply because I don't know how all these things are supposed to work.
I hope to have a much better understanding of this once I've written
the fake DB driver, and then I'll add a generic test suite that should
be able to validate any DB backend.
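A backend-agnostic suite along those lines is often sketched as a mixin of behavioural tests that each driver's TestCase inherits. The `DictDriver` and its method names below are hypothetical stand-ins, not Nova's actual DB API:

```python
import unittest

class DictDriver:
    """Trivial in-memory backend standing in for a real driver (hypothetical)."""
    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def instance_create(self, values):
        row = dict(values, id=self._next_id)
        self._rows[self._next_id] = row
        self._next_id += 1
        return row

    def instance_get(self, instance_id):
        return self._rows[instance_id]

class DBDriverContract:
    """Backend-agnostic tests: mix this into one TestCase per driver."""

    def make_driver(self):
        raise NotImplementedError

    def test_create_assigns_an_id(self):
        row = self.make_driver().instance_create({"name": "vm-1"})
        self.assertIsNotNone(row["id"])

    def test_get_returns_what_create_stored(self):
        driver = self.make_driver()
        row = driver.instance_create({"name": "vm-1"})
        self.assertEqual(driver.instance_get(row["id"])["name"], "vm-1")

class TestDictDriver(DBDriverContract, unittest.TestCase):
    def make_driver(self):
        return DictDriver()
```

Adding coverage for a real SQLAlchemy backend would then just be another three-line subclass, and every backend is held to the same behavioural contract.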



--
Soren Hansen| http://linux2go.dk/
Ubuntu Developer| http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Sandy Walsh
Yeah, email is making this tricky.

I suspect the problem is coming in with our definition of unit tests. I don't 
think a unit test should be calling out of the method being tested at all. So 
anything beyond stubbing out the methods within the method being tested seems 
like noise to me. What you're describing sounds more like integration tests. I 
thought the motive of your thread was to create 
fast/small/readable/non-brittle/maintainable tests. Integration tests, while 
important, make this goal difficult. So, if we're both talking about real unit 
tests, I don't see the benefit of the fake.

As for my example of 123 vs abc, that was a bad example. Let me rephrase ... 
in one test I may want to have an environment that has no pre-existing 
instances in the db. In another test I may want to have an environment with a 
hundred instances. 

I'd like to understand how configuring the fake for both of these scenarios 
will be any easier than just having a stub. It seems like an unnecessary 
abstraction.

-S




From: Soren Hansen [so...@linux2go.dk]
Sent: Tuesday, November 22, 2011 4:37 PM
To: Sandy Walsh
Cc: Jay Pipes; openstack@lists.launchpad.net
Subject: Re: [Openstack] [nova-testing] Efforts for Essex

2011/11/22 Sandy Walsh sandy.wa...@rackspace.com:
 I suppose there is a matter of preference here. I prefer to look in the 
 setup() and teardown() methods of my test suite to find out how everything 
 hangs together. Otherwise I have to check nova.TestCase when things break. 
 The closer my test can stay to my expectations from unittest.TestCase the 
 happier I am.

Sorry, I don't follow. The unit tests would use the fake db driver by
default. No per-test-specific setup necessary. Creating the instance
in the fake DB would happen explicitly in the individual tests (by way
of either calling db.instance_create directly, or by way of some
utility function).

 I can't comment on your fake db implementation, but my fear is this scenario:

 Test1 assumes db.create_foo() will return 123 and Test2 assumes it will 
 return abc. How do they both comfortably co-exist? And whatever the 
 mechanism, why is it better than just stubs.Set(db.create_foo, 
 _my_create_foo)?

I'm confused. That's *exactly* what I want to avoid. By everything
sharing the same fake db driver, you can never have one mock that
returns one style of response, and another mock that returns another
style of response.

 It's local and it makes sense in the context of that file.

But it has to make sense globally. If something you're testing only
ever sees an Instance object with a couple of hardcoded attributes
on it, because that's what its mock gives it, you'll never know if
it'll fail if it gets a more complete, real Instance object.

--
Soren Hansen| http://linux2go.dk/
Ubuntu Developer| http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] python-novaclient moved into gerrit

2011-11-16 Thread Sandy Walsh
Thanks to James and the rest of the CI team, python-novaclient has now moved 
from GitHub to Gerrit.

Once we've migrated the old bugs over we'll nuke that repo.

You know the drill ... see ya there.

-S

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OSAPI and Zones

2011-11-14 Thread Sandy Walsh
Jorge is correct. The zones stuff was added before the API was finalized and 
before the extensions mechanism was in place. We simply haven't taken the time 
to convert it yet.

-S

From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Anne Gentle [a...@openstack.org]
Sent: Monday, November 14, 2011 12:25 PM
To: Doude
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] OSAPI and Zones

Hi Édouard -

I believe zones are documented in the Developer docs in 
http://nova.openstack.org/devref/zone.html. The howto scenarios are 
documented using the nova python client only currently (probably due to the 
refactoring Jorge mentions).

When you run into these gaps, please do log a bug against either nova or 
openstack-manuals (http://bugs.launchpad.net/openstack-manuals). The nova-docs 
team is small and working through a backlog of such items, and bugs definitely 
help with prioritization and tracking.

Thanks,

Anne


On Mon, Nov 14, 2011 at 9:49 AM, Doude doudou...@gmail.com wrote:
Hi all,

I'm trying to understand the multi-zone architecture of OpenStack.
I saw zone commands (list, show, select ...) have been added to the
OSAPI v1.1 (not as an extension but as a core component of the API)
but I cannot find any documentations in the OSAPI book:
http://docs.openstack.org/trunk/openstack-compute/developer/openstack-compute-api-1.1/content/

Where I can find this documentation ? In OpenStack wiki ? Where I can
open a bug about this lack of documentation ?

Regards,
Édouard.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Four compute-node, everytime the 1st and 2nd are choosen

2011-11-10 Thread Sandy Walsh
Are you using Diablo or Trunk?

If you're using trunk, the default scheduler is MultiScheduler, which uses the 
Chance scheduler. I think Diablo uses Chance by default?

--scheduler_driver

Unless you've explicitly selected the LeastCostScheduler (which only exists in 
Diablo now) I wouldn't worry about those settings.

Did you explicitly define a scheduler to use?


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Jorge Luiz Correa [corre...@gmail.com]
Sent: Thursday, November 10, 2011 6:27 AM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Four compute-node, everytime the 1st and 2nd are 
choosen

Is there a flag in nova.conf that permits us to configure that? In the 
documentation we can see that there are some algorithms used by the scheduler, 
but I don't know how to choose the one that best fits our requirements.

Thanks!
:)

On Wed, Nov 9, 2011 at 1:36 PM, Ed Leafe ed.le...@rackspace.com wrote:
On Nov 9, 2011, at 7:51 AM, Razique Mahroua wrote:

 I use the default scheduler, in fact, I've never tunned it really.
 The hypervisors all run KVM

   This is where the flag is defined in nova.scheduler.least_cost.py:

 FLAGS = flags.FLAGS
 flags.DEFINE_list('least_cost_functions',
                   ['nova.scheduler.least_cost.compute_fill_first_cost_fn'],
                   'Which cost functions the LeastCostScheduler should use.')

   Since the default weighting function is 'compute_fill_first_cost_fn', 
which, as its name suggests, chooses hosts so as to fill up one host as much as 
possible before selecting another, the pattern you're seeing is expected. If 
you change that flag to 'nova.scheduler.noop_cost_fn', you should see the hosts 
selected randomly. The idea is that you can create your own weighting functions 
that will select potential hosts in a way that best fits your needs.
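As a toy illustration of that idea: the host tuples and function names below are simplified stand-ins, not Nova's actual weighting signature, but they show how swapping the cost function changes which host wins:

```python
# Hypothetical hosts as (name, free RAM in MB); real cost functions
# receive much richer host state than this.
hosts = [("host1", 2048), ("host2", 8192), ("host3", 4096)]

def fill_first_cost_fn(host):
    # Less free RAM => lower cost, so one host fills up completely
    # before another is selected.
    _, free_ram = host
    return free_ram

def spread_cost_fn(host):
    # Negate the cost to prefer the emptiest host instead,
    # spreading instances across the pool.
    _, free_ram = host
    return -free_ram

def pick_host(hosts, cost_fn):
    # The scheduler picks the host with the lowest total cost.
    return min(hosts, key=cost_fn)[0]
```

With these hosts, `pick_host(hosts, fill_first_cost_fn)` keeps choosing the fullest host ("host1"), while `pick_host(hosts, spread_cost_fn)` sends the next instance to the emptiest one ("host2").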


-- Ed Leafe


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



--
- MSc. Correa, J.L.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Host Aggregates ...

2011-11-10 Thread Sandy Walsh
Ok, that helps ... now I see the abstraction you're going for (a new layer 
under availability zones).

Personally I prefer a tagging approach to a modeled hierarchy. It was something 
we debated at great length with Zones. In this case, the tag would be in the 
capabilities assigned to the host.

I think both availability zones (and host aggregates) should be modeled using 
tags/capabilities without having to explicitly model it as a tree or in the db 
... which is how I see this evolving. At the scheduler level we should be able 
to make decisions using simple tag collections.

WestCoast, HasGPU, GeneratorBackup, PriorityNetwork

Are we saying the same thing?

Are there use cases that this approach couldn't handle?
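The tag/capability matching described above can be sketched with plain set operations; the host names and tags here are hypothetical:

```python
# Hypothetical hosts and the capability tags assigned to each.
hosts = {
    "host-a": {"WestCoast", "HasGPU"},
    "host-b": {"EastCoast", "GeneratorBackup"},
    "host-c": {"WestCoast", "GeneratorBackup", "PriorityNetwork"},
}

def hosts_matching(hosts, required):
    # A host qualifies only if it carries every requested tag;
    # set inclusion does all the work -- no tree or db schema needed.
    return sorted(name for name, tags in hosts.items() if required <= tags)
```

So `hosts_matching(hosts, {"WestCoast"})` yields both west-coast hosts, and tightening the request to `{"WestCoast", "GeneratorBackup"}` narrows it to just the one host carrying both tags.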

-S


From: Armando Migliaccio [armando.migliac...@eu.citrix.com]
Sent: Thursday, November 10, 2011 8:50 AM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: RE: Host Aggregates ...

Hi Sandy,

Thanks for taking the time to read this.

My understanding is that a typical Nova deployment would span across multiple 
zones, that zones may have subzones, and that child zones will have a number of 
availability zones in them; please do correct me if I am wrong :)

That stated, it was assumed that an aggregate will be a grouping of servers 
within an availability zone (hence the introduction of the extra concept), and 
would be used to manage hypervisor pools when and if required. This introduces 
benefits like VM live migration, VM HA and zero-downtime host upgrades. The 
introduction of hypervisor pools is just the easy way to get these benefits in 
the short term.

Going back to your point, it is possible at the implementation level to model 
host aggregates as a single zone that uses capabilities (assuming it is okay 
to be unable to represent aggregates as children of availability zones). 
Nevertheless, I still see zones and aggregates as being different on the 
conceptual level.

What is your view if we went with the approach of implementing an aggregate as 
a special single-zone that uses capabilities? Would there be a risk of 
tangling the zone management API a bit?

Thanks for feedback!

Cheers,
Armando

 -Original Message-
 From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
 Sent: 09 November 2011 21:10
 To: Armando Migliaccio
 Cc: openstack@lists.launchpad.net
 Subject: Host Aggregates ...

 Hi Armando,

 I finally got around to reading
 https://blueprints.launchpad.net/nova/+spec/host-aggregates.

 Perhaps you could elaborate a little on how this differs from host
 capabilities (key-value pairs associated with a service) that the scheduler
 can use when making decisions?

 The distributed scheduler doesn't need zones to operate, but will use them if
 available. Would host-aggregates simply be a single-zone that uses
 capabilities?

 Cheers,
 Sandy

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Proposal to add Johannes Erdfelt to nova-core

2011-11-09 Thread Sandy Walsh
9 * 3 - 26



From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Brian Waldon [brian.wal...@rackspace.com]
Sent: Wednesday, November 09, 2011 10:02 AM
To: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
Subject: [Openstack] Proposal to add Johannes Erdfelt to nova-core

I'd like to nominate Johannes for nova-core, as he has definitely been doing a 
good number of reviews lately.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] Proposal to add Kevin Mitchell to nova-core

2011-11-09 Thread Sandy Walsh
3746 ^ 0

From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Brian Waldon [brian.wal...@rackspace.com]
Sent: Wednesday, November 09, 2011 9:59 AM
To: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
Subject: [Openstack] Proposal to add Kevin Mitchell to nova-core

Vek has absolutely stepped up and started doing quite a few reviews, so I'd like 
to nominate him to be added to nova-core.

Waldon


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



[Openstack] Host Aggregates ...

2011-11-09 Thread Sandy Walsh
Hi Armando,

I finally got around to reading 
https://blueprints.launchpad.net/nova/+spec/host-aggregates.

Perhaps you could elaborate a little on how this differs from host capabilities 
(key-value pairs associated with a service) that the scheduler can use when 
making decisions?

The distributed scheduler doesn't need zones to operate, but will use them if 
available. Would host-aggregates simply be a single-zone that uses capabilities?

Cheers,
Sandy

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Submitting code to Novaclient

2011-11-02 Thread Sandy Walsh
You're doing it right, we've all just been tied up with some production stuff.

I'll try and take some time this afternoon to clear the queue.

Sorry for the delay ... and thanks for the submissions!

-S


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Gaurav Gupta [gauravgu...@gmail.com]
Sent: Wednesday, November 02, 2011 12:39 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] Submitting code to Novaclient

What's the process for submitting code to the novaclient project? Is it a pull 
request on GitHub, or something like Gerrit? There are many pull requests for 
novaclient that have been waiting on GitHub for weeks.

If someone gets a chance, please review and merge the following:
https://github.com/rackspace/python-novaclient/pull/136
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Submitting code to Novaclient

2011-11-02 Thread Sandy Walsh
Since it was required for zone communications, the plan at the summit was to 
roll it into Nova under the ./contrib directory (and keep it lock-step with 
trunk). We could still push it to PyPi as needed.

Is this no longer the case?

-S


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Monty Taylor [mord...@inaugust.com]
Sent: Wednesday, November 02, 2011 3:09 PM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Submitting code to Novaclient

Speaking of ... it's probably about time to get this folded in to the
regular infrastructure, since it's pretty important to the project.
Shall we set up a time to move it to the openstack org on github and add
it to gerrit/jenkins?

On 11/02/2011 12:23 PM, Sandy Walsh wrote:
 You're doing it right, we've all just been tied up with some production
 stuff.

 I'll try and take some time this afternoon to clear the queue.

 Sorry for the delay ... and thanks for the submissions!

 -S

 
 *From:* openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
 [openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on
 behalf of Gaurav Gupta [gauravgu...@gmail.com]
 *Sent:* Wednesday, November 02, 2011 12:39 PM
 *To:* openstack@lists.launchpad.net
 *Subject:* [Openstack] Submitting code to Novaclient

 What's the process for submitting code to the novaclient project? Is it a pull
 request on GitHub, or something like Gerrit? There are many pull requests for
 novaclient that have been waiting on GitHub for weeks.

 If someone gets a chance, please review and merge the following:
 https://github.com/rackspace/python-novaclient/pull/136


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

