Hi, how are you? Me? Right as rain, thanks for asking.
Due to a recent reorg of the Rackspace GitHub repos, StackTach and Stacky
have moved under the rackerlabs organization.
The new coordinates are:
https://github.com/rackerlabs/stacktach
https://github.com/rackerlabs/stacky
Please update
working towards that. We just had some internal
tactical stuff we had to deal with. Getting everyone freed up now.
Cheers!
-S
Best,
-jay
On 03/25/2013 09:42 AM, Sandy Walsh wrote:
Hey!
We just added a feature to StackTach for generating 24-hr summary reports.
The output can be seen here:
https://gist.github.com/SandyWalsh/4946226
It will identify request failures, with timing information, by major
action (create, resize, rescue, delete, snapshot, etc.) and by image type.
From: Doug Hellmann [doug.hellm...@dreamhost.com]
Sent: Thursday, November 08, 2012 1:54 PM
To: Sandy Walsh
Cc: Eoghan Glynn; OpenStack Development Mailing List;
openstack@lists.launchpad.net
Subject: Re: [Openstack] [openstack-dev] [metering][ceilometer] Unified
Hey!
(sorry for the top-posting, crappy web client)
There is a periodic task already in the compute manager that can handle this:
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L3021
There seem to be some recent (to me) changes in the manager now wrt the
to :
openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp
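The periodic-task mechanism mentioned above can be sketched in miniature. This is not Nova's actual decorator (the real one supports intervals, ticks, and more); it only illustrates the tag-and-dispatch pattern, with all names invented for illustration:

```python
def periodic_task(func):
    """Tag a method as a periodic task (sketch of the pattern; the real
    decorator lives in nova.manager and is richer than this)."""
    func._periodic = True
    return func

class Manager:
    def __init__(self):
        self.reports = 0

    @periodic_task
    def _report_usage(self):
        # Stand-in for emitting a usage notification.
        self.reports += 1

    def run_periodic_tasks(self):
        """Invoke every method tagged with @periodic_task."""
        for name in dir(self):
            method = getattr(self, name)
            if callable(method) and getattr(method, "_periodic", False):
                method()

mgr = Manager()
mgr.run_periodic_tasks()
mgr.run_periodic_tasks()
```

In the real manager a timer loop calls `run_periodic_tasks` on a schedule; here we call it by hand.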
I think the notification system is what you really want.
http://wiki.openstack.org/SystemUsageData
Sandy Walsh made some
I wrote a couple of articles on this:
http://www.sandywalsh.com/2012/04/openstack-nova-internals-pt1-overview.html
http://www.sandywalsh.com/2012/09/openstack-nova-internals-pt2-services.html
and, to a lesser extent:
http://www.sandywalsh.com/2012/10/debugging-openstack-with-stacktach-and.html
Hey!
Here's a first pass at a proposal for unifying StackTach/Ceilometer and other
instrumentation/metering/monitoring efforts.
It's v1, so bend, spindle, mutilate as needed ... but send feedback!
http://wiki.openstack.org/UnifiedInstrumentationMetering
Thanks,
Sandy
with different sinks.
Hmm, really not a fan of the decorator approach. Makes the code really ugly.
They'd be everywhere.
Not sure if I can get over it. :D
-S
On Nov 1, 2012, at 1:17 PM, Sandy Walsh wrote:
Hey!
Here's a first pass at a proposal for unifying StackTach/Ceilometer and other
Hey!
As promised at the summit the latest changes to StackTach are up on github.
This is a major change from the original StackTach I introduced earlier this
year (and left to wither). Also, I'm including Stacky, which is a new command
line tool for StackTach.
Here's a video that explains
As with most services there are two queues, but the scheduler has one extra:
1. The general round-robin queue. Any worker of that class can process the
event. But, only one worker will handle the event.
2. The specific worker queue. Used when I want an event to go to a specific
worker, for
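The two-queue layout described above can be modeled without a broker. This toy dispatcher (names and API invented for illustration; a real deployment would use AMQP queues and routing keys) shows the shared round-robin class queue next to the per-worker queue:

```python
import itertools

class Dispatcher:
    """Toy model of the two-queue pattern: a general queue any worker of
    the class can service (exactly one does), plus a per-worker queue for
    events aimed at one specific worker."""

    def __init__(self, workers):
        self.workers = list(workers)            # e.g. ["sched-1", "sched-2"]
        self._rr = itertools.cycle(self.workers)
        self.inbox = {w: [] for w in self.workers}

    def cast(self, event):
        """General queue: exactly one worker (round-robin here) gets it."""
        self.inbox[next(self._rr)].append(event)

    def cast_to(self, worker, event):
        """Specific queue: the named worker gets the event."""
        self.inbox[worker].append(event)

d = Dispatcher(["sched-1", "sched-2"])
d.cast("run_instance")                 # any scheduler may take this
d.cast("run_instance")                 # next one round-robins over
d.cast_to("sched-1", "sync_state")     # targeted at one worker
```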
Ugh, I just realized I have a conflict. Can we push it an hour later? (sorry!)
-S
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Annie Cheng
np ... I realize it's tricky to change this stuff.
I'll see if I can jump on remote.
-S
From: Annie Cheng [ann...@yahoo-inc.com]
Sent: Monday, October 29, 2012 1:44 PM
To: 'doug.hellm...@dreamhost.com'; Sandy Walsh
Cc: 'openstack@lists.launchpad.net'
Subject: Re
Hey y'all,
Great to chat during the summit last week, but it's been a crazy few days of
catch-up since then.
The main takeaway for me was the urgent need to get some common libraries under
these efforts.
So, to that end ...
1. To those that asked, I'm going to get my slides / video
+1
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Joe Savak [joe.sa...@rackspace.com]
Sent: Tuesday, October 16, 2012 5:50 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] (no
... is Wednesday 11:50 in 'Maggie' room.
If you're interested in debugging tools and performance monitoring ... see you
there!
-Sandy
4:47 AM
To: Sandy Walsh
Subject: RE: Versioning for notification messages
Hi Sandy,
Are you making changes to StackTach to keep up with the notification system, or
changes to the notification system to make StackTach even better?
Cheers,
Phil
-----Original Message-----
From: Sandy Walsh
Hey Howard,
Queues are generally in memory, but you may turn on persistent (disk) queues in
your environment. So that's your limitation. Having rabbitmq on a different
server is a good idea.
Also, Queues are only used for control, not user data, so they shouldn't be
that big of a burden.
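Turning on persistent queues looks roughly like this, using a pika-style `queue_declare` call. The queue name and the stand-in channel are invented so the sketch runs without a live RabbitMQ:

```python
def declare_persistent_queue(channel, name="notifications.info"):
    """Declare a queue that survives broker restarts (durable=True) and
    return properties marking each message persistent (delivery_mode=2).
    Mirrors the pika API; the queue name is illustrative."""
    channel.queue_declare(queue=name, durable=True)
    return {"delivery_mode": 2}

class FakeChannel:
    """Stand-in for a pika channel so the example is self-contained."""
    def __init__(self):
        self.declared = {}

    def queue_declare(self, queue, durable=False):
        self.declared[queue] = durable

ch = FakeChannel()
props = declare_persistent_queue(ch)
```

With a real broker you would pass `props` as the message properties on publish; both halves (durable queue and persistent messages) are needed for events to survive a restart.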
Went a little crazy on this one ... please let me know about the many blatant
mistakes so I can edit quickly :)
http://www.sandywalsh.com/2012/09/openstack-nova-internals-pt2-services.html
Thanks!
-S
Not sure if an email like this gets sent automagically by LP anymore, so here
goes ...
I'd love to get some feedback on it.
Thanks!
-S
The Blueprint
https://blueprints.launchpad.net/nova/+spec/monitoring-service
The Spec
http://wiki.openstack.org/PerformanceMonitoring
The Branch:
Perhaps off topic, but ...
One of the things I've noticed at the last couple of summits is the number of
new attendees that could really use an OpenStack 101 session. Many of them are
on fact-finding missions and their understanding of the architecture is
10,000'+.
Usually when conf's get
Sorry George, couldn't resist. :)
haha ... oh no.
It's a joke. Long story.
Not going to happen :)
From: chaohua wang [chwang...@gmail.com]
Sent: Friday, August 10, 2012 5:31 PM
To: Sandy Walsh
Subject: Re: [Openstack] Removing support for KVM Hypervisor ...
Hi I posted my question to
openstack
to check the stats in HostState class as exists in memory?
cc'ing Sandy Walsh, who is vastly more familiar with the scheduler than
I am :) Sandy, see Heng's question above... seems like a great question
to me -- and also a possible mini-project for someone to work on that
would add a scheduler
Ab-so-lutely! +1
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Vishvananda Ishaya [vishvana...@gmail.com]
Sent: Wednesday, July 18, 2012 8:10 PM
To:
US Only ... booo!
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
John Purrier [j...@openstack.org]
Sent: Tuesday, July 17, 2012 11:34 AM
To:
Hi
That's been my focus for a while now. We're using Tach to instrument openstack,
quantum, glance and a bunch of other components.
https://github.com/ohthree/tach
Also, there's StackTach which will consume the notifications and give you a
real-time display of what's happening in the system
Hi!
You would want to report this information to the Scheduler (probably via the
db) so it can make more informed decisions. A new Weight Function in the
scheduler would be the place to add it specifically.
We currently track the number of VM I/O operations being performed on each
Compute
candidates.
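A minimal sketch of the weight-function idea: penalize hosts with many in-flight I/O operations. Host state is a plain dict here, and the field name is an assumption; Nova's real weighers operate on HostState objects and plug into the scheduler's cost pipeline:

```python
def io_ops_weight(host_state):
    """Hypothetical weight function: fewer in-flight I/O operations on a
    compute host means a lower (better) cost."""
    return float(host_state.get("num_io_ops", 0))

def best_host(hosts):
    """Pick the host with the lowest cost under this single weigher."""
    return min(hosts, key=io_ops_weight)

hosts = [
    {"host": "compute-1", "num_io_ops": 7},
    {"host": "compute-2", "num_io_ops": 2},
]
choice = best_host(hosts)
```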
Cheers,
Sandy
From: Diego Parrilla [diego.parrilla.santama...@gmail.com]
Sent: Thursday, May 24, 2012 2:01 PM
To: Sandy Walsh
Cc: sarath zacharia; Nagaraju Bingi; openstack@lists.launchpad.net
Subject: Re: [Openstack] Where to add the performance increasing method in
openstack
Hi Sandy
Hi Yun,
I like the direction you're going with this. Unifying these three enums would
be a great change. Honestly it's really just combining two enums (vm task) and
using power state as a tool for reconciliation (actual != reality).
Might I suggest using Graphviz instead of a spreadsheet? That
Agreed. That's largely the effort of the Orchestration group.
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Joshua Harlow [harlo...@yahoo-inc.com]
Sent:
PM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Compute State Machine diagram ... (orchestration?
docs?)
Hi Sandy:
On May 2, 2012, at 12:10 PM, Sandy Walsh wrote:
Here's a little diagram I did up this morning for the required vm_state /
task_state transitions for compute api operations.
http://dl.dropbox.com/u/166877/PowerStates.pdf
Might be useful to the orchestration effort (or debugging in general)
Cheers,
Sandy
. Simple and
Change will turn to filters/weights soon. Depends on your installation.
-Sandy
From: Joshua Harlow [harlo...@yahoo-inc.com]
Sent: Thursday, April 26, 2012 5:07 PM
To: Sandy Walsh; openstack
Subject: Re: [Openstack] Question on notifications
Thx.
With these messages, instead
You want these events:
scheduler.run_instance.start (generated when scheduling begins)
scheduler.run_instance.scheduled (when a host is selected. one per instance)
scheduler.run_instance.end (all instances placed)
The .scheduled event will have the target hostname in it in the
weighted_host key
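Given those three events, picking the chosen host out of the `.scheduled` payloads might look like this sketch. The exact payload shape here is an assumption based on the description above; only the event names and the `weighted_host` key come from the thread:

```python
def target_hosts(notifications):
    """Collect the host chosen for each instance from
    scheduler.run_instance.scheduled events (one per instance)."""
    hosts = {}
    for note in notifications:
        if note.get("event_type") != "scheduler.run_instance.scheduled":
            continue
        payload = note.get("payload", {})
        hosts[payload.get("instance_id")] = (
            payload.get("weighted_host", {}).get("host"))
    return hosts

# Illustrative event stream (payload fields are assumed, not documented).
events = [
    {"event_type": "scheduler.run_instance.start", "payload": {}},
    {"event_type": "scheduler.run_instance.scheduled",
     "payload": {"instance_id": "abc",
                 "weighted_host": {"host": "compute-3"}}},
    {"event_type": "scheduler.run_instance.end", "payload": {}},
]
```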
I think we have support for this currently in some fashion, Dragon?
-S
On 04/24/2012 12:55 AM, Loic Dachary wrote:
Metering needs to account for the volume of data sent to external network
destinations ( i.e. n4 in http://wiki.openstack.org/EfficientMetering ) or
the disk I/O etc. This
Due to the redirect nature of the auth system we may need JSONP support
for this to work.
are
mandatory.
-S
Nick
On Tue, Apr 24, 2012 at 8:57 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
Due to the redirect nature of the auth system we may need JSONP support
for this to work
Flavor information is copied to the Instance table on creation so the
Flavors can change and still be tracked in the Instance. It may just
need to be sent in the notification payload.
The current events in the system are documented here:
http://wiki.openstack.org/SystemUsageData
-Sandy
On
StackTach is a Django-based web interface for capturing, displaying and
navigating OpenStack notifications
https://github.com/rackspace/stacktach
-S
On 04/23/2012 04:26 PM, Luis Gervaso wrote:
Joshua,
I have performed a create instance operation and here is an example data
obtained from
Sounds awesome! Looking forward to Draft Changes, much needed.
-S
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
James E. Blair [cor...@inaugust.com]
/me wears black armband.
Would love to see DirectApi v2 be the de facto OS API implementation.
The spec could be the same, it's just a (better) implementation detail.
-S
On 04/09/2012 03:58 PM, Vishvananda Ishaya wrote:
+1 to removal. I just tested to see if it still works, and due to our
forward to seeing that larger project!
-S
From: Ziad Sawalha
Sent: Friday, April 06, 2012 4:53 PM
To: Sriram Subramanian; Dugger, Donald D; Sandy Walsh
Cc: nova-orchestrat...@lists.launchpad.net; openstack@lists.launchpad.net
Subject: Re: [Openstack] [Nova
From what I've seen Spliff doesn't specify ... the containing application has
to deal with persistence.
-S
From: Yun Mao [yun...@gmail.com]
Sent: Friday, April 06, 2012 5:38 PM
To: Ziad Sawalha
Cc: Sriram Subramanian; Dugger, Donald D; Sandy Walsh
Can't wait to hear about it Ziad!
Very cool!
-S
From: Ziad Sawalha
Sent: Tuesday, April 03, 2012 6:56 PM
To: Sriram Subramanian; Dugger, Donald D; Sandy Walsh
Cc: nova-orchestrat...@lists.launchpad.net; openstack@lists.launchpad.net
Subject: Re
named Carrot that uses amqplib. Would it be better to use it
instead?
On 27 March 2012 at 12:42, Sandy Walsh sandy.wa...@rackspace.com wrote:
I believe '.exists' is sent via a periodic update in
compute.manager
caching stuff is easy-peasy, making sure it is invalidated on all
servers in all conditions, not so easy...
into a general (albeit
obvious) solution such as caching.
From: openstack-bounces+gabe.westmaas=rackspace@lists.launchpad.net On Behalf Of Sandy Walsh
Sent: Friday, March 23, 2012 7:58 AM
To: Joshua Harlow
Cc: openstack
Subject: Re: [Openstack] Caching strategies in Nova ...
Was reading up some more on cache invalidation
You can. The sanctioned approach is to use Yagi with a feed into
something like PubSubHubBub that lives on the public interweeb.
It's just an optional component.
-S
On 03/23/2012 12:20 PM, Kevin L. Mitchell wrote:
On Fri, 2012-03-23 at 13:43 +, Gabe Westmaas wrote:
However, I kind of
Ugh (reply vs reply-all again)
On 03/23/2012 02:58 PM, Joshua Harlow wrote:
Right,
Lets fix the problem, not add a patch that hides the problem.
U can't put lipstick on a pig, haha. It's still a pig...
When stuff is expensive to compute, caching is the only option (yes?).
Whether that
Was the db on a separate server or loopback?
On 03/23/2012 05:26 PM, Mark Washenberger wrote:
Johannes Erdfelt johan...@erdfelt.com said:
MySQL isn't exactly slow and Nova doesn't have particularly large
tables. It looks like the slowness is coming from the network and how
many queries
I just see a ton of specific performance problems that are easier
to address one by one, rather than diving into a general (albeit
obvious) solution such as caching.
Sandy Walsh sandy.wa...@rackspace.com said:
o/
Vek and I are looking into caching strategies in and around Nova.
There are essentially two approaches: in-process and external (proxy).
The in-process schemes sit in with the python code while the external
ones basically proxy the HTTP requests.
There are some obvious pro's and
We're doing tests to find out where the bottlenecks are, caching is the
most obvious solution, but there may be others. Tools like memcache do a
really good job of sharing memory across servers so we don't have to
reinvent the wheel or hit the db at all.
In addition to looking into caching
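The memcache idea above can be illustrated with a tiny lookaside cache in front of an expensive db call. A dict stands in for memcache so the sketch is self-contained; python-memcached or similar would make the same pattern shared across API servers (and invalidation, as the thread notes, is the hard part):

```python
import time

class TtlCache:
    """Minimal lookaside cache with a time-to-live. A dict stands in for
    memcache here; the interface is invented for illustration."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}

    def get_or_compute(self, key, compute):
        """Return the cached value if fresh, else call compute() and cache it."""
        hit = self._store.get(key)
        now = time.time()
        if hit and now - hit[1] < self.ttl:
            return hit[0]
        value = compute()
        self._store[key] = (value, now)
        return value

calls = []
def expensive():
    """Stand-in for a slow db query; counts how often it really runs."""
    calls.append(1)
    return {"flavors": ["m1.tiny", "m1.small"]}

cache = TtlCache(ttl=60)
a = cache.get_or_compute("flavors", expensive)
b = cache.get_or_compute("flavors", expensive)   # served from cache
```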
Availability Zone is an EC2 concept. Zones were a sharding scheme for Nova.
Zones are being renamed to Cells to avoid further confusion. Availability Zones
will remain the same.
Hope it helps!
-S
From:
http://wiki.openstack.org/XenServer/Development#Legacy_way_to_Prepare_XenServer
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Eduardo Nunes
Thanks ... I'll have a look!
-S
On 02/21/2012 04:57 PM, Cole wrote:
very cool. If there is any interest in extending the tool and making it
pluggable to work with other wire protocols i'd think the openmama
http://www.openmama.org/ project would be an interesting possibility.
nice work!
this ability be lost
with Horizon integration?
-S
On 02/20/2012 07:40 PM, Devin Carlen wrote:
Sandy, this is great work! I think it would be worth integrating this
into a view in Horizon for Folsom timeframe.
Devin
On Monday, February 20, 2012 at 12:15 PM, Sandy Walsh wrote:
Hey!
Last
Hey!
Last week I started on a little debugging tool for OpenStack based on
AMQP events that I've been calling StackTach. It's really handy for
watching the flow of an operation through the various parts of OpenStack.
It consists of two parts:
1. The Worker.
Sits somewhere on your OpenStack
were working on providing the necessary functionality in Keystone but
stopped when we heard of the alternative solution. We could resume the
conversation about what is needed on the Keystone side and implement if
needed.
Z
From: Sandy Walsh sandy.wa...@rackspace.com
Yessir! +1
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Matt Dietz [matt.di...@rackspace.com]
Sent: Monday, February 06, 2012 6:48 PM
To:
As part of the new (and optional) Zones code coming down the pipe, part of this
is to remove the old Zones implementation.
More info in the merge prop:
https://review.openstack.org/#change,3629
So, can I? can I? Huh?
integration was
the biggest pita.
I can keep this branch fresh with trunk for when we're ready to pull the
trigger.
-S
From: Joshua McKenty [jos...@pistoncloud.com]
Sent: Wednesday, February 01, 2012 4:45 PM
To: Vishvananda Ishaya
Cc: Sandy Walsh; openstack
I'll be taking the existing Zones code out of API and Distributed Scheduler.
The new Zones infrastructure is an optional component.
-S
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
blueprint soon.
Hope it helps,
Sandy
From: boun...@canonical.com [boun...@canonical.com] on behalf of Alejandro
Comisario [question185...@answers.launchpad.net]
Sent: Thursday, January 26, 2012 8:50 AM
To: Sandy Walsh
Subject: Re: [Question #185840]: Multi
this is what you're looking for?
-S
From: Blake Yeager [blake.yea...@gmail.com]
Sent: Thursday, January 26, 2012 12:13 PM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [Scaling][Orchestration] Zone changes. WAS: [Question
#185840
Seems kind of arbitrary doesn't it?
Perhaps something about not using decorators with arguments instead? (since
they're the biggest wart on the ass of Python)
-S
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
I used the fake virt driver when I was testing zones. Very handy.
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Brebner, Gavin [gavin.breb...@hp.com]
Sent:
+1
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Soren Hansen [so...@linux2go.dk]
Sent: Thursday, December 08, 2011 6:09 AM
To: Vishvananda Ishaya
Cc:
For orchestration (and now the scheduler improvements) we need to know when an
operation fails ... and specifically, which resource was involved. In the
majority of the cases it's an instance_uuid we're looking for, but it could be
a security group id or a reservation id.
With most of the
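Pulling the affected resource out of an error payload could be sketched like this. The thread only says the resource is usually an instance_uuid but may be a security group or reservation id; the exact key names (beyond `instance_uuid`) are assumptions:

```python
def failed_resource(payload):
    """Return the (kind, value) of the resource an error notification is
    about, checking the most common identifier first. Key names other
    than instance_uuid are illustrative."""
    for key in ("instance_uuid", "security_group_id", "reservation_id"):
        if key in payload:
            return key, payload[key]
    return None, None

kind, value = failed_resource({"reservation_id": "r-abc123", "code": 500})
```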
Sure, the problem I'm immediately facing is reclaiming resources from the
Capacity table when something fails. (we claim them immediately in the
scheduler when the host is selected to lessen the latency).
The other situation is Orchestration needs it for retries, rescheduling,
rollbacks and
Exactly! ... or it could be handled in the notifier itself.
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Mark Washenberger
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [Orchestration] Handling error events ... explicit vs.
implicit
Hi Sandy,
I'm wondering if it is possible to change the scheduler's rpc cast to
rpc call. This way the exceptions should be magically propagated back
*removing our Asynchronous nature.
(heh, such a key point to typo on)
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Sandy Walsh [sandy.wa
I've recently had inquiries about High Performance Computing (HPC) on
Openstack. As opposed to the Service Provider (SP) model, HPC is interested in
fast provisioning, potentially short lifetime instances with precision metrics
and scheduling. Real-time vs. Eventually.
Anyone planning on using
Good point ... thanks for the clarification.
-S
From: Lorin Hochstein [lo...@isi.edu]
Sent: Friday, December 02, 2011 9:47 AM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] HPC with Openstack?
As a side note, HPC means very different
+1 ... good call!
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Vishvananda Ishaya [vishvana...@gmail.com]
Sent: Tuesday, November 29, 2011 2:03 PM
To:
method call so I need to do that
How much effort would it be to make it into a better/more generic mox library?
-S
From: Soren Hansen [so...@linux2go.dk]
Sent: Tuesday, November 22, 2011 7:38 PM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re
:) yeah, you're completely misunderstanding me.
So, you've made a much better StubOutWithMock() and slightly better stubs.Set()
by (essentially) ignoring the method parameter checks and just focusing on the
return type.
Using your example:
def test_something(self):
def
I understand what you're proposing, but I'm backtracking a little.
(my kingdom for you and a whiteboard in the same room :)
I think that you could have a hybrid of your
db.do_something(desired_return_value)
and
self.stubs.Set(nova.db, 'instance_get_by_uuid', fake_instance_get)
(which I don't
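The per-test stubbing style under discussion can be shown with stdlib tools: `unittest.mock` stands in for `stubs.Set()`, and a `SimpleNamespace` stands in for `nova.db`, so the whole example is self-contained and every fake sits beside the test that uses it:

```python
from types import SimpleNamespace
from unittest import mock

def _real_instance_get(context, uuid):
    # Stand-in for the real db call; would hit the database.
    raise RuntimeError("no database in this sketch")

# Tiny stand-in for the nova.db module.
db = SimpleNamespace(instance_get_by_uuid=_real_instance_get)

def get_vm_state(uuid):
    """Code under test: a thin wrapper over the db call."""
    return db.instance_get_by_uuid(None, uuid)["vm_state"]

def fake_instance_get(context, uuid):
    return {"uuid": uuid, "vm_state": "active"}

# Per-test stub in the stubs.Set() spirit: swap the db call for the fake,
# right next to the assertion, so one file shows everything the test needs.
with mock.patch.object(db, "instance_get_by_uuid", fake_instance_get):
    state = get_vm_state("abc")
```

After the `with` block the real function is restored automatically, which is the same cleanup guarantee `stubs` gives via tearDown.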
haha ... worst email thread ever.
I'll catch you on IRC ... we've diverged too far to make sense.
-S
From: Soren Hansen [so...@linux2go.dk]
Sent: Wednesday, November 23, 2011 6:30 PM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re
Excellent!
I wrote a few blog posts recently, mostly based on my experience with openstack
automated tests:
http://www.sandywalsh.com/2011/06/effective-units-tests-and-integration.html
http://www.sandywalsh.com/2011/08/pain-of-unit-tests-and-dynamically.html
Would love to see some of those
I'm not a big fan of faking a database, not only for the reasons outlined
already, but because it makes the tests harder to understand.
I much prefer to mock the db call on a per-unit-test basis so you can see
everything you need in a single file. Yes, this could mean some duplication
across
the fake for both of these scenarios
will be any easier than just having a stub. It seems like an unnecessary
abstraction.
-S
From: Soren Hansen [so...@linux2go.dk]
Sent: Tuesday, November 22, 2011 4:37 PM
To: Sandy Walsh
Cc: Jay Pipes; openstack
Thanks to James and the rest of the CI team, python-novaclient has now moved
from GitHub to Gerrit.
Once we've migrated the old bugs over we'll nuke that repo.
You know the drill ... see ya there.
-S
Jorge is correct. The zones stuff was added before the API was finalized and
before the extensions mechanism was in place. We simply haven't taken the time
to convert it yet.
-S
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
Are you using Diablo or Trunk?
If you're using trunk the default scheduler is MultiScheduler, which uses
Chance scheduler. I think Diablo uses Chance by default?
--scheduler_driver
Unless you've explicitly selected the LeastCostScheduler (which only exists in
Diablo now) I wouldn't worry
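For reference, driver selection in that era was a flag in the nova.conf flagfile. A sketch of the relevant lines (the class paths match the trunk layout described above, but verify against your release):

```ini
# nova.conf (flagfile era) -- illustrative, check paths for your release
--scheduler_driver=nova.scheduler.multi.MultiScheduler
# or to force simple random placement explicitly:
# --scheduler_driver=nova.scheduler.chance.ChanceScheduler
```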
, HasGPU, GeneratorBackup, PriorityNetwork
Are we saying the same thing?
Are there use cases that this approach couldn't handle?
-S
From: Armando Migliaccio [armando.migliac...@eu.citrix.com]
Sent: Thursday, November 10, 2011 8:50 AM
To: Sandy Walsh
Cc
9 * 3 - 26
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Brian Waldon [brian.wal...@rackspace.com]
Sent: Wednesday, November 09, 2011 10:02 AM
To:
3746 ^ 0
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Brian Waldon [brian.wal...@rackspace.com]
Sent: Wednesday, November 09, 2011 9:59 AM
To:
Hi Armando,
I finally got around to reading
https://blueprints.launchpad.net/nova/+spec/host-aggregates.
Perhaps you could elaborate a little on how this differs from host capabilities
(key-value pairs associated with a service) that the scheduler can use when
making decisions?
The
You're doing it right, we've all just been tied up with some production stuff.
I'll try and take some time this afternoon to clear the queue.
Sorry for the delay ... and thanks for the submissions!
-S
From:
Speaking of ... it's probably about time to get this folded in to the
regular infrastructure, since it's pretty important to the project.
Shall we set up a time to move it to the openstack org on github and add
it to gerrit/jenkins?
On 11/02/2011 12:23 PM, Sandy Walsh wrote:
You're doing
I'm hoping to land this branch asap.
https://review.openstack.org/#change,1192
It replaces all the kind of alike schedulers with a single
DistributedScheduler.
-S
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
, it is indeed possible to
generate the WADL by introspecting code. (with a few decorators/annotations
assisting)
This is what Sandy Walsh is suggesting, and I highly, highly recommend this
approach. Otherwise you have to either generate code from an external WADL,
which makes the code
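A toy version of the introspection approach: a decorator records each handler's verb, path, and docstring, and a generator emits a WADL-ish fragment from that metadata. All names here are illustrative, not any real framework's API:

```python
_routes = []

def api(verb, path):
    """Decorator that records enough metadata to generate an interface
    description from the code itself, as suggested above."""
    def wrap(func):
        _routes.append({"verb": verb, "path": path,
                        "doc": (func.__doc__ or "").strip()})
        return func
    return wrap

@api("GET", "/servers/{id}")
def show_server(req, id):
    """Return details for one server."""

def to_wadl(routes):
    """Emit a minimal WADL-style fragment from the recorded metadata."""
    lines = ["<application>"]
    for r in routes:
        lines.append('  <resource path="%s">' % r["path"])
        lines.append('    <method name="%s"><doc>%s</doc></method>'
                     % (r["verb"], r["doc"]))
        lines.append("  </resource>")
    lines.append("</application>")
    return "\n".join(lines)

wadl = to_wadl(_routes)
```

Because the description is derived from the handlers, it can never drift from the code the way a hand-maintained WADL can.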
+1 Dragon
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
Monsyne Dragon [mdra...@rackspace.com]
Sent: Thursday, October 27, 2011 4:14 PM
To: George
As discussed at the summit, I agree there should be some form of IDL (WADL
being the likely candidate for REST), I think manually crafting/maintaining a
WADL (or XML in general) is a fool's errand. This stuff is made for machine
consumption and should be machine generated. Whatever solution we