-Original Message-
From: Nikola Đipanov [mailto:ndipa...@redhat.com]
Sent: 19 August 2014 17:50
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Scheduler split wrt Extensible Resource
Tracking
On 08/19/2014 06:39 PM, Sylvain Bauza wrote:
On the other
:31AM -0400, Jay Pipes wrote:
On 08/20/2014 04:48 AM, Nikola Đipanov wrote:
On 08/20/2014 08:27 AM, Joe Gordon wrote:
On Aug 19, 2014 10:45 AM, Day, Phil philip@hp.com wrote:
-Original Message-
From: Nikola Đipanov [mailto:ndipa...@redhat.com
Adding more bureaucracy (specs) in such cases is not the best way to resolve
team throughput issues...
I’d argue that if fundamental design disagreements can be surfaced and debated
at the design stage rather than first emerging on patch set XXX of an
implementation, and be used to then
Needing 3 out of 19 instead of 3 out of 20 isn't an order of magnitude
according to my calculator. It's much closer/fairer than making it 2/19 vs
3/20.
If a change is borderline in that it can only get 2 other cores maybe it
doesn't have a strong enough case for an exception.
Phil
Sent
-Original Message-
From: Nikola Đipanov [mailto:ndipa...@redhat.com]
Sent: 03 September 2014 10:50
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Feature Freeze Exception process for
Juno
snip
I will follow up with a more detailed email about what I
One final note: the specs referenced above didn't get approved until
Spec Freeze, which seemed to leave me with less time to implement
things. In fact, it seemed that a lot of specs didn't get approved
until spec freeze. Perhaps if we had more staggered approval of
specs, we'd have
Hi Daniel,
Thanks for putting together such a thoughtful piece - I probably need to
re-read it a few times to take in everything you're saying, but a couple of
thoughts that did occur to me:
- I can see how this could help where a change is fully contained within a virt
driver, but I wonder
Hi,
I'd like to ask for a FFE for the 3 patchsets that implement quotas for server
groups.
Server groups (which landed in Icehouse) provides a really useful anti-affinity
filter for scheduling that a lot of customers would like to use, but without
some form of quota control to limit the
-Original Message-
From: Sean Dague [mailto:s...@dague.net]
Sent: 05 September 2014 11:49
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out
virt drivers
On 09/05/2014 03:02 AM, Sylvain Bauza wrote:
Ahem,
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] FFE server-group-quotas
On 09/05/2014 11:28 AM, Ken'ichi Ohmichi wrote:
2014-09-05 21:56 GMT+09:00 Day, Phil philip@hp.com:
Hi,
I'd like to ask for a FFE for the 3 patchsets that implement quotas for
server
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: 12 September 2014 19:37
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Expand resource name allowed
characters
Had to laugh about the PILE OF POO character :) Comments inline...
Can we
-Original Message-
From: Kenichi Oomichi [mailto:oomi...@mxs.nes.nec.co.jp]
Sent: 18 September 2014 02:44
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] are we going to remove the novaclient
v3 shell or what?
-Original
DevStack doesn't register a v2.1 endpoint with keystone now, but we can use
it by calling it directly.
It is true that it is difficult to use the v2.1 API now, and we can check
its behavior via the v3 API instead.
I posted a patch[1] for registering a v2.1 endpoint with keystone, and I confirmed
Hi Folks,
I'd like to get some opinions on the use of pairs of notification messages for
simple events. I get that for complex operations on an instance (create,
rebuild, etc) a start and end message are useful to help instrument progress
and how long the operations took. However we also
, Sep 22, 2014 at 11:03:02AM +, Day, Phil wrote:
Hi Folks,
I'd like to get some opinions on the use of pairs of notification
messages for simple events. I get that for complex operations on
an instance (create, rebuild, etc) a start and end message are useful
to help instrument
I think we should aim to /always/ have 3 notifications using a pattern
of:

    try:
        ...notify start...
        ...do the work...
        ...notify end...
    except:
        ...notify abort...

Precisely my viewpoint as well. Unless we standardize on the above, our
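As a concrete illustration, the three-notification pattern quoted above could be wrapped in a small helper like this (a minimal sketch; EVENTS and notify() are illustrative stand-ins, not Nova's actual notifier API):

```python
# Minimal sketch of the start/end/abort notification pattern quoted above.
# EVENTS and notify() are illustrative stand-ins for a real notifier.
EVENTS = []

def notify(event_type):
    EVENTS.append(event_type)

def notified_operation(name, work):
    """Emit <name>.start, run work, then <name>.end (or <name>.abort on failure)."""
    notify("%s.start" % name)
    try:
        result = work()
    except Exception:
        notify("%s.abort" % name)
        raise
    notify("%s.end" % name)
    return result
```

With this shape a consumer can always pair a start event with exactly one terminal event, which is what makes the three-message convention easy to instrument.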
Hi Cores,
The "Stop, Rescue, and Delete should give guest a chance to shutdown" change
https://review.openstack.org/#/c/35303/ was approved a couple of days ago, but
failed to merge because the RPC version had moved on. It's rebased and sitting
there with one +2 and a bunch of +1s - would be
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] All I want for Christmas is one more +2
...
On 12/12/2013 09:22 AM, Day, Phil wrote:
Hi Cores,
The Stop, Rescue, and Delete should give guest a chance to shutdown
change https://review.openstack.org/#/c/35303
+1, I would make the 14:00 meeting. I often have good intentions of making the
21:00 meeting, but it's tough to work in around family life.
Sent from Samsung Mobile
Original message
From: Joe Gordon joe.gord...@gmail.com
Date:
To: OpenStack Development Mailing List
Hi Folks,
I know it may seem odd to be arguing for slowing down a part of the review
process, but I'd like to float the idea that there should be a minimum review
period for patches that change existing functionality in a way that isn't
backwards compatible.
The specific change that got me
-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent: 29 December 2013 05:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] minimum review period for functional
changes that break backwards compatibility
On 29 December 2013 04:21, Day, Phil philip@hp.com wrote:
Hi Folks,
I know it may seem odd to be arguing for slowing down a part of the
review process, but I'd like to float the idea that there should be a
minimum review period for patches that change existing functionality
Hi Folks,
As highlighted in the thread "minimum review period for functional changes" I'd
like to propose that change https://review.openstack.org/#/c/63209/ is
reverted because:
- It causes inconsistent behaviour in the system, as any existing
default backing files will have ext3
that break backwards compatibility
On 12/29/2013 03:06 AM, Day, Phil wrote:
snip
Basically, I'm not sure what problem you're trying to solve - let's tease that
out, and then talk about how to solve it. "Backwards incompatible change
landed" might be the problem - but since every reviewer knew
Sent from Samsung Mobile
Original message
From: Pádraig Brady p...@draigbrady.com
Date:
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Cc: Day, Phil philip@hp.com
Subject: Re: [openstack-dev] [nova] - Revert change
files
though at the moment.
Phil
Sent from Samsung Mobile
Original message
From: Pádraig Brady p...@draigbrady.com
Date:
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,Day, Phil philip@hp.com
Subject: Re: [openstack-dev
)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] minimum review period for functional
changes that break backwards compatibility
On 29 December 2013 21:06, Day, Phil philip@hp.com wrote:
What is the minimum review period intended to accomplish? I mean:
everyone
Hi Sean, and Happy New Year :-)
-Original Message-
From: Sean Dague [mailto:s...@dague.net]
Sent: 30 December 2013 22:05
To: Day, Phil; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] minimum review period for functional
changes
Hi Thierry,
Thanks for a great summary.
I don't really share your view that there is an "us vs them" attitude emerging
between operators and developers (but as someone with a foot in both camps
maybe I'm just thinking that because otherwise I'd become even more bipolar
:-)
I would suggest
Would be nice in this specific example though if the actual upgrade impact was
explicitly called out in the commit message.
From the DocImpact it looks as if some Neutron config options are changing
names - in which case the impact would seem to be that running systems have
until the end of
-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent: 10 January 2014 08:54
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] where to expose network quota
On 8 January 2014 03:01, Christopher Yeoh
Hi Folks,
The original (and fairly simple) driver behind whole-host-allocation
(https://wiki.openstack.org/wiki/WholeHostAllocation) was to enable users to
get guaranteed isolation for their instances. This then grew somewhat along
the lines of: if they have in effect a dedicated host then
So, I actually don't think the two concepts (reservations and
isolated instances) are competing ideas. Isolated instances are
actually not reserved. They are simply instances that have a
condition placed on their assignment to a particular compute node
that the node must only be
Hi Phil and Jay,
Phil, maybe you remember I discussed with you the possibility of using
pclouds with Climate, but we finally ended up using Nova aggregates and a
dedicated filter.
That works pretty fine. We don't use instance_properties
but rather aggregate metadata but the idea
I think there is clear water between this and the existing aggregate based
isolation. I also think this is a different use case from reservations.
It's
*mostly* like a new scheduler hint, but because it has billing impacts I
think it
needs to be more than just that - for example the
-Original Message-
From: Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
Sent: 21 January 2014 14:21
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Next steps for Whole Host allocation / Pclouds
Exactly - that's why I wanted
Cool. I like this a good bit better as it avoids the reboot. Still, this is
a rather
large amount of data to copy around if I'm only changing a single file in
Nova.
I think in most cases transfer cost is worth it to know you're deploying what
you tested. Also it is pretty easy to
On 01/22/2014 12:17 PM, Dan Prince wrote:
I've been thinking a bit more about how TripleO updates are developing,
specifically with regards to compute nodes. What is commonly called the
"update story", I think.
As I understand it we expect people to actually have to reboot a compute
node in
Hi Sylvain,
The change only makes the user have to supply a network ID if there is more
than one private network available (and the issue there is that otherwise the
assignment order in the Guest is random, which normally leads to all sorts of
routing problems).
I'm running a standard
Hi Justin,
I can see the value of this, but I'm a bit wary of the metadata service
extending into a general API - for example I can see this extending into a
debate about what information needs to be made available about the instances
(would you always want all instances exposed, all details,
I agree it's oddly inconsistent (you'll get used to that over time ;-) - but to
me it feels more like the validation is missing on the attach than that the
create should allow two VIFs on the same network. Since these are both
virtualised (i.e. share the same bandwidth, don't provide any
in the
order that they want them to be attached to.
Am I still missing something ?
Cheers,
Phil
From: Sylvain Bauza [mailto:sylvain.ba...@bull.net]
Sent: 24 January 2014 14:02
To: OpenStack Development Mailing List (not for usage questions)
Cc: Day, Phil
Subject: Re: [openstack-dev] [Nova] Why Nova
    ,
    "security_groups": [
        {
            "name": "default"
        }
    ],
    "uuid": "2adcdda2-561b-494b-a8f6-378b07ac47a4"
},
... (the above is repeated for every instance) ...
]
On Fri, Jan 24, 2014 at 8:43 AM, Day, Phil
philip@hp.com wrote:
Hi Justin
Good points - thank you. For arbitrary operations, I agree that it would be
better to expose a token in the metadata service, rather than allowing the
metadata service to expose unbounded amounts of API functionality. We
should therefore also have a per-instance token in the metadata,
What worried me most, I think, is that if we make this part of the standard
metadata then everyone would get it, and that raises a couple of concerns:
- Users with lots of instances (say 1000's) but who weren't trying to run any
form of discovery would start getting a lot more metadata
-Original Message-
From: Justin Santa Barbara [mailto:jus...@fathomdb.com]
Sent: 28 January 2014 20:17
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances
through metadata service
Thanks
}
}
if False: # TODO: store ancestor ids
On Tue, Jan 28, 2014 at 4:38 AM, John Garbutt j...@johngarbutt.com
wrote:
On 27 January 2014 14:52, Justin Santa Barbara jus...@fathomdb.com
wrote:
Day, Phil wrote:
We already have a mechanism now where an instance can push
metadata
Hi,
There were a few related blueprints which were looking to add various
additional types of resource to the scheduler - all of which will now be
implemented on top of a new generic mechanism covered by:
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
-Original
Hi Folks,
I could do with some pointers on config value deprecation.
All of the examples in the code and documentation seem to deal with the case
of old_opt being replaced by new_opt but still returning the same value
Here, use deprecated_name and/or deprecated_opts in the option definition:
when deprecated options are present, the option corresponding to
the first element of deprecated_opts will be chosen.
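The lookup order being described - the new option name wins, otherwise the first deprecated alias found is used - can be sketched independently of oslo.config roughly as follows (resolve_option is an illustrative helper, not the library's API):

```python
# Illustrative sketch of the lookup order described above: the new option
# name wins; otherwise the first deprecated alias present in the config
# is used; otherwise the default.
def resolve_option(raw_config, name, deprecated_names, default=None):
    if name in raw_config:
        return raw_config[name]
    for old in deprecated_names:  # first match wins, per deprecated_opts
        if old in raw_config:
            return raw_config[old]
    return default
```

In oslo.config itself this resolution is what deprecated_name / deprecated_opts arrange for you when the option is registered.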
I hope that it'll help you.
Best regards,
Denis Makogon.
On Wed, Feb 26, 2014 at 4:17 PM, Day, Phil
philip@hp.com wrote:
Hi Folks,
I could do
-Original Message-
From: Chris Behrens [mailto:cbehr...@codestud.com]
Sent: 26 February 2014 22:05
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Future of the Nova API
This thread is many messages deep now and I'm busy with a
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: 24 February 2014 23:49
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Future of the Nova API
Similarly with a Xen vs KVM situation I don't think it's an extension-related
issue. In
features
into V2 if V3 was already available in a stable form - but V2 already provides
nearly complete support for nova-net features on top of Neutron. I fail to
see what is wrong with continuing to improve that.
Phil
-Original Message-
From: Day, Phil
Sent: 28 February 2014 11:07
Sorry if I'm coming late to this thread, but why would you define AZs to cover
orthogonal zones?
AZs are a very specific form of aggregate - they provide a particular isolation
semantic between the hosts (i.e. physical hosts are never in more than one AZ)
- hence the "availability" in the name.
The need arises when you want both zones to be candidates for scheduling when
no specific zone is specified. The only way to do that is either to have an AZ
which is a superset of the two AZs, or alternatively the default_schedule_zone
could take a list of zones instead
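For reference, the knob being discussed takes a single zone name today (which is exactly the limitation above); a hypothetical nova.conf using it might look like:

```ini
[DEFAULT]
# Zone assumed when the user supplies none. Takes one zone name,
# not a list - hence the superset-AZ workaround discussed above.
default_schedule_zone = az1
```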
-Original Message-
From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
Sent: 26 March 2014 20:33
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host
aggregates..
On Mar 26, 2014, at 11:40
-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com]
Sent: 27 March 2014 18:15
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host
aggregates..
On 03/27/2014 11:48 AM, Day, Phil wrote:
Sorry
Personally, I feel it is a mistake to continue to use the Amazon concept
of an availability zone in OpenStack, as it brings with it the
connotation from AWS EC2 that each zone is an independent failure
domain. This characteristic of EC2 availability zones is not enforced in
OpenStack Nova or
Hi Sylvain,
There was a similar thread on this recently - which might be worth reviewing:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031006.html
Some interesting use cases were posted, and I don't think a conclusion was
reached, which seems to suggest this might be a
Hi Folks,
Generally the scheduler's capabilities that are exposed via hints can be
enabled or disabled in a Nova install by choosing the set of filters that are
configured. However the server group feature doesn't fit that pattern -
even if the affinity filter isn't configured the
-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent: 07 April 2014 21:01
To: OpenStack Development Mailing List
Subject: [openstack-dev] [TripleO] config options, defaults, oh my!
So one interesting thing from the influx of new reviews is lots of patches
@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Server Groups are not an optional
element, bug or feature ?
On 04/07/2014 02:12 PM, Russell Bryant wrote:
On 04/07/2014 01:43 PM, Day, Phil wrote:
Generally the scheduler's capabilities that are exposed via hints can
be enabled
On 4/7/14, 10:01, Day, Phil philip@hp.com wrote:
I can see the case for Trove being to create an instance within a
customer's tenant (if nothing else it would make adding it onto their
Neutron network a lot easier), but I'm wondering why
-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: 07 April 2014 19:12
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Server Groups are not an optional
element, bug or feature ?
...
I consider it a complete working feature.
Hi Folks,
In the weekly scheduler meeting we've been trying to pull together a
consolidated list of Summit sessions so that we can find logical groupings and
make a more structured set of sessions for the limited time available at the
summit.
the submission deadline and discuss at the following IRC meeting on the 22nd.
Maybe there will be more relevant proposals to consider.
Regards,
Alex
P.S. I plan to submit a proposal regarding scheduling policies, and
maybe one more related to theme #1 below
From: Day, Phil philip@hp.com
Yep, that was the feature I was referring to.
As I said I don't have anything definite that shows this to be not working (and
the code looks fine) - just wanted to try and simplify the world a bit for a
while.
-Original Message-
From: Melanie Witt [mailto:melw...@yahoo-inc.com]
Sent:
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Disable async network allocation
Hi Phil
2013/10/21 Day, Phil philip@hp.com:
Hi Folks,
I'm trying to track down a couple of obscure issues in network port
creation where it would be really useful if I could disable
Hi Folks,
We're very occasionally seeing problems where a thread processing a create
hangs (and we've seen this when talking to Cinder and Glance). Whilst those issues
need to be hunted down in their own right, they do show up what seems to me to
be a weakness in the processing of delete requests
Development Mailing List
Subject: Re: [openstack-dev] [nova] Thoughts please on how to address a problem
with multiple deletes leading to a nova-compute thread pool problem
On 25 October 2013 23:46, Day, Phil philip@hp.com wrote:
Hi Folks,
We're very occasionally seeing problems where a thread
-Original Message-
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: 25 October 2013 17:05
To: openstack-dev
Subject: Re: [openstack-dev] [nova] Thoughts please on how to address a
problem with multiple deletes leading to a nova-compute thread pool
problem
Excerpts from Day,
Hi Drew,
Generally you need to create a new API extension and make some changes in the
main servers.py
The scheduler-hints API extension does this kind of thing, so if you look at:
api/openstack/compute/contrib/scheduler_hints.py for how the extension is
defined, and look in
machines that compose the work nova does.
Sent from my really tiny device...
On Oct 25, 2013, at 3:52 AM, Day, Phil
philip@hp.com wrote:
Hi Folks,
We're very occasionally seeing problems where a thread processing
a create hangs (and we've seen this when talking
Hi Rob,
I think it looks like a good option - but I'd like to see it exposed as such
to the user rather than a change in the default behaviour. I.e.
rebuild --keep-ephemeral=True
Phil
-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent:
-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent: 06 November 2013 22:08
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Bad review patterns
On 6 November 2013 21:34, Radomir Dopieralski
The hints are coded into the various scheduler filters, so the set supported on
any install depends on what filters have been configured.
I have a change under way (I need to just find the time to go back and fix the
last wave of review comments) to expose what is supported via an API call
Hi Folks,
I'd like to get some eyes on a bug I just filed:
https://bugs.launchpad.net/nova/+bug/1250049
A recent change (https://review.openstack.org/#/c/52189/9 ) introduced the
automatic disable / re-enable of nova-compute when connection to libvirt is
lost and recovered. The problem is
Hi Folks,
I'm a bit confused about the expectations of a manager class to be able to
receive and process messages from a previous RPC version. I thought the
objective was to always make changes such that the manager can process any
previous version of the call that could come from the last
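For what it's worth, the usual backwards-compatibility trick is to give any newly added RPC parameter a default, so a call made by a service still on the previous RPC version (which omits it) still dispatches cleanly. A minimal sketch (the class and method names here are illustrative, not Nova's actual code):

```python
# Sketch of RPC backwards compatibility via defaulted parameters.
# Names are illustrative; this is not Nova's actual manager code.
class ComputeManager(object):
    # 3.0 signature: terminate_instance(self, context, instance)
    # 3.1 added 'reservations'; the default keeps 3.0 callers working.
    def terminate_instance(self, context, instance, reservations=None):
        reservations = reservations or []
        return ("terminated", instance, reservations)
```

Once no caller older than 3.1 can exist (e.g. after a major RPC version bump), the default can be dropped.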
Hi,
I think the concept of allowing users to request a CPU topology is fine, but I
have a few questions / concerns:
The host exposes info about the vCPU count it is able to support, and the
scheduler picks on that basis. The guest image is just declaring upper limits
on the topology it can support. So
+1 from me - would much prefer to be able to pick this on an individual basis.
Could kind of see a case for keeping reset_network and inject_network_info
together - but don't have a strong feeling about it (as we don't use them)
-Original Message-
From: Andrew Laski
, just not with this particular image
- and that feels like a case that should fail validation at the API layer, not
down on the compute node where the only option is to reschedule or go into an
Error state.
Phil
-Original Message-
From: Day, Phil
Sent: 03 December 2013 12:03
Hi Nova cores,
As per the discussion at the Summit I need two (or more) nova cores to sponsor
the BP that gives guests a chance to shut down cleanly rather than just yanking
the virtual power cord out - which is approved and targeted for I2:
https://review.openstack.org/#/c/35303/
The Non API
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: 09 June 2014 19:03
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory
allocation ratio out of scheduler
On 06/09/2014 12:32 PM, Chris Friesen wrote:
On
Hi Chris,
The documentation is NOT the canonical source for the behaviour of the API;
currently the code should be seen as the reference. We've run into issues
before where people have tried to align the code to fit the documentation and
made backwards incompatible changes (although this is
I agree that we need to keep a tight focus on all API changes.
However, was the problem with the floating IP change just to do with the
implementation in Nova or the frequency with which Ceilometer was calling it?
Whatever guidelines we follow on API changes themselves it's pretty hard to
Hi Folks,
I was looking at the resize code in libvirt, and it has checks which raise an
exception if the target root or ephemeral disks are smaller than the current
ones - which seems fair enough I guess (you can't drop arbitrary disk content on
resize), except that because the check is in
impossible to reduce disk unless you have some really nasty guest
additions.
On Fri, Jun 13, 2014 at 6:02 AM, Day, Phil
philip@hp.com wrote:
Hi Folks,
I was looking at the resize code in libvirt, and it has checks which raise an
exception if the target root or ephemeral
...@rackspace.com]
Sent: 13 June 2014 13:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hypervisors allow disk reduction as
part of resize ?
On 06/13/2014 08:03 AM, Day, Phil wrote:
Theoretically impossible to reduce disk unless you have
Hi Folks,
A recent change introduced a unit test to "warn/notify developers" when they
make a change which will break the out-of-tree Ironic virt driver:
https://review.openstack.org/#/c/98201
Ok - so my change (https://review.openstack.org/#/c/68942) broke it as it adds
some extra parameters
contract
Hi!
On Fri, Jun 13, 2014 at 9:30 AM, Day, Phil
philip@hp.com wrote:
Hi Folks,
A recent change introduced a unit test to “warn/notify developers” when they
make a change which will break the out of tree Ironic virt driver:
https://review.openstack.org/#/c/98201
-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: 17 June 2014 15:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hypervisors allow disk reduction
as part of resize ?
On 06/17/2014 10:43 AM,
-Original Message-
From: Ahmed RAHAL [mailto:ara...@iweb.com]
Sent: 18 June 2014 01:21
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] locked instances and snaphot
Hi there,
Le 2014-06-16 15:28, melanie witt a écrit :
Hi all,
[...]
During the
at 11:05:01AM +, Day, Phil wrote:
-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: 17 June 2014 15:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hypervisors allow disk
reduction as part
The basic framework for supporting this kind of resource scheduling is the
extensible-resource-tracker:
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
https://review.openstack.org/#/c/86050/
https://review.openstack.org/#/c/71557/
Once that lands, being able to schedule on
Pipes jaypi...@gmail.com wrote:
On 06/17/2014 05:42 PM, Daniel P. Berrange wrote:
On Tue, Jun 17, 2014 at 04:32:36PM +0100, Pádraig Brady wrote:
On 06/13/2014 02:22 PM, Day, Phil wrote:
I guess the question I’m really asking here is: “Since we know
resize down won’t work in all cases
Hi Michael,
Not sure I understand the need for a gap between Juno Spec approval freeze
(Jul 10th) and K opens for spec proposals (Sep 4th). I can understand that
K specs won't get approved in that period, and may not get much feedback from
the cores - but I don't see the harm in letting
people to focus their efforts on fixing bugs in
that period was the main thing. The theory was if we encouraged people
to work on specs for the next release, then they'd be distracted from
fixing the bugs we need fixed in J.
Cheers,
Michael
On Tue, Jun 24, 2014 at 9:08 PM, Day, Phil
, 2014 at 12:53 AM, Day, Phil philip@hp.com wrote:
-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: 24 June 2014 13:08
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Timeline for the rest of the Juno
release
On 06/24/2014 07
Hi WingWJ,
I agree that we shouldn’t have a task state of None while an operation is in
progress. I’m pretty sure back in the day this didn’t use to be the case and
task_state stayed as Scheduling until it went to Networking (now of course
networking and BDM happen in parallel, so you have