Thanks for letting us know Michael, and thanks for doing it in such a moving
way. Sad news indeed
Phil
From: Michael Still [mailto:mi...@stillhq.com]
Sent: 08 April 2015 05:49
To: OpenStack Development Mailing List
Subject: [openstack-dev] In loving memory of Chris Yeoh
It is my sad duty
Hi Folks,
Is there any support yet in novaclient for requesting a specific microversion ?
(looking at the final leg of extending clean-shutdown to the API, and
wondering how to test this in devstack via the novaclient)
Phil
Hi,
Your problem is that you still have the original ram filter configured, so it's
still removing all of the hosts. Try removing that and you should be OK. Note
though that then any hosts not in an aggregate with a ram ratio set won't have
a ram limit at all.
You might also find the
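A minimal sketch of the behaviour described above (hypothetical helper, not the actual filter code): a host whose aggregate carries a ram_allocation_ratio gets a limit derived from it, and a host outside any such aggregate gets no ram limit at all.

```python
def effective_ram_limit(host_total_mb, aggregate_metadata):
    # Hosts in an aggregate with a ram ratio get a limit; others get none.
    ratio = aggregate_metadata.get('ram_allocation_ratio')
    if ratio is None:
        return None  # no ram limit at all
    return host_total_mb * float(ratio)
```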
I think it's counted by tenant not user, so can you re-run the db query based
on tenant ?
Sent from Samsung Mobile
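The suggested re-aggregation (counting by tenant rather than by user) amounts to something like this sketch, with a made-up record shape rather than the real DB schema:

```python
from collections import Counter

def usage_by_tenant(instances):
    # Count instances per tenant (project), not per individual user.
    return Counter(i['tenant_id'] for i in instances)
```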
Original message
From: Don Waterloo
Date:05/11/2014 17:29 (GMT+01:00)
To: openstack@lists.openstack.org
Subject: [Openstack] nova absolute-limits versus usage
-Original Message-
From: henry hly [mailto:henry4...@gmail.com]
Sent: 08 October 2014 09:16
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
cascading
Hi,
Good questions: why not just
Hi Jay,
So just to be clear, are you saying that we should generate 2
notification messages on Rabbit for every DB update? That feels
like a big overkill for me. If I follow that logic then the current
state transition notifications should also be changed to Starting to
update
I think the expectation is that if a user is already interacting with Neutron
to create ports then they should do the security group assignment in Neutron as
well.
The trouble I see with supporting this way of assigning security groups is what
should the correct behavior be if the user passes
I think we should aim to /always/ have 3 notifications using a pattern
of
try:
...notify start...
...do the work...
...notify end...
except:
...notify abort...
Precisely my viewpoint as well. Unless we standardize on the above, our
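A runnable sketch of that start/end/abort pattern (hypothetical helper name, and a minimal notifier interface assumed):

```python
from contextlib import contextmanager

@contextmanager
def notified(notify, event):
    # Emit start, then end on success or abort on any failure.
    notify('%s.start' % event)
    try:
        yield
        notify('%s.end' % event)
    except Exception:
        notify('%s.abort' % event)
        raise
```

Wrapping the work in `with notified(send, 'instance.resize'):` then yields exactly the three-message pattern sketched above.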
Hi Folks,
I'd like to get some opinions on the use of pairs of notification messages for
simple events. I get that for complex operations on an instance (create,
rebuild, etc) a start and end message are useful to help instrument progress
and how long the operations took. However we also
, Sep 22, 2014 at 11:03:02AM +, Day, Phil wrote:
Hi Folks,
I'd like to get some opinions on the use of pairs of notification
messages for simple events. I get that for complex operations on
an instance (create, rebuild, etc) a start and end message are useful
to help instrument
DevStack doesn't register a v2.1 endpoint to keystone now, but we can use
it by calling it directly.
It is true that it is difficult to use the v2.1 API now, and we can check
its behavior via the v3 API instead.
I posted a patch[1] for registering v2.1 endpoint to keystone, and I confirmed
-Original Message-
From: Kenichi Oomichi [mailto:oomi...@mxs.nes.nec.co.jp]
Sent: 18 September 2014 02:44
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] are we going to remove the novaclient
v3 shell or what?
-Original
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: 12 September 2014 19:37
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Expand resource name allowed
characters
Had to laugh about the PILE OF POO character :) Comments inline...
Can we
Hi,
I'd like to ask for a FFE for the 3 patchsets that implement quotas for server
groups.
Server groups (which landed in Icehouse) provides a really useful anti-affinity
filter for scheduling that a lot of customers would like to use, but without
some form of quota control to limit the
-Original Message-
From: Sean Dague [mailto:s...@dague.net]
Sent: 05 September 2014 11:49
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out
virt drivers
On 09/05/2014 03:02 AM, Sylvain Bauza wrote:
Ahem,
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] FFE server-group-quotas
On 09/05/2014 11:28 AM, Ken'ichi Ohmichi wrote:
2014-09-05 21:56 GMT+09:00 Day, Phil philip@hp.com:
Hi,
I'd like to ask for a FFE for the 3 patchsets that implement quotas for
server
-Original Message-
From: Nikola Đipanov [mailto:ndipa...@redhat.com]
Sent: 03 September 2014 10:50
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Feature Freeze Exception process for
Juno
snip
I will follow up with a more detailed email about what I
One final note: the specs referenced above didn't get approved until
Spec Freeze, which seemed to leave me with less time to implement
things. In fact, it seemed that a lot of specs didn't get approved
until spec freeze. Perhaps if we had more staggered approval of
specs, we'd have
Hi Daniel,
Thanks for putting together such a thoughtful piece - I probably need to
re-read it few times to take in everything you're saying, but a couple of
thoughts that did occur to me:
- I can see how this could help where a change is fully contained within a virt
driver, but I wonder
Adding more bureaucracy (specs) in such a case is not the best way to resolve
team throughput issues...
I’d argue that if fundamental design disagreements can be surfaced and debated
at the design stage rather than first emerging on patch set XXX of an
implementation, and be used to then
Needing 3 out of 19 instead of 3 out of 20 isn't an order of magnitude
according to my calculator. It's much closer/fairer than making it 2/19 vs
3/20.
If a change is borderline in that it can only get 2 other cores maybe it
doesn't have a strong enough case for an exception.
Phil
Sent
:31AM -0400, Jay Pipes wrote:
On 08/20/2014 04:48 AM, Nikola Đipanov wrote:
On 08/20/2014 08:27 AM, Joe Gordon wrote:
On Aug 19, 2014 10:45 AM, Day, Phil philip@hp.com
mailto:philip@hp.com wrote:
-Original Message-
From: Nikola Đipanov [mailto:ndipa...@redhat.com
-Original Message-
From: Nikola Đipanov [mailto:ndipa...@redhat.com]
Sent: 19 August 2014 17:50
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Scheduler split wrt Extensible Resource
Tracking
On 08/19/2014 06:39 PM, Sylvain Bauza wrote:
On the other
wrote:
On Wed, Jul 23, 2014 at 06:08:52PM +, Day, Phil wrote:
Hi Folks,
I'd like to propose the following as an exception to the spec freeze, on
the
basis that it addresses a potential data corruption issue in the Guest.
https://review.openstack.org/#/c/89650
We were pretty
Hi Folks,
I'd like to propose the following as an exception to the spec freeze, on the
basis that it addresses a potential data corruption issue in the Guest.
https://review.openstack.org/#/c/89650
We were pretty close to getting acceptance on this before, apart from a debate
over whether
Sorry, forgot to put this in my previous message. I've been advocating the
ability to use names instead of UUIDs for server groups pretty much since I
saw them last year.
I'd like to just enforce that server group names must be unique within a
tenant, and then allow names to be used
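If names were enforced as unique per tenant as proposed, resolving either a name or a UUID to a group could look like this sketch (hypothetical data shape, not the Nova code):

```python
def resolve_group(groups, tenant_id, name_or_uuid):
    # With per-tenant unique names, a name or UUID identifies at most one group.
    matches = [g for g in groups
               if g['tenant_id'] == tenant_id
               and name_or_uuid in (g['uuid'], g['name'])]
    if not matches:
        raise LookupError('no such group')
    if len(matches) > 1:
        raise LookupError('group name is ambiguous within this tenant')
    return matches[0]
```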
Hi Folks,
I noticed a couple of changes that have just merged to allow the server group
hints to be specified by name (some legacy behavior around automatically
creating groups).
https://review.openstack.org/#/c/83589/
https://review.openstack.org/#/c/86582/
But group names aren't constrained
Hi Melanie,
I have a BP (https://review.openstack.org/#/c/89650) and the first couple of
bits of implementation (https://review.openstack.org/#/c/68942/
https://review.openstack.org/#/c/99916/) out for review on this very topic ;-)
Phil
-Original Message-
From: melanie witt
Hi Folks,
Working on the server groups quotas I hit an issue with the limits API which I
wanted to get feedback on.
Currently this always shows just the project level quotas and usage, which can
be confusing if there is a lower user specific quota. For example:
Project Quota = 10
User Quota
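In the example above the enforced limit is effectively the lower of the two values; a minimal sketch of that rule (not the actual quota code):

```python
def effective_limit(project_quota, user_quota=None):
    # A user-specific quota, when set, caps the project-level quota.
    if user_quota is None:
        return project_quota
    return min(project_quota, user_quota)
```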
should I need to submit a patch/spec to add it now?
On Wed, Jun 25, 2014 at 5:53 PM, Day, Phil
philip@hp.commailto:philip@hp.com wrote:
Looking at this a bit deeper the comment in _start_building() says that it's
doing this to “Save the host and launched_on fields and log appropriately
-Original Message-
From: Ahmed RAHAL [mailto:ara...@iweb.com]
Sent: 25 June 2014 20:25
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] should we have a stale data indication in
nova list/show?
On 2014-06-25 14:26, Day, Phil wrote:
-Original
, 2014 at 12:53 AM, Day, Phil philip@hp.com wrote:
-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: 24 June 2014 13:08
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Timeline for the rest of the Juno
release
On 06/24/2014 07
Hi WingWJ,
I agree that we shouldn’t have a task state of None while an operation is in
progress. I’m pretty sure back in the day this didn’t use to be the case and
task_state stayed as Scheduling until it went to Networking (now of course
networking and BDM happen in parallel, so you have
this whole update
might just not be needed – although I still like the idea of a state to show
that the request has been taken off the queue by the compute manager.
From: Day, Phil
Sent: 25 June 2014 10:35
To: OpenStack Development Mailing List
Subject: RE: [openstack-dev] [nova] Why
I think there’s a bit more to it than just having an aggregate:
- Ironic provides its own version of the Host manager class for the
scheduler, I’m not sure if that is fully compatible with the non-ironic case.
Even in the BP for merging the Ironic driver back into Nova it still looks
-Original Message-
From: Sean Dague [mailto:s...@dague.net]
Sent: 25 June 2014 11:49
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] should we have a stale data indication in
nova list/show?
On 06/25/2014 04:28 AM, Belmiro
The basic framework for supporting this kind of resource scheduling is the
extensible-resource-tracker:
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
https://review.openstack.org/#/c/86050/
https://review.openstack.org/#/c/71557/
Once that lands, being able to schedule on
Pipes jaypi...@gmail.com wrote:
On 06/17/2014 05:42 PM, Daniel P. Berrange wrote:
On Tue, Jun 17, 2014 at 04:32:36PM +0100, Pádraig Brady wrote:
On 06/13/2014 02:22 PM, Day, Phil wrote:
I guess the question I’m really asking here is: “Since we know
resize down won’t work in all cases
Hi Michael,
Not sure I understand the need for a gap between Juno Spec approval freeze
(Jul 10th) and K opens for spec proposals (Sep 4th). I can understand that
K specs won't get approved in that period, and may not get much feedback from
the cores - but I don't see the harm in letting
people to focus their efforts on fixing bugs in
that period was the main thing. The theory was if we encouraged people
to work on specs for the next release, then they'd be distracted from
fixing the bugs we need fixed in J.
Cheers,
Michael
On Tue, Jun 24, 2014 at 9:08 PM, Day, Phil
-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: 17 June 2014 15:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction
as part of resize ?
On 06/17/2014 10:43 AM,
-Original Message-
From: Ahmed RAHAL [mailto:ara...@iweb.com]
Sent: 18 June 2014 01:21
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] locked instances and snaphot
Hi there,
On 2014-06-16 15:28, melanie witt wrote:
Hi all,
[...]
During the
at 11:05:01AM +, Day, Phil wrote:
-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: 17 June 2014 15:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk
reduction as part
contract
Hi!
On Fri, Jun 13, 2014 at 9:30 AM, Day, Phil
philip@hp.commailto:philip@hp.com wrote:
Hi Folks,
A recent change introduced a unit test to “warn/notify developers” when they
make a change which will break the out of tree Ironic virt driver:
https://review.openstack.org/#/c/98201
Hi Chris,
The documentation is NOT the canonical source for the behaviour of the API,
currently the code should be seen as the reference. We've run into issues
before where people have tried to align code to the fit the documentation and
made backwards incompatible changes (although this is
I agree that we need to keep a tight focus on all API changes.
However was the problem with the floating IP change just to do with the
implementation in Nova or the frequency with which Ceilometer was calling it ?
Whatever guidelines we follow on API changes themselves it's pretty hard to
Hi Folks,
I was looking at the resize code in libvirt, and it has checks which raise an
exception if the target root or ephemeral disks are smaller than the current
ones - which seems fair enough I guess (you can't drop arbitrary disk content on
resize), except that because the check is in
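The kind of check being described amounts to something like this (a sketch of the idea, not the libvirt driver code):

```python
def check_resize_disk(current_gb, target_gb):
    # Refuse to shrink a disk on resize: arbitrary content would be lost.
    if target_gb < current_gb:
        raise ValueError('Resizing a disk down is not supported')
```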
impossible to reduce disk unless you have some really nasty guest
additions.
On Fri, Jun 13, 2014 at 6:02 AM, Day, Phil
philip@hp.commailto:philip@hp.com wrote:
Hi Folks,
I was looking at the resize code in libvirt, and it has checks which raise an
exception if the target root or ephemeral
...@rackspace.com]
Sent: 13 June 2014 13:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as
part of resize ?
On 06/13/2014 08:03 AM, Day, Phil wrote:
Theoretically impossible to reduce disk unless you have
Hi Folks,
A recent change introduced a unit test to warn/notify developers when they
make a change which will break the out of tree Ironic virt driver:
https://review.openstack.org/#/c/98201
Ok - so my change (https://review.openstack.org/#/c/68942) broke it as it adds
some extra parameters
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: 09 June 2014 19:03
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory
allocation ratio out of scheduler
On 06/09/2014 12:32 PM, Chris Friesen wrote:
On
Hi Joe,
Can you give some examples of what that data would be used for ?
It sounds on the face of it that what you’re looking for is pretty similar to
what Extensible Resource Tracker sets out to do
(https://review.openstack.org/#/c/86050
https://review.openstack.org/#/c/71557)
Phil
From:
want to lie to your
users!
[Day, Phil] I agree that there is a problem with having every new option we add
in extra_specs leading to a new set of flavors. There are a number of
changes up for review to expose more hypervisor capabilities via extra_specs
that also have this potential problem
want to lie to your
users!
[Day, Phil] BTW you might be able to (nearly) do this already if you define
aggregates for the two QoS pools, and limit which projects can be scheduled
into those pools using the AggregateMultiTenancyIsolation filter. I say
nearly because as pointed out by this spec
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: 04 June 2014 19:23
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory
allocation ratio out of scheduler
On 06/04/2014 11:56 AM, Day, Phil wrote:
Hi Jay
Hi Dan,
On a compute manager that is still running the old version of the code
(i.e using the previous object version), if a method that hasn't yet
been converted to objects gets a dict created from the new version of
the object (e.g. rescue, get_console_output), then object_compat()
The patch [2] proposes changing the default DNS driver from
'nova.network.noop_dns_driver.NoopDNSDriver' to one that verifies whether
DNS entries already exist before adding them, such as the
'nova.network.minidns.MiniDNS'.
Changing a default setting in a way that isn't backwards compatible
Hi Folks,
I've been working on a change to make the user_data field an optional part of
the Instance object since passing it around everywhere seems a bad idea since:
- It can be huge
- It's only used when getting metadata
- It can contain user sensitive data
-
Could we replace the refresh from the period task with a timestamp in the
network cache of when it was last updated so that we refresh it only when it’s
accessed if older than X ?
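That refresh-on-access scheme could be sketched like this (hypothetical helper; a real implementation would live in the instance's network info cache):

```python
import time

class RefreshOnAccess:
    """Cache a value and refetch on access only when older than max_age."""

    def __init__(self, fetch, max_age_seconds):
        self._fetch = fetch
        self._max_age = max_age_seconds
        self._value = None
        self._updated = None

    def get(self):
        now = time.time()
        if self._updated is None or now - self._updated > self._max_age:
            self._value = self._fetch()
            self._updated = now
        return self._value
```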
From: Aaron Rosen [mailto:aaronoro...@gmail.com]
Sent: 29 May 2014 01:47
To: Assaf Muller
Cc: OpenStack Development
this isn't
the case.
Unfortunately the API removal in Nova was followed by similar changes in
novaclient and Horizon, so fixing Icehouse at this point is probably going to
be difficult.
[Day, Phil] I think we should revert the changes in all three systems then.
We have the rules about
Hi Vish,
I think quota classes have been removed from Nova now.
Phil
Sent from Samsung Mobile
Original message
From: Vishvananda Ishaya
Date:27/05/2014 19:24 (GMT+00:00)
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova]
-Original Message-
From: Tripp, Travis S
Sent: 07 May 2014 18:06
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] Confusion about the respective use
cases for volume's admin_metadata, metadata and glance_image_metadata
We're
In the original API there was a way to remove members from the group.
This didn't make it into the code that was submitted.
Well, it didn't make it in because it was broken. If you add an instance to a
group after it's running, a migration may need to take place in order to keep
the
Nova can now detect host unreachable, but it fails to distinguish host
isolation, host death, and nova-compute service down. When host unreachable is
reported, users have to work out the exact state by themselves and then take
the appropriate measures to recover. Therefore we'd like to improve the host
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: 25 April 2014 23:29
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Proposal: remove the server groups
feature
On Fri, 2014-04-25 at 22:00 +, Day, Phil wrote:
Hi Jay,
I'm going
Hi Jay,
I'm going to disagree with you on this one, because:
i) This is a feature that was discussed in at least one if not two Design
Summits and went through a long review period, it wasn't one of those changes
that merged in 24 hours before people could take a good look at it. Whatever
Is that right, and any reason why the default for
vif_plugging_is_fatal shouldn't be False insated of True to make this
sequence less dependent on matching config changes ?
Yes, because the right approach to a new deployment is to have this
enabled. If it was disabled by default, most
On 04/15/2014 11:01 AM, Brian Elliott wrote:
* specs review. The new blueprint process is a work of genius, and I
think it's already working better than what we've had in previous
releases. However, there are a lot of blueprints there in review, and
we need to focus on making sure these
I would like to announce my TC candidacy.
I work full time for HP where I am the architect and technical lead for the
core OpenStack Engineering team, with responsibility for the architecture and
deployment of the OpenStack Infrastructure projects (Nova, Neutron, Cinder,
Glance, Swift) across
Hi Folks,
Sorry for being a tad slow on the uptake here, but I'm trying to understand the
sequence of updates required to move from a system that doesn't have external
events configured between Neutron and Nova and one that does (The new
nova-specs repo would have captured this as part of
-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: 08 April 2014 13:13
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Server Groups are not an optional
element, bug or feature ?
On 04/08/2014 06:29 AM, Day, Phil wrote
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: 08 April 2014 14:25
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones :
possible or not ?
On Tue, 2014-04-08 at 10:49 +, Day, Phil wrote
, 2014-04-08 at 10:49 +, Day, Phil wrote:
On a large cloud you're protected against this to some extent if the
number of servers is greater than the number of instances in the quota.
However it does feel that there are a couple of things missing to
really provide some better protection
-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com]
Sent: 09 April 2014 15:37
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones :
possible or not ?
On 04/09/2014 03:55 AM, Day, Phil wrote:
I
-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent: 07 April 2014 21:01
To: OpenStack Development Mailing List
Subject: [openstack-dev] [TripleO] config options, defaults, oh my!
So one interesting thing from the influx of new reviews is lots of patches
@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Server Groups are not an optional
element, bug or feature ?
On 04/07/2014 02:12 PM, Russell Bryant wrote:
On 04/07/2014 01:43 PM, Day, Phil wrote:
Generally the scheduler's capabilities that are exposed via hints can
be enabled
: EA238CF3 | twt: @justinhopper
On 4/7/14, 10:01, Day, Phil philip@hp.commailto:philip@hp.com
wrote:
I can see the case for Trove being to create an instance within a
customer's tenant (if nothing else it would make adding it onto their
Neutron network a lot easier), but I'm wondering why
-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: 07 April 2014 19:12
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Server Groups are not an optional
element, bug or feature ?
...
I consider it a complete working feature.
and then “lock” them after it is complete.
Thanks,
Justin Hopper
Software Engineer - DBaaS
irc: juice | gpg: EA238CF3 | twt: @justinhopper
On 4/7/14, 10:01, Day, Phil philip@hp.com wrote:
I can see the case for Trove being to create an instance within a
customer's tenant
On a large cloud you're protected against this to some extent if the number of
servers is greater than the number of instances in the quota.
However it does feel that there are a couple of things missing to really
provide some better protection:
- A quota value on the maximum size of a server group
-
Hi Sylvain,
There was a similar thread on this recently - which might be worth reviewing:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031006.html
Some interesting use cases were posted, and I don't think a conclusion was
reached, which seems to suggest this might be a
Hi Folks,
Generally the scheduler's capabilities that are exposed via hints can be
enabled or disabled in a Nova install by choosing the set of filters that are
configured. However the server group feature doesn't fit that pattern -
even if the affinity filter isn't configured the
Personally, I feel it is a mistake to continue to use the Amazon concept
of an availability zone in OpenStack, as it brings with it the
connotation from AWS EC2 that each zone is an independent failure
domain. This characteristic of EC2 availability zones is not enforced in
OpenStack Nova or
Sorry if I'm coming late to this thread, but why would you define AZs to cover
orthogonal zones ?
AZs are a very specific form of aggregate - they provide a particular isolation
schematic between the hosts (i.e. physical hosts are never in more than one AZ)
- hence the availability in the name.
The need arises when you want both zones to be used for scheduling when no
specific zone is specified. The only way to do that is either to have an AZ
which is a superset of the two AZs, or alternatively to let
default_scheduler_zone take a list of zones instead
-Original Message-
From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
Sent: 26 March 2014 20:33
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host
aggregates..
On Mar 26, 2014, at 11:40
-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com]
Sent: 27 March 2014 18:15
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host
aggregates..
On 03/27/2014 11:48 AM, Day, Phil wrote:
Sorry
-Original Message-
From: Chris Behrens [mailto:cbehr...@codestud.com]
Sent: 26 February 2014 22:05
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Future of the Nova API
This thread is many messages deep now and I'm busy with a
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: 24 February 2014 23:49
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Future of the Nova API
Similarly with a Xen vs KVM situation I don't think it's an extension
related issue. In
features
into V2 if V3 was already available in a stable form - but V2 already provides
nearly complete support for nova-net features on top of Neutron. I fail to
see what is wrong with continuing to improve that.
Phil
-Original Message-
From: Day, Phil
Sent: 28 February 2014 11:07
deprecated options are present, the option corresponding to
the first element of deprecated_opts will be chosen.
I hope that it'll help you.
Best regards,
Denis Makogon.
On Wed, Feb 26, 2014 at 4:17 PM, Day, Phil
philip@hp.commailto:philip@hp.com wrote:
Hi Folks,
I could do
Hi Folks,
I could do with some pointers on config value deprecation.
All of the examples in the code and documentation seem to deal with the case
of old_opt being replaced by new_opt but still returning the same value.
Here using deprecated_name and / or deprecated_opts in the definition of
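The precedence described later in the thread — the new name wins, otherwise the first deprecated name that is present is chosen — can be sketched outside of oslo.config like this (hypothetical, dict-backed lookup, not the library code):

```python
def resolve_option(raw_conf, new_name, deprecated_names):
    # The new option name takes precedence; otherwise the first
    # deprecated name present in the config is chosen.
    if new_name in raw_conf:
        return raw_conf[new_name]
    for old in deprecated_names:
        if old in raw_conf:
            return raw_conf[old]
    return None
```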
Hi,
There were a few related blueprints which were looking to add various
additional types of resource to the scheduler - all of which will now be
implemented on top of a new generic mechanism covered by:
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
-Original
-Original Message-
From: Justin Santa Barbara [mailto:jus...@fathomdb.com]
Sent: 28 January 2014 20:17
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances
through metadata service
Thanks
}
}
if False: # TODO: store ancestor ids
On Tue, Jan 28, 2014 at 4:38 AM, John Garbutt j...@johngarbutt.com
wrote:
On 27 January 2014 14:52, Justin Santa Barbara jus...@fathomdb.com
wrote:
Day, Phil wrote:
We already have a mechanism now where an instance can push
metadata
What worried me most, I think, is that if we make this part of the standard
metadata then everyone would get it, and that raises a couple of concerns:
- Users with lots of instances (say 1000's) but who weren't trying to run any
form of discovery would start getting a lot more metadata
Cool. I like this a good bit better as it avoids the reboot. Still, this is
a rather
large amount of data to copy around if I'm only changing a single file in
Nova.
I think in most cases transfer cost is worth it to know you're deploying what
you tested. Also it is pretty easy to
On 01/22/2014 12:17 PM, Dan Prince wrote:
I've been thinking a bit more about how TripleO updates are developing
specifically with regards to compute nodes. What is commonly called the
update story I think.
As I understand it we expect people to actually have to reboot a compute
node in
Hi Sylvain,
The change only makes the user have to supply a network ID if there is more
than one private network available (and the issue there is that otherwise the
assignment order in the Guest is random, which normally leads to all sorts of
routing problems).
I'm running a standard
Hi Justin,
I can see the value of this, but I'm a bit wary of the metadata service
extending into a general API - for example I can see this extending into a
debate about what information needs to be made available about the instances
(would you always want all instances exposed, all details,