,
    security_groups: [
        {
            name: default
        }
    ],
    uuid: 2adcdda2-561b-494b-a8f6-378b07ac47a4
},
... (the above is repeated for every instance) ...
]
On Fri, Jan 24, 2014 at 8:43 AM, Day, Phil
philip@hp.commailto:philip@hp.com wrote:
Hi Justin
Good points - thank you. For arbitrary operations, I agree that it would be
better to expose a token in the metadata service, rather than allowing the
metadata service to expose unbounded amounts of API functionality. We
should therefore also have a per-instance token in the metadata,
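A rough sketch of what a guest consuming such a token might look like (the `auth_token` key and the metadata layout here are assumptions for illustration, not an existing Nova field):

```python
import json

# Hypothetical metadata document as a guest would retrieve it from
# http://169.254.169.254/openstack/latest/meta_data.json. The auth_token
# field is an assumption for illustration, not an existing Nova key.
SAMPLE_METADATA = json.dumps({
    "uuid": "2adcdda2-561b-494b-a8f6-378b07ac47a4",
    "security_groups": [{"name": "default"}],
    "meta": {"auth_token": "scoped-token-for-this-instance"},
})

def extract_instance_token(metadata_json):
    """Pull the per-instance token out of a metadata document, if present."""
    doc = json.loads(metadata_json)
    return doc.get("meta", {}).get("auth_token")

token = extract_instance_token(SAMPLE_METADATA)
```

The point being that the guest only ever holds a narrowly scoped token, rather than the metadata service proxying arbitrary API calls on its behalf.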
So, I actually don't think the two concepts (reservations and
isolated instances) are competing ideas. Isolated instances are
actually not reserved. They are simply instances that have a
condition placed on their assignment to a particular compute node
that the node must only be
Hi Phil and Jay,
Phil, maybe you remember I discussed with you the possibility of using
pclouds with Climate, but we finally ended up using Nova aggregates and a
dedicated filter.
That works pretty well. We don't use instance_properties
but rather aggregate metadata but the idea
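The general shape of such an aggregate-metadata filter might look like this (class and metadata key names are illustrative, not the actual Climate filter):

```python
# Sketch of an aggregate-metadata scheduler filter: a host passes only if
# one of its aggregates is reserved for the requesting tenant, while hosts
# carrying no reservation metadata at all stay generally usable. Class and
# key names here are illustrative, not the actual Climate filter.
class TenantReservationFilter:
    def host_passes(self, host_aggregates, request_tenant_id):
        """host_aggregates: list of metadata dicts, one per aggregate."""
        for metadata in host_aggregates:
            reserved_for = metadata.get("reserved_for_tenant")
            if reserved_for is None:
                continue  # this aggregate carries no reservation
            if reserved_for == request_tenant_id:
                return True  # host is reserved for this tenant
        # No match: pass only if the host is entirely unreserved.
        return all("reserved_for_tenant" not in m for m in host_aggregates)

f = TenantReservationFilter()
```

Driving this from aggregate metadata rather than instance_properties keeps the reservation policy on the operator side instead of in each boot request.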
I think there is clear water between this and the existing aggregate based
isolation. I also think this is a different use case from reservations. It's
*mostly* like a new scheduler hint, but because it has billing impacts I think
it needs to be more than just that - for example the
-Original Message-
From: Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
Sent: 21 January 2014 14:21
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Next steps for Whole Host allocation / Pclouds
Exactly - that's why I wanted
HI Folks,
The original (and fairly simple) driver behind whole-host-allocation
(https://wiki.openstack.org/wiki/WholeHostAllocation) was to enable users to
get guaranteed isolation for their instances. This then grew somewhat along
the lines of "if they have in effect a dedicated host then"
-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent: 10 January 2014 08:54
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] where to expose network quota
On 8 January 2014 03:01, Christopher Yeoh
Would be nice in this specific example though if the actual upgrade impact was
explicitly called out in the commit message.
From the DocImpact it looks as if some Neutron config options are changing
names - in which case the impact would seem to be that running systems have
until the end of
Hi Thierry,
Thanks for a great summary.
I don't really share your view that there is a us vs them attitude emerging
between operators and developers (but as someone with a foot in both camps
maybe I'm just thinking that because otherwise I'd become even more bi-polar
:-)
I would suggest
Hi Sean, and Happy New Year :-)
-Original Message-
From: Sean Dague [mailto:s...@dague.net]
Sent: 30 December 2013 22:05
To: Day, Phil; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] minimum review period for functional
changes
Sent from Samsung Mobile
Original message
From: Pádraig Brady p...@draigbrady.com
Date:
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Cc: Day, Phil philip@hp.com
Subject: Re: [openstack-dev] [nova] - Revert change
files
though at the moment.
Phil
Sent from Samsung Mobile
Original message
From: Pádraig Brady p...@draigbrady.com
Date:
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,Day, Phil philip@hp.com
Subject: Re: [openstack-dev
)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] minimum review period for functional
changes that break backwards compatibility
On 29 December 2013 21:06, Day, Phil philip@hp.com wrote:
What is the minimum review period intended to accomplish? I mean:
everyone
On 29 December 2013 04:21, Day, Phil philip@hp.com wrote:
Hi Folks,
I know it may seem odd to be arguing for slowing down a part of the
review process, but I'd like to float the idea that there should be a
minimum review period for patches that change existing functionality
Hi Folks,
As highlighted in the thread "minimum review period for functional changes" I'd
like to propose that change https://review.openstack.org/#/c/63209/ is
reverted because:
- It causes inconsistent behaviour in the system, as any existing
default backing files will have ext3
that break backwards compatibility
On 12/29/2013 03:06 AM, Day, Phil wrote:
snip
Basically, I'm not sure what problem you're trying to solve - let's tease that
out, and then talk about how to solve it. Backwards incompatible change
landed might be the problem - but since every reviewer knew
Hi Folks,
I know it may seem odd to be arguing for slowing down a part of the review
process, but I'd like to float the idea that there should be a minimum review
period for patches that change existing functionality in a way that isn't
backwards compatible.
The specific change that got me
-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent: 29 December 2013 05:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] minimum review period for functional
changes that break backwards compatibility
+1, I would make the 14:00 meeting. I often have good intentions of making the
21:00 meeting, but it's tough to work in around family life
Sent from Samsung Mobile
Original message
From: Joe Gordon joe.gord...@gmail.com
Date:
To: OpenStack Development Mailing List
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] All I want for Christmas is one more +2
...
On 12/12/2013 09:22 AM, Day, Phil wrote:
Hi Cores,
The "Stop, Rescue, and Delete should give guest a chance to shutdown"
change https://review.openstack.org/#/c/35303
Hi Cores,
The "Stop, Rescue, and Delete should give guest a chance to shutdown" change
https://review.openstack.org/#/c/35303/ was approved a couple of days ago, but
failed to merge because the RPC version had moved on. It's rebased and sitting
there with one +2 and a bunch of +1s - would be
Hi Nova cores,
As per the discussion at the Summit I need two (or more) nova cores to sponsor
the BP that allows Guests a chance to shutdown cleanly rather than just yanking
the virtual power cord out - which is approved and targeted for I2
https://review.openstack.org/#/c/35303/
The Non API
Hi,
I think the concept of allowing users to request a cpu topology, but have a few
questions / concerns:
The host exposes info about the vCPU count it is able to support, and the
scheduler picks on that basis. The guest image just declares upper limits on
the topology it can support. So
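The fit check being discussed could be sketched like this (an illustration of the idea, not Nova's actual code; the `image_max` limits stand in for image-declared properties):

```python
# Illustrative check: does a requested guest CPU topology fit both the
# host's vCPU capacity and the upper limits the image declares it can
# support? Not Nova's actual code; image_max stands in for image properties.
def topology_fits(sockets, cores, threads, host_vcpus, image_max):
    """image_max: upper limits, e.g. {"sockets": 2, "cores": 8, "threads": 2}."""
    total = sockets * cores * threads
    if total > host_vcpus:
        return False  # host cannot supply enough vCPUs
    # Missing image limits default to "no constraint" (the total itself).
    return (sockets <= image_max.get("sockets", total)
            and cores <= image_max.get("cores", total)
            and threads <= image_max.get("threads", total))
```

Note that in this split the host check is a capacity question for the scheduler, while the image limits are a validity question that could be rejected at the API layer.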
+1 from me - would much prefer to be able to pick this on an individual basis.
Could kind of see a case for keeping reset_network and inject_network_info
together - but don't have a strong feeling about it (as we don't use them)
-Original Message-
From: Andrew Laski
, just not with this particular image
- and that feels like a case that should fail validation at the API layer, not
down on the compute node where the only option is to reschedule or go into an
Error state.
Phil
-Original Message-
From: Day, Phil
Sent: 03 December 2013 12:03
Hi Folks,
I'm a bit confused about the expectations of a manager class to be able to
receive and process messages from a previous RPC version. I thought the
objective was to always make changes such that the manager can process any
previous version of the call that could come from the last
Hi Folks,
I'd like to get some eyes on a bug I just filed:
https://bugs.launchpad.net/nova/+bug/1250049
A recent change (https://review.openstack.org/#/c/52189/9 ) introduced the
automatic disable / re-enable of nova-compute when connection to libvirt is
lost and recovered. The problem is
The hints are coded into the various scheduler filters, so the set supported on
any install depends on what filters have been configured.
I have a change under way (I need to just find the time to go back and fix the
last wave of review comments) to expose what is supported via an API call
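One way such an API could gather what is supported (the `hint_keys` attribute is an assumed convention for illustration; real filters embed their hint handling in code):

```python
# Sketch of deriving the supported scheduler hints from whatever filters
# are configured. The hint_keys attribute is an assumed convention for
# illustration; real Nova filters embed their hint handling in code.
class RetryFilter:
    hint_keys = ()

class SameHostFilter:
    hint_keys = ("same_host",)

class GroupAntiAffinityFilter:
    hint_keys = ("group",)

def supported_hints(configured_filters):
    """Union of hint keys across the configured filter classes."""
    keys = set()
    for filter_cls in configured_filters:
        keys.update(getattr(filter_cls, "hint_keys", ()))
    return sorted(keys)

hints = supported_hints([RetryFilter, SameHostFilter, GroupAntiAffinityFilter])
```

Since the supported set varies with the configured filters, computing it from the live configuration is the only answer that stays accurate per deployment.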
-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent: 06 November 2013 22:08
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Bad review patterns
On 6 November 2013 21:34, Radomir Dopieralski
Hi Rob,
I think it looks like a good option - but I'd like to see it exposed as a
choice to the user rather than a change in the default behavior, i.e.
rebuild --keep-ephemeral=True
Phil
-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent:
machines that compose the work nova does.
Sent from my really tiny device...
On Oct 25, 2013, at 3:52 AM, Day, Phil
philip@hp.commailto:philip@hp.com wrote:
Hi Folks,
We're very occasionally seeing problems where a thread processing
a create hangs (and we've seen this when talking
Hi Folks,
We're very occasionally seeing problems where a thread processing a create
hangs (and we've seen this when talking to Cinder and Glance). Whilst those
issues need to be hunted down in their own right, they do show up what seems to
me to be a weakness in the processing of delete requests
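One mitigation for the weakness described above - deletes starving behind hung creates - is to give deletes their own small worker pool. A toy sketch of the idea (not Nova's implementation, which uses eventlet greenthreads):

```python
# Toy sketch: deletes get a dedicated worker pool so a create blocked on
# Cinder/Glance can never starve them. Not Nova's implementation.
from concurrent.futures import ThreadPoolExecutor
import time

create_pool = ThreadPoolExecutor(max_workers=2)
delete_pool = ThreadPoolExecutor(max_workers=2)

results = []

def hung_create(instance_id):
    time.sleep(0.5)  # stands in for a create stuck talking to Cinder/Glance
    results.append(("created", instance_id))

def delete(instance_id):
    results.append(("deleted", instance_id))

# Saturate the create pool, then issue a delete: it still runs promptly
# because it never queues behind the stuck creates.
for i in range(2):
    create_pool.submit(hung_create, i)
delete_pool.submit(delete, 99)

delete_pool.shutdown(wait=True)
create_pool.shutdown(wait=True)
```

The trade-off is sizing the second pool: too small and a burst of deletes queues anyway, too large and a flood of deletes can overwhelm the hypervisor.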
Development Mailing List
Subject: Re: [openstack-dev] [nova] Thoughts please on how to address a problem
with multiple deletes leading to a nova-compute thread pool problem
On 25 October 2013 23:46, Day, Phil philip@hp.com wrote:
Hi Folks,
We're very occasionally seeing problems where a thread
-Original Message-
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: 25 October 2013 17:05
To: openstack-dev
Subject: Re: [openstack-dev] [nova] Thoughts please on how to address a
problem with multiple deletes leading to a nova-compute thread pool
problem
Excerpts from Day,
Hi Drew,
Generally you need to create a new API extension and make some changes in the
main servers.py
The scheduler-hints API extension does this kind of thing, so if you look at:
api/openstack/compute/contrib/scheduler_hints.py for how the extension is
defined, and look in
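Loosely, the pattern that scheduler_hints.py follows can be sketched like this (names and plumbing are illustrative, not the real extension framework):

```python
# Loose sketch of the extension pattern: an extension owns one extra key
# in the server-create request body and turns it into arguments for the
# core create path. Names and plumbing are illustrative, not the real
# Nova extension framework.
class SchedulerHintsExtension:
    alias = "os-scheduler-hints"
    body_key = "os:scheduler_hints"

    def server_create(self, create_kwargs, body):
        # Copy our key out of the request body into the create arguments.
        create_kwargs["scheduler_hints"] = body.get(self.body_key, {})

def run_extensions(extensions, body):
    """Let every loaded extension contribute to the create arguments."""
    create_kwargs = {}
    for ext in extensions:
        ext.server_create(create_kwargs, body)
    return create_kwargs

kwargs = run_extensions(
    [SchedulerHintsExtension()],
    {"server": {"name": "vm1"}, "os:scheduler_hints": {"same_host": "abc"}},
)
```

The key property is that the core create path never needs to know about the extension's body key; each extension translates its own input.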
Yep, that was the feature I was referring to.
As I said, I don't have anything definite that shows this to be not working
(and the code looks fine) - just wanted to try and simplify the world a bit for
a while.
-Original Message-
From: Melanie Witt [mailto:melw...@yahoo-inc.com]
Sent:
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Disable async network allocation
Hi Phil
2013/10/21 Day, Phil philip@hp.com:
Hi Folks,
I'm trying to track down a couple of obscure issues in network port
creation where it would be really useful if I could disable
relevant proposals to consider.
Regards,
Alex
P.S. I plan to submit a proposal regarding scheduling policies, and
maybe one more related to theme #1 below
From: Day, Phil philip@hp.com
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
Date
the submission deadline and discuss at the following IRC meeting on the 22nd.
Maybe there will be more relevant proposals to consider.
Regards,
Alex
P.S. I plan to submit a proposal regarding scheduling policies, and
maybe one more related to theme #1 below
From: Day, Phil philip@hp.com
Hi Folks,
In the weekly scheduler meeting we've been trying to pull together a
consolidated list of Summit sessions so that we can find logical groupings and
make a more structured set of sessions for the limited time available at the
summit.
Hi Folks,
Could I get a review of the following change please:
https://review.openstack.org/#/c/47651/
It fixes a problem where users with the admin role in Neutron can't get a list
of servers.
It may also address this long standing High Importance issue,
Hi Folks,
Could one more core look at the following simple bug fix please:
https://review.openstack.org/#/c/46486/ - which allows the system clean up VMs
from deleted instances.
It's already got one +2 and four +1s
Thanks
Phil
request for get scheduler hints API
Day, Phil wrote:
[...]
The change is low risk in that it only adds a new query path to the
scheduler, and does not alter any existing code paths.
[...]
At this point the main issue with this is the distraction it would generate
(it still
needs a bit
Seems like a reasonable FFE to me.
(And congratulations on the baby ;-)
-Original Message-
From: Michael Still [mailto:mi...@stillhq.com]
Sent: 06 September 2013 02:20
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Nova] FFE request: unix domain socket consoles for
Hi Folks,
As per the meeting this week, I've started an Etherpad to help plan out the
scheduler sessions ahead of the Design Summit:
https://etherpad.openstack.org/IceHouse-Nova-Scheduler-Sessions
Phil
-Original Message-
From: Dugger, Donald D [mailto:donald.d.dug...@intel.com]
Hi Folks,
Now we have explicit control to limit the version of rpc methods that can be
sent, but I'm wondering what I need to do now to make the next version of a
call that adds an additional parameter.
It looks like the current code is really focused on the data types being
passed, rather than
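The usual pattern is to bump the method version, only send the new argument when the version cap allows it, and give the receiving side a default so old callers still work. A minimal sketch (not actual Nova/oslo code; method and field names are illustrative):

```python
# Sketch of adding a parameter to a versioned RPC call: the client side
# sends the new argument only when the version cap allows it, and the
# manager side defaults it for messages from older callers. Names are
# illustrative, not actual Nova/oslo.messaging code.
class ComputeAPI:
    def __init__(self, version_cap):
        self.version_cap = version_cap  # highest version the peer accepts

    def stop_instance(self, instance, clean_shutdown=True):
        if self.version_cap >= (3, 1):
            # 3.1 added clean_shutdown
            return ("3.1", {"instance": instance,
                            "clean_shutdown": clean_shutdown})
        # 3.0 peers must never see the new argument
        return ("3.0", {"instance": instance})

class ComputeManager:
    def stop_instance(self, instance, clean_shutdown=True):
        # The default covers messages from 3.0 callers that omit the kwarg.
        return (instance, clean_shutdown)

new_api = ComputeAPI(version_cap=(3, 1))
old_api = ComputeAPI(version_cap=(3, 0))
mgr = ComputeManager()
```

The data-type focus of the existing code matters less here: the new argument simply has to be droppable on send and defaultable on receive.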
On Aug 13, 2013, at 2:11 AM, Day, Phil wrote:
If we really want to get clean separation between Nova and Neutron in the V3
API should we consider making the Nova V3 API only accept lists of port ids in
the server create command?
That way there would be no need to ever pass security
Hi All,
If we really want to get clean separation between Nova and Neutron in the V3
API should we consider making the Nova V3 API only accept lists of port ids in
the server create command?
That way there would be no need to ever pass security group information into
Nova.
Any cross project
From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: 26 July 2013 23:16
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] support for multiple active scheduler
policies/drivers
On Wed, Jul 24, 2013 at 6:18 PM, Alex Glikson glik...@il.ibm.com wrote:
Russell
Hi Folks,
Finally got around to looking at some of the things I've added to the V2 api
and wanted to move to V3 - and I'm confused about what is and isn't going to be
moved from being an extension. For example:
- I added the extended_floating_ip extension to allow a floating IP
to
+1 to turning it off. Having something that doesn't really work enabled by
default now that we have a threaded API is just wrong
From: Rosa, Andrea (HP Cloud Services)
Sent: 25 July 2013 09:35
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Openstack-dev][nova] Disable per-user rate
Hi Alex,
I'm inclined to agree with others that I'm not sure you need the complexity
that this BP brings to the system. If you want to provide a user with a
choice about how much overcommit they will be exposed to, then doing that in
flavours and the aggregate_instance_extra_spec filter
Ceilometer is a great project for taking metrics available in Nova and other
systems and making them available for use by Operations, Billing, Monitoring,
etc - and clearly we should try and avoid having multiple collectors of the
same data.
But making the Nova scheduler dependent on
+1
Just to add to the context - keep in mind that within HP (and I assume other
large organisations such as IBM, but I can't speak directly for them) there are
now many separate divisions working with OpenStack (Public Cloud, Private Cloud
Products, Labs, Consulting, etc), each bringing
-Original Message-
From: Dan Smith [mailto:d...@danplanet.com]
Sent: 16 July 2013 14:51
To: OpenStack Development Mailing List
Cc: Day, Phil
Subject: Re: [openstack-dev] Moving task flow to conductor - concern about
scale
In the original context of using Conductor as a database
Hi Josh,
My idea's really pretty simple - make DB proxy and Task workflow separate
services, and allow people to co-locate them if they want to.
Cheers.
Phil
-Original Message-
From: Joshua Harlow [mailto:harlo...@yahoo-inc.com]
Sent: 17 July 2013 14:57
To: OpenStack Development
AM, Day, Phil wrote:
Ceilometer is a great project for taking metrics available in Nova and other
systems and making them available for use by Operations, Billing, Monitoring,
etc - and clearly we should try and avoid having multiple collectors of the
same
data.
But making the Nova
Hi Folks,
Reviewing some of the changes to move control flows into conductor made me
wonder
about an issue that I haven't seen discussed so far (apologies if it was and
I've missed it):
In the original context of using Conductor as a database proxy then the number
of conductor instances is
-Original Message-
From: David Ripton [mailto:drip...@redhat.com]
Sent: 16 July 2013 15:39
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Separate out 'soft-deleted' instances
from
'deleted' ones?
On 07/15/2013 10:03 AM, Matt Riedemann wrote:
Hi Folks,
I have a change submitted which adds the same clean shutdown logic to stop and
delete that exists for soft reboot - the rationale being that it's always
better to give a VM a chance to shut down cleanly if possible, even if you're
about to delete it, as sometimes other parts of the
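The clean-shutdown idea in miniature (an illustration, not the actual patch): ask the guest to power off, poll briefly, and only yank the virtual power cord if it doesn't comply in time:

```python
# Illustration of the clean-shutdown-before-delete idea, not the actual
# patch: request a soft shutdown, poll a bounded number of times, then
# fall back to a hard destroy. Real code would sleep between polls.
def shutdown_then_destroy(vm, retries=3):
    vm.request_soft_shutdown()  # e.g. an ACPI power-button press
    for _ in range(retries):
        if not vm.is_running():
            return "clean"      # guest shut itself down in time
    vm.hard_destroy()           # the virtual power cord, as a last resort
    return "forced"

class FakeVM:
    """Toy guest that powers off after a given number of polls."""
    def __init__(self, polls_needed):
        self.polls_needed = polls_needed
        self.destroyed = False
    def request_soft_shutdown(self):
        pass
    def is_running(self):
        self.polls_needed -= 1
        return self.polls_needed > 0
    def hard_destroy(self):
        self.destroyed = True
```

The retry bound keeps the worst case (an unresponsive guest) no slower than a fixed delay on top of today's immediate destroy.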
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: 02 July 2013 02:04
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Move keypair management out of Nova and into
Keystone?
On 07/01/2013 07:49 PM, Jamie Lennox wrote:
On Mon, 2013-07-01 at