or scaling Bays. I suppose a scheduling hint
might be adequate for this.
Adrian
On May 17, 2015, at 11:48 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:
On 5/16/2015 10:52 PM, Alex Glikson wrote:
If system containers are a viable use case for Nova, and if Magnum is
aiming at both application containers and system containers, would it make
sense to have a new virt driver in nova that would invoke Magnum API for
container provisioning and life cycle? This would avoid (some of the) code
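(Purely as an illustration of that idea -- a hypothetical skeleton of such
a driver; the Magnum client calls are invented placeholders, and only the
spawn/destroy signatures follow Nova's virt driver interface:)

    # Hypothetical sketch; the magnum client calls are invented placeholders.
    from nova.virt import driver

    class MagnumDriver(driver.ComputeDriver):
        """Delegates instance life cycle to Magnum instead of a hypervisor."""

        def __init__(self, virtapi):
            super(MagnumDriver, self).__init__(virtapi)
            self._magnum = None  # would hold an authenticated Magnum client

        def spawn(self, context, instance, image_meta, injected_files,
                  admin_password, network_info=None, block_device_info=None):
            # Translate the Nova request into a Magnum container
            # (assumed call; the real magnumclient API may differ).
            self._magnum.containers.create(name=instance.uuid,
                                           image=image_meta.get('name'))

        def destroy(self, context, instance, network_info,
                    block_device_info=None, destroy_disks=True,
                    migrate_data=None):
            self._magnum.containers.delete(instance.uuid)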
Tom Fifield t...@openstack.org wrote on 25/02/2015 06:46:13 AM:
On 24/02/15 19:27, Daniel P. Berrange wrote:
On Tue, Feb 24, 2015 at 12:05:17PM +0100, Thierry Carrez wrote:
Daniel P. Berrange wrote:
[...]
I'm not familiar with how the translations work, but if they are
waiting until
This sounds related to the discussion on the 'Nova clustered hypervisor
driver' which started at the Juno design summit [1]. Talking to another
OpenStack deployment should be similar to talking to vCenter. The idea was
that Cells support could be refactored around this notion as well.
Not sure whether
So maybe the problem isn't having the flavors so much, but in how the
user currently has to specify an exact match from that list.
If the user could say 'I want a flavor with these attributes' and then the
system would find a 'best match' based on criteria set by the cloud admin,
then would
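(As a sketch of what such a 'best match' could mean -- illustration only;
the minimal-overprovisioning criterion below is an assumption, not an
agreed design:)

    # Pick the smallest flavor satisfying the requested attributes.
    def best_match(flavors, vcpus, ram_mb, disk_gb):
        candidates = [f for f in flavors
                      if f.vcpus >= vcpus and f.ram >= ram_mb
                      and f.disk >= disk_gb]
        if not candidates:
            return None  # no flavor satisfies the request
        # Prefer the candidate that over-provisions the least.
        return min(candidates, key=lambda f: (f.vcpus - vcpus,
                                              f.ram - ram_mb,
                                              f.disk - disk_gb))

    # e.g. best_match(nova.flavors.list(), vcpus=2, ram_mb=4096, disk_gb=20)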
It seems that there are also issues around scheduling in environments that
comprise non-flat, non-homogeneous groups of hosts. Perhaps related to
'clustered hypervisor support in Nova' proposal (
http://summit.openstack.org/cfp/details/145). Not sure whether we need a
separate slot for this or not
Heat template orchestrates user actions, while management of flavors is
typically admin's job (due to their tight link to the physical hardware
configuration, unknown to a regular user).
Regards,
Alex
From: ELISHA, Moshe (Moshe) moshe.eli...@alcatel-lucent.com
To:
Similar capabilities are being introduced here:
https://review.openstack.org/#/c/61839/
Regards,
Alex
From: Kenichi Oomichi oomi...@mxs.nes.nec.co.jp
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date: 03/02/2014 11:48 AM
Maybe we can also briefly discuss the status of
https://review.openstack.org/#/q/topic:bp/multiple-scheduler-drivers,n,z
-- now that a revised implementation is available for review (broken into
4 small patches), and people are back from vacations, would be good to get
some attention from
Great initiative!
I would certainly be interested in taking part in this -- although I
wouldn't necessarily claim to be among people with the know-how to design and
implement it well. For sure this is going to be a painful but exciting
process.
Regards,
Alex
From: Robert Collins
Another possible approach could be that only part of the 50 succeeds
(reported back to the user), and then a retry mechanism at a higher level
would potentially approach the other partition/scheduler - similar to
today's retries.
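(A minimal sketch of such a higher-level retry; 'schedulers' and the
provision() interface are invented abstractions for the partitioned
scheduler endpoints:)

    # Keep what succeeded, resubmit the remainder to the next partition.
    def provision_with_retry(schedulers, requests):
        pending = list(requests)
        placed = []
        for scheduler in schedulers:
            if not pending:
                break
            succeeded, failed = scheduler.provision(pending)  # assumed API
            placed.extend(succeeded)
            pending = failed
        return placed, pending  # leftovers are reported back to the user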
Regards,
Alex
From: Mike Wilson geekinu...@gmail.com
To:
Boris Pavlovic bpavlo...@mirantis.com wrote on 18/11/2013 08:31:20 AM:
Actually schedulers in nova and cinder are almost the same.
Well, this is kind of expected, since the Cinder scheduler started as a
copy-paste of the Nova scheduler :-) But they already started diverging
(not sure whether
10 instances of memcached.
Regards,
Alex
Best regards,
Boris Pavlovic
On Sun, Nov 10, 2013 at 4:20 PM, Alex Glikson glik...@il.ibm.com
wrote:
Hi Boris,
This is a very interesting approach.
How do you envision the life cycle of such a scheduler in terms of
code repository, build
Sylvain Bauza sylvain.ba...@bull.net wrote on 15/11/2013 11:13:37 AM:
On a technical note, as a Stackforge contributor, I'm trying to
implement best practices of OpenStack coding into my own project, and
I'm facing day-to-day issues trying to understand what Oslo libs do or
how they can be
Russell Bryant rbry...@redhat.com wrote on 15/11/2013 06:49:31 PM:
3) If you have work planned for Icehouse, please get your blueprints
filed as soon as possible. Be sure to set a realistic target milestone.
So far, *everyone* has targeted *everything* to icehouse-1, which is
set to be
In fact, there is a blueprint which would enable supporting this scenario
without partitioning --
https://blueprints.launchpad.net/nova/+spec/cpu-entitlement
The idea is to annotate flavors with CPU allocation guarantees, and enable
differentiation between instances, potentially running on the
and submit for review soon.
Regards,
Alex
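(For a concrete feel: flavors can already carry libvirt CPU-tuning hints
via extra specs -- 'quota:cpu_shares' is an existing key -- and the
blueprint would generalize this into scheduler-visible guarantees. A
sketch with python-novaclient, placeholder credentials and sizing:)

    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'demo',
                         'http://controller:5000/v2.0')  # placeholders
    gold = nova.flavors.create('m1.gold', ram=8192, vcpus=4, disk=80)
    gold.set_keys({'quota:cpu_shares': '2048'})  # relative CPU weight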
From: Alex Glikson [mailto:glik...@il.ibm.com]
Sent: Thursday, November 14, 2013 16:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Configure overcommit policy
In fact, there is a blueprint which would
Hi,
Is there documentation somewhere on the scheduling flow with Ironic?
The reason I am asking is because we would like to get virtualized and
bare-metal workloads running in the same cloud (ideally with the ability
to repurpose physical machines between bare-metal workloads and
You can consider having a separate host aggregate for Hadoop, and use a
combination of AggregateInstanceExtraSpecsFilter (with a special flavor
mapped to this host aggregate) and AggregateCoreFilter (overriding
cpu_allocation_ratio for this host aggregate to be 1).
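For concreteness, a sketch with python-novaclient (host name, flavor
sizing, credentials and the 'pinned_to' key are placeholders):

    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'demo',
                         'http://controller:5000/v2.0')  # placeholders
    # Aggregate with 1:1 CPU overcommit, picked up by AggregateCoreFilter.
    agg = nova.aggregates.create('hadoop', None)
    nova.aggregates.set_metadata(agg, {'cpu_allocation_ratio': '1.0',
                                       'pinned_to': 'hadoop'})
    nova.aggregates.add_host(agg, 'compute-07')
    # Flavor whose extra spec is matched against the aggregate metadata
    # by AggregateInstanceExtraSpecsFilter.
    flavor = nova.flavors.create('hadoop.xlarge', ram=16384, vcpus=8,
                                 disk=160)
    flavor.set_keys({'pinned_to': 'hadoop'})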
Regards,
Alex
From:
Hi Boris,
This is a very interesting approach.
How do you envision the life cycle of such a scheduler in terms of code
repository, build, test, etc?
What kind of changes to provisioning APIs do you envision to 'feed' such a
scheduler?
Any particular reason you didn't mention Neutron?
Also,
with flat network and centralized storage, the sharing can
be rather minimal).
Alex Glikson asked why not go directly to holistic if there is no
value in doing Nova-only. Yathi replied to that concern, and let me
add some notes. I think there *are* scenarios in which doing Nova-only
joint
Maybe a more appropriate approach could be to have a tool/script that does
it, as a one-time thing.
For example, it could make sense in a scenario when Nova DB gets lost or
corrupted, a new Nova controller is deployed, and the DB needs to be
recreated. Potentially, since Nova DB is primarily a
Russell Bryant rbry...@redhat.com wrote on 30/10/2013 10:20:34 AM:
On 10/30/2013 03:13 AM, Alex Glikson wrote:
Maybe a more appropriate approach could be to have a tool/script that
does it, as a one-time thing.
For example, it could make sense in a scenario when Nova DB gets lost
There is a ZK-backed driver in Nova's service heartbeat mechanism (
https://blueprints.launchpad.net/nova/+spec/zk-service-heartbeat) -- would
be interesting to know whether it is widely used (might be worth asking at
the general ML, or user groups). There have been also discussions on using
it
Andrew Laski andrew.la...@rackspace.com wrote on 29/10/2013 11:14:03 PM:
[...]
Having Nova call into Heat is backwards IMO. If there are specific
pieces of information that Nova can expose, or API capabilities to help
with orchestration/placement that Heat or some other service would like
placement topic).
Thanks,
Yathi.
On 10/29/13, 2:14 PM, Andrew Laski andrew.la...@rackspace.com wrote:
On 10/29/13 at 04:05pm, Mike Spreitzer wrote:
Alex Glikson glik...@il.ibm.com wrote on 10/29/2013 03:37:41 AM:
1. I assume that the motivation for rack-level anti-affinity is to
survive a rack
+1
Regards,
Alex
Joshua Harlow harlo...@yahoo-inc.com wrote on 26/10/2013 09:29:03 AM:
An idea that others and I are having for a similar use case in
cinder (or it appears to be similar).
If there were well-defined state machine(s) in Nova with well-defined
and managed transitions
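(Something along these lines, purely illustrative -- the states and the
transition table are invented, not Nova's actual task-state model:)

    # Explicit, managed transitions: anything not in the table is rejected.
    ALLOWED_TRANSITIONS = {
        'BUILDING':  {'ACTIVE', 'ERROR'},
        'ACTIVE':    {'RESIZING', 'MIGRATING', 'DELETING'},
        'RESIZING':  {'ACTIVE', 'ERROR'},
        'MIGRATING': {'ACTIVE', 'ERROR'},
    }

    def transition(current, target):
        if target not in ALLOWED_TRANSITIONS.get(current, set()):
            raise ValueError('illegal transition %s -> %s'
                             % (current, target))
        return target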
Hi Caitlin,
Caitlin Bestler caitlin.best...@nexenta.com wrote on 21/10/2013 06:51:36
PM:
On 10/21/2013 2:34 AM, Avishay Traeger wrote:
Hi all,
We (IBM and Red Hat) have begun discussions on enabling Disaster
Recovery
(DR) in OpenStack.
We have created a wiki page with our initial
This sounds very similar to
https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
We worked on it in Havana, learned a lot from the feedback during the
review cycle, and hopefully will finalize the details at the summit and be
able to finish the implementation in
IMO, the three themes make sense, but I would suggest waiting until the
submission deadline and discussing at the following IRC meeting on the 22nd.
Maybe there will be more relevant proposals to consider.
Regards,
Alex
P.S. I plan to submit a proposal regarding scheduling policies, and maybe
I would suggest not generalizing too much... e.g., restrict the discussion
to PlacementPolicy. If anyone else would want to use a similar construct
for other purposes -- it can be generalized later.
For example, the notion of 'policy' already exists in other places in
OpenStack in the context
Regards,
Alex
[1]
https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit
[2] https://wiki.openstack.org/wiki/Heat/PolicyExtension
Alex Glikson
Manager, Cloud
Good summary. I would also add that in A1 the schedulers (e.g., in Nova
and Cinder) could talk to each other to coordinate. Besides defining the
policy, and the user-facing APIs, I think we should also outline those
cross-component APIs (need to think whether they have to be user-visible,
or
Seems that this can be broken into 3 incremental pieces. First, it would
be great if the ability to schedule a single 'evacuate' were finally
merged (
https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance
). Then, it would make sense to have the logic that evacuates an
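(The host-level piece could then be roughly the following sketch with
python-novaclient, assuming the blueprint above lands so that the
scheduler picks the target when host is omitted; host name, credentials
and the shared-storage flag are placeholders:)

    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'demo',
                         'http://controller:5000/v2.0')  # placeholders
    # Evacuate everything off the failed host, one instance at a time.
    for server in nova.servers.list(search_opts={'host': 'compute-07',
                                                 'all_tenants': 1}):
        nova.servers.evacuate(server, host=None, on_shared_storage=True)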
Mike Spreitzer mspre...@us.ibm.com wrote on 01/10/2013 06:58:10 AM:
Alex Glikson glik...@il.ibm.com wrote on 09/29/2013 03:30:35 PM:
Mike Spreitzer mspre...@us.ibm.com wrote on 29/09/2013 08:02:00 PM:
Another reason to prefer host is that we have other resources to
locate besides
Mike Spreitzer mspre...@us.ibm.com wrote on 29/09/2013 08:02:00 PM:
Another reason to prefer host is that we have other resources to
locate besides compute.
Good point. Another approach (not necessarily contradicting) could be to
specify the location as a property of a host aggregate rather
I tend to agree with Jake that this check is likely to conflict with the
scheduler, and should be removed.
Regards,
Alex
From: Guangya Liu j...@unitedstack.com
To: openstack-dev@lists.openstack.org,
Date: 03/09/2013 02:03 AM
Subject: [openstack-dev] Questions related to live
It seems that the main concern was that the overridden scheduler
properties are taken from the flavor, and not from the aggregate. In fact,
there was a consensus that this is not optimal.
I think that we can still make some progress in Havana towards
per-aggregate overrides, generalizing on
blueprint
On Wed, Aug 28, 2013 at 9:12 AM, Alex Glikson glik...@il.ibm.com wrote:
It seems that the main concern was that the overridden scheduler
properties are taken from the flavor, and not from the aggregate. In fact,
there was a consensus that this is not optimal.
I think that we can
Joe Gordon joe.gord...@gmail.com wrote on 28/08/2013 11:04:45 PM:
Well, first, at the moment each of these filters duplicates the
code that handles aggregate-based overrides. So, it would make sense
to have it in one place anyway. Second, why duplicate all the
filters if this can be
Agree. Some enhancements to Nova might still be required (e.g., to handle
resource reservations, so that there is enough capacity), but the
end-to-end framework probably should be outside of existing services,
probably talking to Nova, Ceilometer and potentially other components
(maybe Cinder,
There are roughly three cases.
1. Multiple identical instances of the scheduler service. This is
typically done to increase scalability, and is already supported (although
it may sometimes result in provisioning failures due to race conditions
between scheduler instances). There is a single queue
Alex Glikson glik...@il.ibm.com
wrote:
Russell Bryant rbry...@redhat.com wrote on 24/07/2013 07:14:27 PM:
I really like your point about not needing to set things up via a
config
file. That's fairly limiting since you can't change it on the fly
via
the API.
True. As I pointed
Subject: Re: [openstack-dev] [Nova] support for multiple active
scheduler
policies/drivers
On 07/23/2013 04:24 PM, Alex Glikson wrote:
Russell Bryant rbry...@redhat.com wrote on 23/07/2013 07:19:48 PM:
I understand the use case, but can't it just be achieved with 2
flavors
Russell Bryant rbry...@redhat.com wrote on 24/07/2013 07:14:27 PM:
I really like your point about not needing to set things up via a config
file. That's fairly limiting since you can't change it on the fly via
the API.
True. As I pointed out in another response, the ultimate goal would be
Russell Bryant rbry...@redhat.com wrote on 23/07/2013 05:35:18 PM:
#1 - policy associated with a host aggregate
This seems very odd to me. Scheduling policy is what chooses hosts,
so
having a subset of hosts specify which policy to use seems backwards.
This is not what we had in
Russell Bryant rbry...@redhat.com wrote on 23/07/2013 07:19:48 PM:
I understand the use case, but can't it just be achieved with 2 flavors
and without this new aggregate-policy mapping?
flavor 1 with extra specs to say aggregate A and policy Y
flavor 2 with extra specs to say aggregate B
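(In code, that alternative would be roughly as below; the 'aggregate' and
'policy' extra-spec keys are illustrative placeholders that
AggregateInstanceExtraSpecsFilter would match against aggregate metadata:)

    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'demo',
                         'http://controller:5000/v2.0')  # placeholders
    f1 = nova.flavors.create('m1.policy-y', ram=4096, vcpus=2, disk=40)
    f1.set_keys({'aggregate': 'A', 'policy': 'Y'})
    f2 = nova.flavors.create('m1.default', ram=4096, vcpus=2, disk=40)
    f2.set_keys({'aggregate': 'B'})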
Dear all,
Following the initial discussions at the last design summit, we have
published the design [2] and the first take on the implementation [3] of
the blueprint adding support for multiple active scheduler
policies/drivers [1].
In a nutshell, the idea is to allow overriding the 'default'