Re: [openstack-dev] [nova-docker] Status update

2015-05-17 Thread Alex Glikson
or scaling Bays. I suppose a scheduling hint might be adequate for this. Adrian On May 17, 2015, at 11:48 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: On 5/16/2015 10:52 PM, Alex Glikson wrote: If system containers are a viable use case for Nova, and if Magnum is aiming

Re: [openstack-dev] [nova-docker] Status update

2015-05-16 Thread Alex Glikson
If system containers are a viable use case for Nova, and if Magnum is aiming at both application containers and system containers, would it make sense to have a new virt driver in Nova that would invoke the Magnum API for container provisioning and life cycle? This would avoid (some of the) code
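
A minimal, self-contained sketch of the delegation idea being floated above, assuming a deliberately simplified driver interface; MagnumContainerClient is a hypothetical placeholder, not the real python-magnumclient API:

    class MagnumContainerClient(object):
        """Hypothetical stand-in for a Magnum API client."""
        def create_container(self, name, image, memory_mb, vcpus):
            print("POST /containers name=%s image=%s" % (name, image))
            return {"uuid": "fake-uuid", "name": name}

        def delete_container(self, uuid):
            print("DELETE /containers/%s" % uuid)

    class MagnumVirtDriver(object):
        """Simplified virt-driver shim: Nova life-cycle calls map to Magnum calls."""

        def __init__(self, client=None):
            self.client = client or MagnumContainerClient()

        def spawn(self, instance):
            # 'instance' is a plain dict here for simplicity; in Nova it would
            # be an Instance object carrying flavor and image information.
            return self.client.create_container(
                name=instance["name"],
                image=instance["image"],
                memory_mb=instance["memory_mb"],
                vcpus=instance["vcpus"])

        def destroy(self, container):
            self.client.delete_container(container["uuid"])

    if __name__ == "__main__":
        driver = MagnumVirtDriver()
        c = driver.spawn({"name": "sys-c1", "image": "fedora-atomic",
                          "memory_mb": 2048, "vcpus": 2})
        driver.destroy(c)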

Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-25 Thread Alex Glikson
Tom Fifield t...@openstack.org wrote on 25/02/2015 06:46:13 AM: On 24/02/15 19:27, Daniel P. Berrange wrote: On Tue, Feb 24, 2015 at 12:05:17PM +0100, Thierry Carrez wrote: Daniel P. Berrange wrote: [...] I'm not familiar with how the translations work, but if they are waiting until

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread Alex Glikson
This sounds related to the discussion on the 'Nova clustered hypervisor driver' which started at Juno design summit [1]. Talking to another OpenStack should be similar to talking to vCenter. The idea was that the Cells support could be refactored around this notion as well. Not sure whether

Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Alex Glikson
So maybe the problem isn't having the flavors so much, but in how the user currently has to specify an exact match from that list. If the user could say "I want a flavor with these attributes" and then the system would find a "best match" based on criteria set by the cloud admin, then would
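
Roughly the kind of "best match" selection being described, as a minimal sketch; the flavors and the scoring rule are made up for illustration, since the actual matching criteria would be set by the cloud admin:

    # Pick the smallest flavor that satisfies the requested attributes.
    FLAVORS = [
        {"name": "m1.small",  "vcpus": 1, "ram_mb": 2048, "disk_gb": 20},
        {"name": "m1.medium", "vcpus": 2, "ram_mb": 4096, "disk_gb": 40},
        {"name": "m1.large",  "vcpus": 4, "ram_mb": 8192, "disk_gb": 80},
    ]

    def best_match(requested, flavors=FLAVORS):
        """Return the smallest flavor that covers every requested attribute."""
        candidates = [f for f in flavors
                      if all(f.get(k, 0) >= v for k, v in requested.items())]
        if not candidates:
            raise LookupError("no flavor satisfies %s" % requested)
        # "Best" here means least total capacity -- a stand-in for whatever
        # policy the cloud admin would actually configure.
        return min(candidates,
                   key=lambda f: (f["vcpus"], f["ram_mb"], f["disk_gb"]))

    print(best_match({"vcpus": 2, "ram_mb": 3000}))   # -> m1.medium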

Re: [openstack-dev] etherpad to track Gantt (scheduler) sessions at the Juno summit

2014-04-08 Thread Alex Glikson
It seems that there are also issues around scheduling in environments that comprise non-flat/homogeneous groups of hosts. Perhaps, related to 'clustered hypervisor support in Nova' proposal ( http://summit.openstack.org/cfp/details/145). Not sure whether we need a separate slot for this or not

Re: [openstack-dev] [heat] Can heat automatically create a flavor as part of stack creation?

2014-02-09 Thread Alex Glikson
A Heat template orchestrates user actions, while management of flavors is typically the admin's job (due to their tight link to the physical hardware configuration, unknown to a regular user). Regards, Alex From: ELISHA, Moshe (Moshe) moshe.eli...@alcatel-lucent.com To:

Re: [openstack-dev] [Nova] bp: nova-ecu-support

2014-02-03 Thread Alex Glikson
Similar capabilities are being introduced here: https://review.openstack.org/#/c/61839/ Regards, Alex From: Kenichi Oomichi oomi...@mxs.nes.nec.co.jp To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, Date: 03/02/2014 11:48 AM

Re: [openstack-dev] [Gantt] Scheduler sub-group agenda 1/7

2014-01-06 Thread Alex Glikson
Maybe we can also briefly discuss the status of https://review.openstack.org/#/q/topic:bp/multiple-scheduler-drivers,n,z -- now that a revised implementation is available for review (broken into 4 small patches), and people are back from vacations, would be good to get some attention from

Re: [openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-22 Thread Alex Glikson
Great initiative! I would certainly be interested in taking part in this -- although I wouldn't necessarily claim to be among the people with the know-how to design and implement it well. For sure this is going to be a painful but exciting process. Regards, Alex From: Robert Collins

Re: [openstack-dev] [Nova] Does Nova really need an SQL database?

2013-11-19 Thread Alex Glikson
Another possible approach could be that only part of the 50 succeeds (reported back to the user), and then a retry mechanism at a higher level would potentially approach the other partition/scheduler - similar to today's retries. Regards, Alex From: Mike Wilson geekinu...@gmail.com To:

Re: [openstack-dev] [nova][cinder][oslo][scheduler] How to leverage oslo scheduler/filters for nova and cinder

2013-11-18 Thread Alex Glikson
Boris Pavlovic bpavlo...@mirantis.com wrote on 18/11/2013 08:31:20 AM: Actually schedulers in nova and cinder are almost the same. Well, this is kind of expected, since the Cinder scheduler started as a copy-paste of the Nova scheduler :-) But they have already started diverging (not sure whether

Re: [openstack-dev] [nova][cinder][oslo][scheduler] How to leverage oslo scheduler/filters for nova and cinder

2013-11-17 Thread Alex Glikson
10 instances of memcached. Regards, Alex Best regards, Boris Pavlovic On Sun, Nov 10, 2013 at 4:20 PM, Alex Glikson glik...@il.ibm.com wrote: Hi Boris, This is a very interesting approach. How do you envision the life cycle of such a scheduler in terms of code repository, build

Re: [openstack-dev] Split of the openstack-dev list

2013-11-15 Thread Alex Glikson
Sylvain Bauza sylvain.ba...@bull.net wrote on 15/11/2013 11:13:37 AM: On a technical note, as a Stackforge contributor, I'm trying to implement the best practices of OpenStack coding in my own project, and I'm facing day-to-day issues trying to understand what the Oslo libs do or how they can be

Re: [openstack-dev] [Nova] Icehouse Blueprints

2013-11-15 Thread Alex Glikson
Russell Bryant rbry...@redhat.com wrote on 15/11/2013 06:49:31 PM: 3) If you have work planned for Icehouse, please get your blueprints filed as soon as possible. Be sure to set a realistic target milestone. So far, *everyone* has targeted *everything* to icehouse-1, which is set to be

Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-14 Thread Alex Glikson
In fact, there is a blueprint which would enable supporting this scenario without partitioning -- https://blueprints.launchpad.net/nova/+spec/cpu-entitlement The idea is to annotate flavors with CPU allocation guarantees, and enable differentiation between instances, potentially running on the
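
A rough sketch of what flavor-level CPU guarantees and a host-side admission check could look like; the extra-spec key and values below are hypothetical illustrations, not taken from the cpu-entitlement blueprint itself:

    # Each flavor carries a guaranteed share of a physical CPU (1.0 = a full core).
    FLAVORS = {
        "gold":   {"vcpus": 2, "extra_specs": {"cpu_guarantee": 1.0}},
        "silver": {"vcpus": 2, "extra_specs": {"cpu_guarantee": 0.5}},
        "bronze": {"vcpus": 2, "extra_specs": {"cpu_guarantee": 0.1}},
    }

    def can_place(host_pcpus, running_flavors, new_flavor):
        """Admit the instance only if the summed guarantees still fit on the host."""
        committed = sum(FLAVORS[f]["vcpus"] * FLAVORS[f]["extra_specs"]["cpu_guarantee"]
                        for f in running_flavors)
        requested = (FLAVORS[new_flavor]["vcpus"]
                     * FLAVORS[new_flavor]["extra_specs"]["cpu_guarantee"])
        return committed + requested <= host_pcpus

    print(can_place(8, ["gold", "gold", "silver"], "gold"))   # True  (5 + 2 <= 8)
    print(can_place(8, ["gold"] * 4, "gold"))                 # False (8 + 2 > 8)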

Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-14 Thread Alex Glikson
and submit for review soon. Regards, Alex From: Alex Glikson [mailto:glik...@il.ibm.com] Sent: Thursday, 14 November 2013 16:13 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova] Configure overcommit policy In fact, there is a blueprint which would

[openstack-dev] [Nova] [Ironic] scheduling flow with Ironic?

2013-11-13 Thread Alex Glikson
Hi, Is there documentation somewhere on the scheduling flow with Ironic? The reason I am asking is that we would like to get virtualized and bare-metal workloads running in the same cloud (ideally with the ability to repurpose physical machines between bare-metal workloads and

Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-12 Thread Alex Glikson
You can consider having a separate host aggregate for Hadoop, and use a combination of AggregateInstanceExtraSpecFilter (with a special flavor mapped to this host aggregate) and AggregateCoreFilter (overriding cpu_allocation_ratio for this host aggregate to be 1). Regards, Alex From:
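
For reference, a stand-alone sketch of the mechanism those two filters implement (not the actual Nova filter code): the flavor's extra specs must match the aggregate metadata, and the aggregate metadata can override the global cpu_allocation_ratio:

    GLOBAL_CPU_ALLOCATION_RATIO = 16.0

    hadoop_aggregate = {
        "hosts": ["compute-10", "compute-11"],
        "metadata": {"pinned": "hadoop", "cpu_allocation_ratio": "1.0"},
    }

    hadoop_flavor = {"vcpus": 8, "extra_specs": {"pinned": "hadoop"}}

    def extra_specs_match(flavor, aggregate):
        """Flavor extra specs must all be present in the aggregate metadata."""
        return all(aggregate["metadata"].get(k) == v
                   for k, v in flavor["extra_specs"].items())

    def core_filter_passes(host_vcpus, host_vcpus_used, flavor, aggregate):
        """Per-aggregate ratio overrides the global one when deciding admission."""
        ratio = float(aggregate["metadata"].get("cpu_allocation_ratio",
                                                GLOBAL_CPU_ALLOCATION_RATIO))
        return host_vcpus_used + flavor["vcpus"] <= host_vcpus * ratio

    ok = (extra_specs_match(hadoop_flavor, hadoop_aggregate)
          and core_filter_passes(16, 8, hadoop_flavor, hadoop_aggregate))
    print(ok)   # True: 8 used + 8 requested <= 16 * 1.0 (no overcommit)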

Re: [openstack-dev] [nova][cinder][oslo][scheduler] How to leverage oslo scheduler/filters for nova and cinder

2013-11-10 Thread Alex Glikson
Hi Boris, This is a very interesting approach. How do you envision the life cycle of such a scheduler in terms of code repository, build, test, etc? What kind of changes to provisioning APIs do you envision to 'feed' such a scheduler? Any particular reason you didn't mention Neutron? Also,

Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-30 Thread Alex Glikson
with a flat network and centralized storage, the sharing can be rather minimal). Alex Glikson asked why not go directly to holistic if there is no value in doing Nova-only. Yathi replied to that concern, and let me add some notes. I think there *are* scenarios in which doing Nova-only joint

Re: [openstack-dev] [OpenStack-dev][Nova][Discussion]Blueprint : Auto VM Discovery in OpenStack for existing workload

2013-10-30 Thread Alex Glikson
Maybe a more appropriate approach would be to have a tool/script that does it, as a one-time thing. For example, it could make sense in a scenario where the Nova DB gets lost or corrupted, a new Nova controller is deployed, and the DB needs to be recreated. Potentially, since the Nova DB is primarily a
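
A rough sketch of such a one-time discovery script, assuming libvirt-based compute nodes; the actual reconciliation with (or re-insertion into) the Nova DB is left out:

    import libvirt   # libvirt-python

    def discover_guests(uri="qemu:///system"):
        """Collect basic facts about every guest the hypervisor knows about."""
        conn = libvirt.openReadOnly(uri)
        guests = []
        for dom in conn.listAllDomains():
            state, max_mem_kb, _mem_kb, vcpus, _cpu_time = dom.info()
            guests.append({
                "uuid": dom.UUIDString(),
                "name": dom.name(),
                "vcpus": vcpus,
                "memory_mb": max_mem_kb // 1024,
                "running": state == libvirt.VIR_DOMAIN_RUNNING,
            })
        conn.close()
        return guests

    if __name__ == "__main__":
        for guest in discover_guests():
            # In a real recovery tool this is where the record would be
            # written back into the rebuilt Nova database.
            print(guest)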

Re: [openstack-dev] [OpenStack-dev][Nova][Discussion]Blueprint : Auto VM Discovery in OpenStack for existing workload

2013-10-30 Thread Alex Glikson
Russell Bryant rbry...@redhat.com wrote on 30/10/2013 10:20:34 AM: On 10/30/2013 03:13 AM, Alex Glikson wrote: Maybe a more appropriate approach could be to have a tool/script that does it, as a one time thing. For example, it could make sense in a scenario when Nova DB gets lost

Re: [openstack-dev] [Heat] Locking and ZooKeeper - a space oddysey

2013-10-30 Thread Alex Glikson
There is a ZK-backed driver in the Nova service heartbeat mechanism ( https://blueprints.launchpad.net/nova/+spec/zk-service-heartbeat) -- it would be interesting to know whether it is widely used (might be worth asking on the general ML, or in user groups). There have also been discussions on using it

Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Alex Glikson
Andrew Laski andrew.la...@rackspace.com wrote on 29/10/2013 11:14:03 PM: [...] Having Nova call into Heat is backwards IMO. If there are specific pieces of information that Nova can expose, or API capabilities to help with orchestration/placement that Heat or some other service would like

Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Alex Glikson
placement topic). Thanks, Yathi. On 10/29/13, 2:14 PM, Andrew Laski andrew.la...@rackspace.com wrote: On 10/29/13 at 04:05pm, Mike Spreitzer wrote: Alex Glikson glik...@il.ibm.com wrote on 10/29/2013 03:37:41 AM: 1. I assume that the motivation for rack-level anti-affinity is to survive a rack

Re: [openstack-dev] [nova] Thoughs please on how to address a problem with mutliple deletes leading to a nova-compute thread pool problem

2013-10-26 Thread Alex Glikson
+1 Regards, Alex Joshua Harlow harlo...@yahoo-inc.com wrote on 26/10/2013 09:29:03 AM: An idea that others and I are having for a similar use case in cinder (or it appears to be similar). If there were well-defined state machine(s) in nova with well-defined and managed transitions
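
For illustration, a minimal sketch of what "well-defined and managed transitions" could look like for the delete path; the states and transitions here are invented for the example and are not Nova's actual task states:

    ALLOWED_TRANSITIONS = {
        "ACTIVE":   {"DELETING"},
        "ERROR":    {"DELETING"},
        "DELETING": {"DELETED", "ERROR"},
        "DELETED":  set(),
    }

    class InvalidTransition(Exception):
        pass

    class InstanceStateMachine(object):
        def __init__(self, state="ACTIVE"):
            self.state = state

        def transition(self, new_state):
            """Reject any transition that is not explicitly allowed."""
            if new_state not in ALLOWED_TRANSITIONS[self.state]:
                raise InvalidTransition("%s -> %s" % (self.state, new_state))
            self.state = new_state
            return self.state

    sm = InstanceStateMachine("ACTIVE")
    sm.transition("DELETING")
    sm.transition("DELETED")
    # A second delete request now fails fast instead of queueing more work:
    # sm.transition("DELETING")  -> raises InvalidTransition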

Re: [openstack-dev] Towards OpenStack Disaster Recovery

2013-10-21 Thread Alex Glikson
Hi Caitlin, Caitlin Bestler caitlin.best...@nexenta.com wrote on 21/10/2013 06:51:36 PM: On 10/21/2013 2:34 AM, Avishay Traeger wrote: Hi all, We (IBM and Red Hat) have begun discussions on enabling Disaster Recovery (DR) in OpenStack. We have created a wiki page with our initial

Re: [openstack-dev] [nova][scheduler] A new blueprint for Nova-scheduler: Policy-based Scheduler

2013-10-16 Thread Alex Glikson
This sounds very similar to https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers We worked on it in Havana, learned a lot from the feedback during the review cycle, and hopefully will finalize the details at the summit and be able to finish the implementation in

Re: [openstack-dev] Scheduler meeting and Icehouse Summit

2013-10-14 Thread Alex Glikson
IMO, the three themes make sense, but I would suggest waiting until the submission deadline and discussing at the following IRC meeting on the 22nd. Maybe there will be more relevant proposals to consider. Regards, Alex P.S. I plan to submit a proposal regarding scheduling policies, and maybe

Re: [openstack-dev] [scheduler] Policy Model

2013-10-14 Thread Alex Glikson
I would suggest not generalizing too much; e.g., restrict the discussion to PlacementPolicy. If anyone else wants to use a similar construct for other purposes, it can be generalized later. For example, the notion of 'policy' already exists in other places in OpenStack in the context

Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-10 Thread Alex Glikson
. Regards, Alex [1] https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit [2] https://wiki.openstack.org/wiki/Heat/PolicyExtension Alex Glikson Manager, Cloud

Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-09 Thread Alex Glikson
Good summary. I would also add that in A1 the schedulers (e.g., in Nova and Cinder) could talk to each other to coordinate. Besides defining the policy, and the user-facing APIs, I think we should also outline those cross-component APIs (need to think whether they have to be user-visible, or

Re: [openstack-dev] [nova] automatically evacuate instances on compute failure

2013-10-08 Thread Alex Glikson
Seems that this can be broken into 3 incremental pieces. First, it would be great if the ability to schedule a single 'evacuate' were finally merged ( https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance ). Then, it would make sense to have the logic that evacuates an
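
A rough sketch of the second piece -- evacuating everything from a failed host -- written against a hypothetical, simplified client object rather than the real novaclient API; list_servers() and evacuate() are placeholder methods for whatever the merged blueprint ends up exposing:

    def evacuate_host(client, failed_host, on_shared_storage=True):
        """Ask the scheduler to find a new home for each instance on the dead host."""
        evacuated = []
        for server in client.list_servers(host=failed_host):
            # No target host is given: the scheduler picks one, which is
            # exactly what find-host-and-evacuate-instance adds.
            client.evacuate(server["id"], on_shared_storage=on_shared_storage)
            evacuated.append(server["id"])
        return evacuated

    class _FakeClient(object):
        """Tiny in-memory stand-in so the sketch can be exercised directly."""
        def __init__(self, servers):
            self._servers = servers
        def list_servers(self, host):
            return [s for s in self._servers if s["host"] == host]
        def evacuate(self, server_id, on_shared_storage=True):
            print("evacuating %s (shared storage: %s)" % (server_id, on_shared_storage))

    print(evacuate_host(_FakeClient([{"id": "vm-1", "host": "node-3"},
                                     {"id": "vm-2", "host": "node-3"}]), "node-3"))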

Re: [openstack-dev] [nova] [scheduler] blueprint for host/hypervisor location information

2013-10-01 Thread Alex Glikson
Mike Spreitzer mspre...@us.ibm.com wrote on 01/10/2013 06:58:10 AM: Alex Glikson glik...@il.ibm.com wrote on 09/29/2013 03:30:35 PM: Mike Spreitzer mspre...@us.ibm.com wrote on 29/09/2013 08:02:00 PM: Another reason to prefer host is that we have other resources to locate besides

Re: [openstack-dev] [nova] [scheduler] blueprint for host/hypervisor location information

2013-09-29 Thread Alex Glikson
Mike Spreitzer mspre...@us.ibm.com wrote on 29/09/2013 08:02:00 PM: Another reason to prefer host is that we have other resources to locate besides compute. Good point. Another approach (not necessarily contradicting) could be to specify the location as a property of a host aggregate rather

Re: [openstack-dev] Questions related to live migration without target host

2013-09-03 Thread Alex Glikson
I tend to agree with Jake that this check is likely to conflict with the scheduler, and should be removed. Regards, Alex From: Guangya Liu j...@unitedstack.com To: openstack-dev@lists.openstack.org, Date: 03/09/2013 02:03 AM Subject:[openstack-dev] Questions related to live

Re: [openstack-dev] [Nova] multiple-scheduler-drivers blueprint

2013-08-28 Thread Alex Glikson
It seems that the main concern was that the overridden scheduler properties are taken from the flavor, and not from the aggregate. In fact, there was a consensus that this is not optimal. I think that we can still make some progress in Havana towards per-aggregate overrides, generalizing on

Re: [openstack-dev] [Nova] multiple-scheduler-drivers blueprint

2013-08-28 Thread Alex Glikson
blueprint On Wed, Aug 28, 2013 at 9:12 AM, Alex Glikson glik...@il.ibm.com wrote: It seems that the main concern was that the overridden scheduler properties are taken from the flavor, and not from the aggregate. In fact, there was a consensus that this is not optimal. I think that we can

Re: [openstack-dev] [Nova] multiple-scheduler-drivers blueprint

2013-08-28 Thread Alex Glikson
Joe Gordon joe.gord...@gmail.com wrote on 28/08/2013 11:04:45 PM: Well, first, at the moment each of these filters duplicates the code that handles aggregate-based overrides. So, it would make sense to have it in one place anyway. Second, why duplicate all the filters if this can be
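
The "one place" being argued for could be as small as a shared helper like this sketch (the names are illustrative, not the eventual Nova utility):

    def aggregate_value_or_default(host_aggregates, key, default):
        """Return the per-aggregate override for 'key', falling back to the
        configured default; raise if the host's aggregates disagree."""
        values = {agg["metadata"][key]
                  for agg in host_aggregates if key in agg["metadata"]}
        if not values:
            return default
        if len(values) > 1:
            raise ValueError("conflicting '%s' overrides: %s" % (key, values))
        return type(default)(values.pop())

    # Every aggregate-aware filter would then reduce to one call, e.g.:
    # ratio = aggregate_value_or_default(host_aggregates, "cpu_allocation_ratio", 16.0)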

Re: [openstack-dev] Re: Proposal for approving Auto HA development blueprint.

2013-08-13 Thread Alex Glikson
Agree. Some enhancements to Nova might still be required (e.g., to handle resource reservations, so that there is enough capacity), but the end-to-end framework should probably live outside the existing services, talking to Nova, Ceilometer and potentially other components (maybe Cinder,

Re: [openstack-dev] Can we use two nova schedulers at the same time?

2013-08-13 Thread Alex Glikson
There are roughly three cases. 1. Multiple identical instances of the scheduler service. This is typically done to increase scalability, and is already supported (although it may sometimes result in provisioning failures due to race conditions between scheduler instances). There is a single queue

Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-29 Thread Alex Glikson
, Alex Glikson glik...@il.ibm.com wrote: Russell Bryant rbry...@redhat.com wrote on 24/07/2013 07:14:27 PM: I really like your point about not needing to set things up via a config file. That's fairly limiting since you can't change it on the fly via the API. True. As I pointed

Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-24 Thread Alex Glikson
Subject: Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers On 07/23/2013 04:24 PM, Alex Glikson wrote: Russell Bryant rbry...@redhat.com wrote on 23/07/2013 07:19:48 PM: I understand the use case, but can't it just be achieved with 2 flavors

Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-24 Thread Alex Glikson
Russell Bryant rbry...@redhat.com wrote on 24/07/2013 07:14:27 PM: I really like your point about not needing to set things up via a config file. That's fairly limiting since you can't change it on the fly via the API. True. As I pointed out in another response, the ultimate goal would be

Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-23 Thread Alex Glikson
Russell Bryant rbry...@redhat.com wrote on 23/07/2013 05:35:18 PM: #1 - policy associated with a host aggregate This seems very odd to me. Scheduling policy is what chooses hosts, so having a subset of hosts specify which policy to use seems backwards. This is not what we had in

Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-23 Thread Alex Glikson
Russell Bryant rbry...@redhat.com wrote on 23/07/2013 07:19:48 PM: I understand the use case, but can't it just be achieved with 2 flavors and without this new aggregate-policy mapping? flavor 1 with extra specs to say aggregate A and policy Y flavor 2 with extra specs to say aggregate B
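
Spelled out as data, the two-flavor variant being discussed would look roughly like this; the extra-spec keys are illustrative, taken from the wording of the thread rather than from a finished spec:

    flavors = {
        "m1.gold": {
            "vcpus": 4, "ram_mb": 8192,
            # Pins instances of this flavor to aggregate A and policy Y.
            "extra_specs": {"aggregate": "A", "scheduling_policy": "Y"},
        },
        "m1.bronze": {
            "vcpus": 4, "ram_mb": 8192,
            # Same shape, but mapped to aggregate B (and its own policy).
            "extra_specs": {"aggregate": "B", "scheduling_policy": "Z"},
        },
    }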

[openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-22 Thread Alex Glikson
Dear all, Following the initial discussions at the last design summit, we have published the design [2] and the first take on the implementation [3] of the blueprint adding support for multiple active scheduler policies/drivers [1]. In a nutshell, the idea is to allow overriding the 'default'