Re: [openstack-dev] [Fuel] CLI api for working with granular deployment model

2015-02-07 Thread Dmitriy Shulyak
>> Also very important to understand that if a task is mapped to role controller, but the node where you want to apply that task doesn't have this role - it won't be executed. > Is there a particular reason why we want to restrict a user to run an arbitrary task on a server, even if server doesn't

Re: [openstack-dev] [Fuel] CLI api for working with granular deployment model

2015-02-07 Thread Dmitriy Shulyak
On Sat, Feb 7, 2015 at 9:42 AM, Andrew Woodward wrote: > Dmitry, thanks for sharing CLI options. I'd like to clarify a few things. >> Also very important to understand that if task is mapped to role controller, but node where you want to apply that task doesn't have this role - it w

Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

2015-02-07 Thread Dmitriy Shulyak
On Thu, Jan 15, 2015 at 6:20 PM, Vitaly Kramskikh wrote: > I want to discuss the possibility to add a network verification status field for environments. There are 2 reasons for this: 1) One of the most frequent reasons of deployment failure is wrong network configuration. In the current UI net

Re: [openstack-dev] [Manila] Manila driver for CephFS

2015-02-07 Thread Luis Pabón
Sage and I talked about this while at Devconf and it seems it *may* be based on something similar to the GlusterFS Native driver. We will be starting discussions on how to create this integration in the ceph-devel email list. Once we have a solution, we will send an update on this email list.

Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-07 Thread Peter Boros
Hi Angus, If causal reads is set in a session, it won't delay all reads, just that specific read that you set it for. Let's say you have 4 sessions and you set causal reads in one of them; the other 3 won't wait on anything. The read in the one session that you set this in will be delayed, in the oth
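For illustration only (not part of the thread): a minimal sketch of enabling causal reads for a single session on a Galera-backed MySQL node. The connection URL and the table/column names are made up; only the session that issues the SET statement waits for the node to apply pending write-sets before serving the read, while other sessions are unaffected.

```python
# Hypothetical example: per-session causal reads with SQLAlchemy against a
# Galera node. URL, schema and UUID are illustrative placeholders.
from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:pass@galera-node-1/nova")

with engine.connect() as conn:
    # Only this session will wait until the local node has caught up with the
    # cluster before reading; sessions without this setting are not delayed.
    conn.execute(text("SET SESSION wsrep_causal_reads = ON"))
    row = conn.execute(
        text("SELECT id FROM instances WHERE uuid = :u"),
        {"u": "11111111-2222-3333-4444-555555555555"},
    ).fetchone()
```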

Re: [openstack-dev] [Manila] Manila driver for CephFS

2015-02-07 Thread Ben Swartzlander
On 02/07/2015 07:42 AM, Luis Pabón wrote: Sage and I talked about this while at Devconf and it seems it *may* be based on something similar to the GlusterFS Native driver. We will be starting discussions on how to create this integration in the ceph-devel email list. Once we have a solution,

Re: [openstack-dev] [Manila] Manila driver for CephFS

2015-02-07 Thread Luis Pabón
Not really sure. The Ganesha FSAL could be another possibility, but we are just starting the discussions. Hopefully we'll know more soon. - Luis On 02/07/2015 05:30 PM, Ben Swartzlander wrote: On 02/07/2015 07:42 AM, Luis Pabón wrote: Sage and I talked about this while at Devconf and it s

Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-07 Thread Eric Windisch
> 1) Cherry pick scheduler code from Nova, which already has a working filter scheduler design. > 2) Integrate swarmd to leverage its scheduler[2]. I see #2 as not an alternative but possibly an "also". Swarm uses the Docker API, although they're only about 75% compatible at the moment. I

Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-07 Thread Steven Dake (stdake)
From: Eric Windisch <e...@windisch.us> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org> Date: Saturday, February 7, 2015 at 10:09 AM To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@

Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-07 Thread Davanum Srinivas
I second that. My first instinct is the same as Steve's. -- dims On Sat, Feb 7, 2015 at 1:24 PM, Steven Dake (stdake) wrote: > From: Eric Windisch > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Saturday, February 7, 2015 at 10:09 AM > To: "OpenStack Deve

Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-07 Thread Adrian Otto
Ok, so if we proceed using Swarm as our first pursuit, and we want to add things to Swarm like scheduling hints, we should open a Magnum bug ticket to track each of the upstream patches, and I can help to bird dog those. We should not shy away from upstream enhancements until we get firm feedbac

Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-07 Thread Jay Lau
Sorry for being late. I'm OK with both nova scheduler and swarm, as both of them use the same logic for scheduling: filter + strategy (weight), and the code structure/logic is also very similar between nova scheduler and swarm. In my understanding, even if we use swarm and translate Go to Python, after thi
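To make the shared "filter + strategy (weight)" idea concrete, here is a minimal, hypothetical Python sketch of the pattern both Nova's filter scheduler and Swarm follow: discard hosts that cannot run the workload, then rank the remaining ones and pick the best. The host attributes, request fields and helper names are illustrative, not code from either project.

```python
# Hypothetical filter + weight scheduling sketch (not Nova or Swarm code).
def schedule(hosts, request, filters, weighers):
    # Filtering phase: keep only hosts that pass every filter.
    candidates = [h for h in hosts if all(f(h, request) for f in filters)]
    if not candidates:
        raise RuntimeError("no valid host found")
    # Weighing phase: rank the survivors and return the highest-scoring host.
    return max(candidates, key=lambda h: sum(w(h, request) for w in weighers))


# Example usage: filter on free RAM, weigh by free CPUs.
hosts = [{"name": "node1", "free_ram": 2048, "free_cpu": 4},
         {"name": "node2", "free_ram": 8192, "free_cpu": 2}]
request = {"ram": 4096}
best = schedule(hosts, request,
                filters=[lambda h, r: h["free_ram"] >= r["ram"]],
                weighers=[lambda h, r: h["free_cpu"]])
print(best["name"])  # node2
```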