Re: [openstack-dev] [Fuel] CLI api for working with granular deployment model

2015-02-07 Thread Dmitriy Shulyak
  Also very important to understand: if a task is mapped to the role
 controller, but the node where you want to apply that task doesn't have
 this role, it won't be executed.
 Is there a particular reason why we want to restrict a user from running
 an arbitrary task on a server, even if the server doesn't have a role
 assigned? I think we should be flexible here - if I'm hacking something,
 I'd like to run arbitrary things.


The reason it is not supported is that such behaviour would require two
different endpoints with quite similar functionality.
In most cases a developer will benefit from relying on role mappings; for
instance, right now one can test dependent tasks on different nodes with
the following commands:
 fuel node --node 1,2,3 --tasks corosync_primary corosync_slave
 fuel node --node 1,2 --tasks controller_service compute_service
IMO it is a reasonable requirement for the developer to ensure that a task
is properly inserted into the deployment configuration.

Also there was a discussion about implementing an API that will bypass all
nailgun logic and allow communicating directly with orchestrator
hooks, like:

 fuel exec file_with_tasks.yaml

where file_with_tasks.yaml is filled with data consumable directly by the
orchestrator.
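
For illustration, such a file might look roughly like this (a hedged sketch
based on the granular deployment task format; the exact field names, paths,
and values here are assumptions, not a confirmed schema):

  # Hypothetical sketch of file_with_tasks.yaml; paths and values are
  # assumptions, not taken from a real deployment.
  - id: netconfig
    type: puppet
    uids: ['1', '2', '3']
    parameters:
      puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/netconfig.pp
      puppet_modules: /etc/puppet/modules
      timeout: 3600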


  fuel node --node 1,2,3 --end netconfig
 I would replace --end with --end-on, in order to show that the specified
 task will run as well (to avoid ambiguity).

 This is probably a separate question about CLI UX, but still: are we OK
 with missing an action verb, like deploy? So it might be better to have,
 in my opinion:
 fuel deploy --node 1,2,3 --end netconfig


We may want to put everything that is related to deployment under one CLI
namespace, but IMO we need to be consistent, and regular deploy/provision
should be migrated as well.

 For example, if one wants to execute only netconfig's successors:
  fuel node --node 1,2,3 --start netconfig --skip netconfig
 I would come up with a shortcut for one task. To me, it would be way
 better to specify comma-separated tasks:
  fuel deploy --node 1,2,3 --task netconfig[,task2]


I don't like the comma-separated notation at all, but if the majority
thinks it is more readable than whitespace-separated, let's do it.

Question here: if netconfig depends on other tasks, will those be executed
 prior to netconfig? I want both options: execute with prior deps, and
 execute just one particular task.


When tasks are provided with the --tasks flag, no additional dependencies
will be included. Graph traversal is performed only with the --end and
--start flags.
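
To illustrate the difference (a minimal sketch, not Fuel code; the graph,
task names, and helper are invented for illustration):

  # Sketch of the --tasks / --start / --end semantics described above.
  import networkx as nx

  graph = nx.DiGraph()  # edge (a, b) means "a must run before b"
  graph.add_edges_from([
      ('hiera', 'netconfig'),
      ('netconfig', 'corosync_primary'),
      ('corosync_primary', 'corosync_slave'),
  ])

  def tasks_to_run(graph, tasks=None, start=None, end=None):
      if tasks:                                    # --tasks: exact set, no deps
          return set(tasks)
      if end:                                      # --end: task plus its deps
          return nx.ancestors(graph, end) | {end}
      if start:                                    # --start: task plus successors
          return nx.descendants(graph, start) | {start}

  print(tasks_to_run(graph, end='netconfig'))    # {'hiera', 'netconfig'}
  print(tasks_to_run(graph, start='netconfig'))  # netconfig and everything after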


 As a separate note here, a few questions:

1. If a particular task fails to execute for some reason, what is the
error handling? Will I be able to see the puppet/deployment tool exception
right in the same console, or should I check some logs? We need to have
perfect UX for errors. Those who use the CLI to run particular tasks will
be dealing with errors 95% of their time.

 There will be a UI message that the task with that id failed, but the
developer will need to check the logs (preferably astute's).
What you are suggesting is doable, but not that trivial. We will check how
much time this will take; maybe there are other ways to improve deployment
feedback.


2. I'd love to have some guidance on the slave node as well. For
instance, I want to run just netconfig on a slave node. How can I do it?

 You mean completely bypassing the fuel control plane? A developer will be
able to use the underlying tools directly: puppet apply, python, ruby,
whatever the task is using.
We may add a helper to show all task endpoints in a single place, but they
can be found easily with a usual grep.


3. If I'm stuck with an error in task execution, which is in puppet, can
I modify the puppet module on the master node and re-run the task?
(assuming that the updated module will be rsynced to the slaves under
deployment first)

 Rsyncing puppet is a separate task, so one will need to execute:

 fuel node --node 1,2,3 --tasks rsync_core_puppet netconfig


Re: [openstack-dev] [Fuel] CLI api for working with granular deployment model

2015-02-07 Thread Dmitriy Shulyak
On Sat, Feb 7, 2015 at 9:42 AM, Andrew Woodward xar...@gmail.com wrote:

 Dmitry,
 thanks for sharing CLI options. I'd like to clarify a few things.

  Also very important to understand: if a task is mapped to the role
 controller, but the node where you want to apply that task doesn't have
 this role, it won't be executed.
 Is there a particular reason why we want to restrict a user from running
 an arbitrary task on a server, even if the server doesn't have a role
 assigned? I think we should be flexible here - if I'm hacking something,
 I'd like to run arbitrary things.

 The way I've seen this work so far is that the missing role in the graph
 simply won't be executed, not the requested role.


Hi Andrew,

What do you mean by requested role?
If you want to add a new role to fuel, let's say redis, adding a new group
to the deployment configuration is mandatory; here is what it looks like [0].

Then one will need to add the tasks that are required for this group (both
custom and basic tasks like hiera and netconfig); let's say the custom
task is install_redis.

After this is done, the user will be able to use the CLI:

 fuel node --node 5 --tasks install_redis OR --end install_redis

[0]
https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/deployment_groups/tasks.yaml
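
For illustration, the group and task above might look roughly like this in
tasks.yaml (a hedged sketch; the field names follow the granular deployment
format, but the values, paths, and dependencies are assumptions):

  # Hypothetical sketch; values are assumptions, not a tested config.
  - id: redis
    type: group
    role: [redis]
    requires: [deploy_start]
    required_for: [deploy_end]
    parameters:
      strategy:
        type: parallel

  - id: install_redis
    type: puppet
    groups: [redis]
    requires: [netconfig]
    required_for: [deploy_end]
    parameters:
      puppet_manifest: /etc/puppet/modules/redis/install_redis.pp
      puppet_modules: /etc/puppet/modules
      timeout: 3600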


Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

2015-02-07 Thread Dmitriy Shulyak
On Thu, Jan 15, 2015 at 6:20 PM, Vitaly Kramskikh vkramsk...@mirantis.com
wrote:

 I want to discuss the possibility of adding a network verification status
 field for environments. There are 2 reasons for this:

 1) One of the most frequent reasons for deployment failure is wrong
 network configuration. In the current UI network verification is
 completely optional, and sometimes users are even unaware that this
 feature exists. We can warn the user before the start of deployment if
 the network check failed or wasn't performed.

 2) Currently network verification status is partially tracked by the
 status of the last network verification task. Sometimes its results
 become stale, and the UI removes the task. There are a few cases when the
 UI does this, like changing network settings, adding a new node, etc.
 (you can grep for removeFinishedNetworkTasks to see all the cases). This
 definitely should be done on the backend.



An additional field on the cluster, like network_check_status? When would
it be populated with a result?
I think it would simply duplicate task.status for the task named
network_verify.

Network check is not a single task. Right now there are two, and we will
probably need one more in this release (set up the public network and ping
the gateway). And AFAIK there is a need for other pre-deployment
verifications.

I would prefer to make a separate tab with pre-deployment verifications,
similar to OSTF.
But if you guys want to make something right now, compute the status of
network verification based on the task with the name network_verify;
if the UI deleted this task (for some reason), just add a warning that
verification wasn't performed.
If there is more than one network_verify task for a given cluster, pick
the latest one.
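
For illustration, that fallback could look roughly like this (a sketch only,
not actual nailgun code; the task attribute names are assumptions):

  # Hedged sketch: derive a cluster's network verification status from its
  # latest 'network_verify' task. Attribute names are assumed, not nailgun's.
  def network_check_status(cluster_tasks):
      verify_tasks = [t for t in cluster_tasks if t.name == 'network_verify']
      if not verify_tasks:
          # Task was removed by the UI or verification never ran: warn.
          return 'not_performed'
      latest = max(verify_tasks, key=lambda t: t.time_start)
      return latest.status  # e.g. 'ready' or 'error'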


Re: [openstack-dev] [Manila] Manila driver for CephFS

2015-02-07 Thread Luis Pabón
Sage and I talked about this while at Devconf, and it seems it *may* be
based on something similar to the GlusterFS Native driver. We will be
starting discussions on how to create this integration on the ceph-devel
mailing list. Once we have a solution, we will send an update to this
mailing list.


- Luis


On 02/02/2015 08:48 PM, Jake Kugel wrote:

OK, thanks Sebastien and Valeriy.

Jake


Sebastien Han sebastien@enovance.com wrote on 02/02/2015 06:51:10
AM:


From: Sebastien Han sebastien@enovance.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: 02/02/2015 06:54 AM
Subject: Re: [openstack-dev] [Manila] Manila driver for CephFS

I believe this will start somewhere after Kilo.


On 28 Jan 2015, at 22:59, Valeriy Ponomaryov

vponomar...@mirantis.com wrote:

Hello Jake,

The main thing that should be mentioned is that the blueprint has no
assignee. Also, it was created a long time ago and has had no activity
since.

I did not hear of any intentions about it; moreover, I did not see even
any drafts.

So, I guess, it is open for volunteers.

Regards,
Valeriy Ponomaryov

On Wed, Jan 28, 2015 at 11:30 PM, Jake Kugel jku...@us.ibm.com

wrote:

Hi,

I see there is a blueprint for a Manila driver for CephFS here [1]. It
looks like it was opened back in 2013 but is still in Drafting state.
Does anyone know more about the status of this one?

Thank you,
-Jake

[1]  https://blueprints.launchpad.net/manila/+spec/cephfs-driver






Cheers.

Sébastien Han
Cloud Architect

Always give 100%. Unless you're giving blood.

Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance





Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-07 Thread Peter Boros
Hi Angus,

If causal reads is set in a session, it won't delay all reads, just
that specific read that you set if for. Let's say you have 4 sessions,
in one of them you set causal reads, the other 3 won't wait on
anything. The read in the one session that you set this in will be
delayed, in the other 4, it won't be. Also this delay is usually
small. Since the replication itself is synchronous if a node it not
able to keep up with the rest of the cluster in terms of writes, it
will send flow control messages to the other nodes. Flow control means
that it has it's receive queue full, and the other nodes have to wait
until they can do more writes (in case of flow control writes on the
other nodes are blocked until the given node catches up with writes).
So the delay imposed here can't be arbitrarily large.
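
To make the session-scoped behaviour concrete, a minimal sketch (the table
and query are invented for illustration; in newer Galera versions
wsrep_sync_wait = 1 is the equivalent of wsrep_causal_reads):

  -- Causal reads apply per session: only this session's reads wait for the
  -- node to apply all transactions it has seen cluster-wide.
  SET SESSION wsrep_causal_reads = ON;
  SELECT vm_state FROM instances WHERE uuid = 'abc123';  -- hypothetical table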


On Sat, Feb 7, 2015 at 3:00 AM, Angus Lees g...@inodes.org wrote:
 Thanks for the additional details Peter.  This confirms the parts I'd
 deduced from the docs I could find, and is useful knowledge.

 On Sat Feb 07 2015 at 2:24:23 AM Peter Boros peter.bo...@percona.com
 wrote:

 - Like many others said it before me, consistent reads can be achieved
 with wsrep_causal_reads set on in the session.


 So the example was two dependent command-line invocations (write followed by
 read) that have no way to re-use the same DB session (without introducing
 lots of affinity issues that we'd also like to avoid).

 Enabling wsrep_causal_reads makes sure the latter read sees the effects of
 the earlier write, but comes at the cost of delaying all reads by some
 amount depending on the write-load of the galera cluster (if I understand
 correctly).  This additional delay was raised as a concern severe enough not
 to just go down this path.

 Really we don't care about other writes that may have occurred (we always
 need to deal with races against other actors), we just want to ensure our
 earlier write has taken effect on the galera server where we sent the second
 read request.  If we had some way to say wsrep_delay_until $first_txid
 then we could be sure of read-after-write from a different DB session and
 also (in the vast majority of cases) suffer no additional delay.  An opaque
 sequencer is a generic concept across many of the distributed consensus
 stores I'm familiar with, so this needn't be exposed as a Galera-only quirk.


 Meh, I gather people are bored with the topic at this point.  As I suggested
 much earlier, I'd just enable wsrep_causal_reads on the first request for
 the session and then move on to some other problem ;)

  - Gus





-- 
Peter Boros, Principal Architect, Percona
Telephone: +1 888 401 3401 ext 546
Emergency: +1 888 401 3401 ext 911
Skype: percona.pboros



Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-07 Thread Steven Dake (stdake)


From: Eric Windisch e...@windisch.us
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Saturday, February 7, 2015 at 10:09 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum


1) Cherry pick scheduler code from Nova, which already has a working filter
scheduler design.
2) Integrate swarmd to leverage its scheduler[2].

I see #2 as not an alternative but possibly an also. Swarm uses the Docker 
API, although they're only about 75% compatible at the moment. Ideally, the 
Docker backend would work with both single docker hosts and clusters of Docker 
machines powered by Swarm. It would be nice, however, if scheduler hints could 
be passed from Magnum to Swarm.

Regards,
Eric Windisch

Adrian & Eric,

I would prefer to keep things simple and just integrate directly with swarm and 
leave out any cherry-picking from Nova. It would be better to integrate 
scheduling hints into Swarm, but I’m sure the swarm upstream is busy with 
requests and this may be difficult to achieve.

Regards
-steve



Re: [openstack-dev] [Manila] Manila driver for CephFS

2015-02-07 Thread Ben Swartzlander


On 02/07/2015 07:42 AM, Luis Pabón wrote:
Sage and I talked about this while at Devconf, and it seems it *may* be
based on something similar to the GlusterFS Native driver. We will be
starting discussions on how to create this integration on the ceph-devel
mailing list. Once we have a solution, we will send an update to this
mailing list.


Sounds good. I've been looking forward to a cephfs driver for a long 
time. Is there any plan to do a cephfs-to-nfs bridge/gateway too?


-Ben





Re: [openstack-dev] [Manila] Manila driver for CephFS

2015-02-07 Thread Luis Pabón
Not really sure. The Ganesha FSAL could be another possibility, but we
are just starting the discussions. Hopefully we'll know more soon.


- Luis

On 02/07/2015 05:30 PM, Ben Swartzlander wrote:


On 02/07/2015 07:42 AM, Luis Pabón wrote:
Sage and I talked about this while at Devconf, and it seems it *may* be
based on something similar to the GlusterFS Native driver. We will be
starting discussions on how to create this integration on the ceph-devel
mailing list. Once we have a solution, we will send an update to this
mailing list.


Sounds good. I've been looking forward to a cephfs driver for a long 
time. Is there any plan to do a cephfs-to-nfs bridge/gateway too?


-Ben





Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-07 Thread Eric Windisch


 1) Cherry pick scheduler code from Nova, which already has a working
 filter scheduler design.
 2) Integrate swarmd to leverage its scheduler[2].


I see #2 as not an alternative but possibly an also. Swarm uses the
Docker API, although they're only about 75% compatible at the moment.
Ideally, the Docker backend would work with both single docker hosts and
clusters of Docker machines powered by Swarm. It would be nice, however, if
scheduler hints could be passed from Magnum to Swarm.

Regards,
Eric Windisch


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-07 Thread Davanum Srinivas
I second that. My first instinct is the same as Steve's.

-- dims

On Sat, Feb 7, 2015 at 1:24 PM, Steven Dake (stdake) std...@cisco.com wrote:



 Adrian & Eric,

 I would prefer to keep things simple and just integrate directly with swarm
 and leave out any cherry-picking from Nova. It would be better to integrate
 scheduling hints into Swarm, but I’m sure the swarm upstream is busy with
 requests and this may be difficult to achieve.

 Regards
 -steve






-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-07 Thread Jay Lau
Sorry for the late reply. I'm OK with both the nova scheduler and swarm,
as both of them use the same logic for scheduling: filter + strategy
(weight), and the code structure/logic is also very similar between the
nova scheduler and swarm.

In my understanding, even if we use swarm and translate the Go to Python,
after this scheduler is added to magnum we may notice that it is very
similar to the nova scheduler.
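
To make the shared pattern concrete, a minimal sketch of filter + strategy
(weight) scheduling (illustrative only, not code from nova or swarm; the
host attributes are invented):

  # Filter phase drops hosts that cannot satisfy the request; weight phase
  # ranks the survivors and picks the best one.
  def schedule(hosts, request, filters, weighers):
      candidates = [h for h in hosts if all(f(h, request) for f in filters)]
      if not candidates:
          raise RuntimeError('No valid host found')
      return max(candidates, key=lambda h: sum(w(h, request) for w in weighers))

  # Example with invented attributes: place a container needing 2 GB of RAM.
  hosts = [{'name': 'node1', 'free_ram': 1024}, {'name': 'node2', 'free_ram': 4096}]
  ram_filter = lambda h, r: h['free_ram'] >= r['ram']
  ram_weigher = lambda h, r: h['free_ram']
  print(schedule(hosts, {'ram': 2048}, [ram_filter], [ram_weigher]))  # node2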

Thanks!





2015-02-08 11:01 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  Ok, so if we proceed using Swarm as our first pursuit, and we want to add
 things to Swarm like scheduling hints, we should open a Magnum bug ticket
 to track each of the upstream patches, and I can help to bird dog those. We
 should not shy away from upstream enhancements until we get firm feedback
 suggesting our contributions are out of scope.

  Adrian






-- 
Thanks,

Jay Lau (Guangya Liu)


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-07 Thread Adrian Otto
Ok, so if we proceed using Swarm as our first pursuit, and we want to add 
things to Swarm like scheduling hints, we should open a Magnum bug ticket to 
track each of the upstream patches, and I can help to bird dog those. We should 
not shy away from upstream enhancements until we get firm feedback suggesting 
our contributions are out of scope.

Adrian


 Original message 
From: Steven Dake (stdake)
Date:02/07/2015 10:27 AM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum




Adrian & Eric,

I would prefer to keep things simple and just integrate directly with swarm and 
leave out any cherry-picking from Nova. It would be better to integrate 
scheduling hints into Swarm, but I’m sure the swarm upstream is busy with 
requests and this may be difficult to achieve.

Regards
-steve
