Re: [openstack-dev] [Fuel][FFE] API handler for serialized graph

2016-03-02 Thread Dmitriy Shulyak
Thanks everyone, patch was merged.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][FFE] API handler for serialized graph

2016-03-01 Thread Dmitriy Shulyak
Hello folks,

I am not sure that I will need an FFE, but in case I won't be able to land
this patch [0] tomorrow, I would like to ask for one in advance. I will need
the FFE for 2-3 days, depending mainly on fuel-web cores' availability.

Merging this patch has zero user impact, and I have also been using it for
several days to test other things (it works as expected), so it can be
considered risk-free.

0. https://review.openstack.org/#/c/284293/


[openstack-dev] [Fuel][Solar] SolarDB/ConfigDB place in Fuel

2015-12-15 Thread Dmitriy Shulyak
Hello folks,

This topic is about the configuration storage that will connect data sources
(nailgun/bareon/others) and orchestration. Right now we are developing
two projects that will overlap a bit.

I understand there is not enough context to dive into this thread right
away, but I would appreciate it if the people who participated in the design
added their opinions/clarifications on this matter.

Main disagreements
---
1. configdb should be passive; writing to configdb is someone else's
responsibility
+ simpler implementation, easier to use
- we will need another component that will do the writing, or split this
responsibility somehow

2. can be used without other solar components
+ clear interface between solar components and the storage layer
- additional work required to design/refactor the communication layer between
modules in solar
- some data will be duplicated between the solar orchestrator layer and configdb

3. templates for output
A technical detail; can be added on top of solardb if required.

Similar functionality
--
1. Hierarchical storage
2. Versioning of changes
3. Possibility to overwrite config values
4. Schema for inputs

Overall it seems that we share the same goals for both services;
the difference lies in organizational and technical implementation details.

Possible solutions

1. develop configdb and solar with duplicated functionality
- at least 2 additional components will be added to the picture:
one is configdb, the other will need to sync data between configdb and
solar
- in some cases data in solar and configdb will be 100% duplicated
- different teams will work on the same functionality
- integrating an additional component into fuel will require integration with
both configdb and solar
+ configdb will be independent from solar orchestration/other components

2. make a service out of solardb, align it with configdb use cases
+ solardb will be independent from solar orchestration/other solar
components
+ integration of a fuel component will be easier than in the 1st option
+ clarity about components' responsibilities and the new architecture
- requires redesigning/refactoring communication between components in solar

3. do not use configdb / no extraction of solardb
- in-process communication, which can lead to coupled components (not the case
currently)
+ faster implementation (no major changes required for integration with
fuel)
+ clarity about components' responsibilities and the new architecture

Summary
-
For solar it makes no difference where data comes from, configdb or
data sources, but in the overall fuel architecture it will lead to a
significant complexity increase.
It would be best to follow the 2nd path, because in the long term we don't
want tightly coupled components, but in the nearest future we need to
concentrate on:
- integration with fuel
- implementing the policy engine
- polishing solar components
This is why I am not sure that we can spend time on the 2nd path right now,
or even before 9.0.


Re: [openstack-dev] [Fuel][Fuel-Modularization] Proposal on Decoupling Serializers from Nailgun

2015-10-22 Thread Dmitriy Shulyak
Hi Oleg,

I want to mention that we are using a similar approach for the deployment
engine; the difference is that we are working not with components but with
deployment objects (these could be resources or tasks).
Right now all the data has to be provided by the user, but we are going to add
the concept of a managed resource, so that a resource will be able to request
data from a 3rd-party service before execution, or via notification, if that
is supported.
I think this is similar to what Vladimir describes.

As for the components - I see how they can be useful; for example, the
provisioning service will require data from the networking service, but I
think nailgun can act as a router for such cases.
This way we will keep components simple and purely functional, and nailgun
will play the role of a client that knows how to build the interaction
between components.

So, as a summary, I think these are 2 different problems.


Re: [openstack-dev] [Fuel][Plugins] Plugin deployment questions

2015-10-21 Thread Dmitriy Shulyak
Hi,

Can we ignore the problem above and remove this limitation? Or should
> we improve it somehow so it would work for one nodes, and will be
> ignored for others?
>
I think that this validation needs to be done in a different way:
we don't need 1 controller for the sake of 1 controller;
1 controller is a dependency of the compute/cinder/other roles. So from my pov
there are at least 2 options:

1. Use task dependencies, and prevent deployment in case some tasks
rely on the controller.
But the implementation might be complicated.

2. Insert the required metadata into roles that rely on other roles; for
compute it would be something like:
   compute:
     requires: controller > 1
We actually have a DSL for declaring such things; we just need to specify
these requirements from the other side.

But in the 2nd case we will still need to use tricks, like the one provided by
Matt, for certain plugins. So maybe we should spend the time and do the 1st.
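As a sanity check, option 2 can be sketched as a small validation pass that counts
assigned roles and verifies the declared minimums. The ROLE_REQUIREMENTS layout and
function name below are assumptions for illustration, not Fuel's actual role metadata
schema:

```python
# Hypothetical role metadata: each role lists the minimum count of other
# roles it depends on (mirroring "compute: requires: controller > 1").
ROLE_REQUIREMENTS = {
    "compute": {"controller": 1},
    "cinder": {"controller": 1},
}

def validate_assignment(assigned_roles):
    """Return violated requirements for a {node: [roles]} assignment."""
    counts = {}
    for roles in assigned_roles.values():
        for role in roles:
            counts[role] = counts.get(role, 0) + 1
    errors = []
    for role, requires in sorted(ROLE_REQUIREMENTS.items()):
        if role not in counts:
            continue  # the requirement applies only if the role is used
        for dep, minimum in requires.items():
            if counts.get(dep, 0) < minimum:
                errors.append("%s requires at least %d %s node(s)"
                              % (role, minimum, dep))
    return errors
```

With this shape, an environment with a compute node but no controller fails
validation, while one that never uses compute passes.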


Re: [openstack-dev] [Fuel][Plugins] Plugin deployment questions

2015-10-21 Thread Dmitriy Shulyak
But it will lead to situations where certain plugins, like
standalone_rabbitmq/standalone_mysql, will need to overwrite settings on *all*
dependent roles, and that might be a problem. Because how will a plugin
developer be able to know what those roles are?

On Wed, Oct 21, 2015 at 1:01 PM, Igor Kalnitsky <ikalnit...@mirantis.com>
wrote:

> Hi Dmitry,
>
> > Insert required metadata into roles that relies on another roles, for
> > compute it will be something like:
> >
> > compute:
> > requires: controller > 1
>
> Yeah, that's actually what I was thinking about when I wrote:
>
> > Or should we improve it somehow so it would work for one nodes,
> > and will be ignored for others?
>
> So I'm +1 for extending our meta information with such dependencies.
>
> Sincerely,
> Igor
>
> On Wed, Oct 21, 2015 at 12:51 PM, Dmitriy Shulyak <dshul...@mirantis.com>
> wrote:
> > Hi,
> >
> >> Can we ignore the problem above and remove this limitation? Or should
> >> we improve it somehow so it would work for one nodes, and will be
> >> ignored for others?
> >
> > I think that this validation needs to be accomplished in a different
> way, we
> > don't need 1 controller for the sake of 1 controller,
> > 1 controller is a dependency of compute/cinder/other roles. So from my
> pov
> > there is atleast 2 options:
> >
> > 1. Use tasks dependencies, and prevent deployment in case if some tasks
> > relies on controller.
> > But the implementation might be complicated
> >
> > 2. Insert required metadata into roles that relies on another roles, for
> > compute it will be something like:
> >compute:
> >  requires: controller > 1
> > We actually have DSL for declaring such things, we just need to specify
> this
> > requirements from other side.
> >
> > But in 2nd case we will still need to use tricks, like one provided by
> Matt,
> > for certain plugins. So maybe we should spend time and do 1st.


Re: [openstack-dev] [Fuel][Plugins] Plugin deployment questions

2015-10-21 Thread Dmitriy Shulyak
On Wed, Oct 21, 2015 at 1:21 PM, Igor Kalnitsky 
wrote:

> We can make bidirectional dependencies, just like our deployment tasks do.


I'm not sure that we are on the same page regarding the problem definition.
Imagine the case where we have an environment with the following set of roles:

1. standalone-rabbitmq
2. standalone-mysql
3. standalone-other-api things
4. compute - requires: controller > 1
5. cinder - requires: controller > 1
6. designate (whatever custom role) - requires: controller > 1

As you see, there is no controller anymore.
And 1, 2, 3 are developed by one guy, who knows that he needs to overwrite the
requirements for 4 and 5, but he knows nothing about 6.
At the same time the developer of role 6, obviously, knows nothing about the
standalone-* things.
What options do we have here?
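One conceivable way out (purely a sketch, not something Fuel implements) is to
declare requirements against abstract capabilities instead of role names, so the
standalone-* roles can satisfy compute's needs without either developer knowing
about the other's roles. All metadata below is assumed for illustration:

```python
# Hypothetical capability-based metadata: roles declare what they provide
# and what they require, decoupling plugins from each other's role names.
ROLE_PROVIDES = {
    "controller": {"messaging", "database", "api"},
    "standalone-rabbitmq": {"messaging"},
    "standalone-mysql": {"database"},
    "standalone-other-api": {"api"},
}
ROLE_REQUIRES = {
    "compute": {"messaging", "database", "api"},
    "cinder": {"database"},
    "designate": {"api"},
}

def unmet_capabilities(env_roles):
    """Map each role in the environment to capabilities nobody provides."""
    provided = set()
    for role in env_roles:
        provided |= ROLE_PROVIDES.get(role, set())
    return {role: ROLE_REQUIRES[role] - provided
            for role in env_roles
            if role in ROLE_REQUIRES and ROLE_REQUIRES[role] - provided}
```

Under this model, the environment from the example above validates cleanly even
though no node has the controller role.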


Re: [openstack-dev] [Fuel] Nominate Andrey Sledzinskiy for fuel-ostf core

2015-09-08 Thread Dmitriy Shulyak
+1

On Tue, Sep 8, 2015 at 9:02 AM, Anastasia Urlapova 
wrote:

> +1
>
> On Mon, Sep 7, 2015 at 6:30 PM, Tatyana Leontovich <
> tleontov...@mirantis.com> wrote:
>
>> Fuelers,
>>
>> I'd like to nominate Andrey Sledzinskiy for the fuel-ostf core team.
>> He’s been doing a great job writing patches (support for detached
>> services).
>> Also his review comments always have a lot of detailed information for
>> further improvements.
>>
>>
>> http://stackalytics.com/?user_id=asledzinskiy&release=all&project_type=all&module=fuel-ostf
>>
>> Please vote with +1/-1 for approval/objection.
>>
>> Core reviewer approval process definition:
>> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>
>> --
>> Best regards,
>> Tatyana
>>
>>
>


[openstack-dev] [Fuel] Creating roles with fuel client

2015-03-20 Thread Dmitriy Shulyak
Hi team,

I wasn't able to participate in the fuel weekly meeting, so for those of you
who are curious
how to create roles with the fuel client - here is the documentation on this
topic [1].

And here is an example of how it can be used, together with granular
deployment, to
create new roles and add deployment logic for those roles - [2].

[1] https://review.openstack.org/#/c/162085/
[2]
https://review.openstack.org/#/c/161192/7/pages/reference-architecture/task-deployment/0060-add-new-role.rst


Re: [openstack-dev] [Fuel] Deprecation warnings in python-fuelclient-6.1.*

2015-03-03 Thread Dmitriy Shulyak
Hello,

I would vote for the 2nd, but I also think that we could generate the same
information, on merge for example, that would be printed during the first run,
and place it directly in the repository (maybe even in the README?). I guess
this is what your 3rd approach is about?

So, can we go with both?





On Tue, Mar 3, 2015 at 4:52 PM, Roman Prykhodchenko m...@romcheg.me wrote:

 Hi folks!


 According to the refactoring plan [1] we are going to release the 6.1
 version of python-fuelclient which is going to contain recent changes but
 will keep backwards compatibility with what was before. However, the next
 major release will bring users the fresh CLI that won’t be compatible with
 the old one and the new, actually usable IRL API library that also will be
 different.

 The issue this message is about is the fact that there is a strong need to
 let both CLI and API users know about those changes. At the moment I can see
 3 ways of resolving it:

 1. Show deprecation warnings for commands and parameters which are going to
 be different. Log deprecation warnings for deprecated library methods.
 The problem with this approach is that the structure of both the CLI and the
 library will be changed, so a deprecation warning will be raised for almost
 every command for the whole release cycle. That does not look very user
 friendly, because users will have to run all commands with --quiet for the
 whole release cycle to mute deprecation warnings.

 2. Show the list of the deprecated stuff and planned changes on the first
 run. Then mute it.
 The disadvantage of this approach is that there is a need to store the
 info about the first run in a file. However, it may be cleaned after an
 upgrade.

 3. The same as #2 but publish the warning online.

 I personally prefer #2, but I’d like to get more opinions on this topic.


 References:

 1. https://blueprints.launchpad.net/fuel/+spec/re-thinking-fuel-client


 - romcheg




Re: [openstack-dev] [Fuel][Plugins] Plugins manager as a separate service

2015-03-03 Thread Dmitriy Shulyak
Hello,

On Tue, Mar 3, 2015 at 6:12 PM, Evgeniy L e...@mirantis.com wrote:

 Solution [3] is to implement plugin manager as a separate service
 and move all of the complexity there, fuelclient will be able to use
 REST API to install/delete/update/downgrade plugins.
 In the same way as it's done for OSTF.


I remember that such a manager was discussed, but I can't recall all the
details. So, can you please provide more information on what that manager
will be doing?

If it is going to reimplement code that was added to the fuel client, then I
think it should not be a separate service,
but another kind of deferred task that will be executed by the orchestrator.

The only technical question which I have is how are we going to
 install the packages from the container on the host system, ssh?


If we want to add some complexity to the plugin manager and, let's say,
install additional packages in the nailgun container or the ostf
container, then ssh will not be enough, and we probably need an rpc agent in
each of those containers.
But I would prefer not to mess with container state at all, and if we are
going to provide extensions to the code of nailgun/other services,
then it is time to raise the question of moving away from docker once
again.


Re: [openstack-dev] [Fuel] Separating granular tasks validator

2015-02-17 Thread Dmitriy Shulyak
+1 for a separate tasks/graph validation library

In my opinion we may even migrate the graph visualizer to this library, since
it is most useful during development, and requiring an installed fuel with
nailgun feels a bit suboptimal.


On Tue, Feb 17, 2015 at 12:58 PM, Kamil Sambor ksam...@mirantis.com wrote:

 Hi all,

 I want to discuss moving validation from our repositories into a single one.
 At the moment in fuel we have validation for granular deployment tasks in 3
 separate repositories, so we need to maintain very similar code in all of
 them. The new idea that we discussed assumes keeping this code in one
 place. Below are more details.

 The schema validator should live in a separate repo; we will install the
 validator in fuel-plugin, fuel-library, fuel-nailgun. The validator should
 support versions (return schemas and validate against them for a selected
 version).
 Reasons why we need validation in all three repositories:
 nailgun: we need validation in the api because we are able to send our own
 tasks to nailgun and execute them (now we validate the type of tasks in the
 deployment graph and during installation of a plugin)
 fuel-library: we need to check that task schemas are correctly defined in
 the task.yaml files and that tasks do not create cycles (we actually do both
 things)
 fuel-plugin: we need to check that the defined tasks are supported by the
 selected version of nailgun (now we check that task types match the types
 hardcoded in fuel-plugin; we haven't updated this part in a while, and now
 there are only 2 types of tasks: shell and puppet)
 With versioning we shouldn't have conflicts between nailgun serialization
 and fuel-plugin, because a plugin will be able to use the schemas for a
 specified version of nailgun.

 As core reviewers of these repositories we should keep the same reviewers
 as we have in fuel-core.

 What the validator should look like:
 a separate repo, installable with pip
 returns the correct schema for a selected version of fuel
 able to validate a schema for a selected version and ignore selected
 fields
 validates the graph built from selected tasks

 Pros and cons of this solution:
 pros:
 one place to keep validation
 less error-prone - we will eliminate errors caused by not updating one of
 the repos; it will also be easy to test whether changes are correct and
 compatible with all repos
 easier to develop (fewer changes when we add a new type of task or
 change the schemas of tasks - we edit just one place)
 easy distribution of code between repositories and easy use by external
 developers
 cons:
 a new repository that needs to be managed (and included in the CI/QA/release
 cycle)
 a new dependency for fuel-library, fuel-web, fuel-plugins (fuel plugin
 builder) that developers need to be aware of

 Please comment and give opinions.

 Best regards,
 Kamil Sambor
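A very rough sketch of the versioned validator library proposed above; the
schema contents, version keys and function names are illustrative assumptions,
not the actual Fuel task schemas:

```python
# Hypothetical per-version task schemas the shared library would ship.
SCHEMAS = {
    "6.1": {
        "required": ["id", "type"],
        "types": ["puppet", "shell"],
    },
}

def get_schema(version):
    """Return the task schema for the selected version of fuel."""
    return SCHEMAS[version]

def validate_task(task, version, ignore=()):
    """Validate one task dict, optionally ignoring selected fields."""
    schema = get_schema(version)
    errors = []
    for field in schema["required"]:
        if field not in ignore and field not in task:
            errors.append("missing field: %s" % field)
    if "type" in task and task["type"] not in schema["types"]:
        errors.append("unsupported task type: %s" % task["type"])
    return errors
```

The `ignore` parameter mirrors the "validate a schema for a selected version and
ignore selected fields" requirement, and pinning schemas per version is what lets
a plugin validate against the nailgun release it targets.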



Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

2015-02-09 Thread Dmitriy Shulyak
On Mon, Feb 9, 2015 at 12:51 PM, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

 Well, there are some problems with this solution:
 1. No 'pick latest one with filtering to network_verify' handler is
 available currently.


Well, I think there should be a finished_at field anyway, so why not add it
for this purpose?

 2. Tasks are ephemeral entities -- they get deleted here and there.
 Look at nailgun/task/manager.py for example -- lines 83-88 or lines
 108-120 and others


I don't actually recall what the reason for deleting them was, but if it
happens, imo it is ok to show
that network verification wasn't performed.

 3. Just having network verification status as ready is NOT enough.
 From the UI you can fire off network verification for unsaved changes.
 Some JSON request is made, network configuration validated by tasks
 and RPC call made returing that all is OK for example. But if you
 haven't saved your changes then in fact you haven't verified your
 current configuration, just some other one. So in this case task
 status 'ready' doesn't mean that current cluster config is valid. What
 do you propose in this case? Fail the task on purpose? I only see a

 solution to this by introducing a new flag, and network_check_status
 seems to be an appropriate one.


My point is that it has very limited UX. Right now the network check is:
- l2 with vlans verification
- dhcp verification

When we have time we will add:
- multicast routing verification
- public gateway
Also there is more stuff that different users have been asking about.

Then I know that the vmware team also wants to implement pre_deployment
verifications.

So what will this net_check_status refer to at that point?


Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

2015-02-09 Thread Dmitriy Shulyak
On Mon, Feb 9, 2015 at 1:35 PM, Przemyslaw Kaminski pkamin...@mirantis.com
wrote:

  Well i think there should be finished_at field anyway, why not to
  add it for this purpose?

 So you're suggesting to add another column and modify all tasks for
 this one feature?


Such things as timestamps should be on all tasks anyway.


  I dont actually recall what was the reason to delete them, but if
  it happens imo it is ok to show right now that network verification
  wasnt performed.

 Is this how one does predictible and easy to understand software?
 Sometimes we'll say that verification is OK, othertimes that it wasn't
 performed?

In my opinion, the question that needs to be answered is: what is the reason
or event for removing the verify_networks task history?


  3. Just having network verification status as ready is NOT enough.
  From the UI you can fire off network verification for unsaved
  changes. Some JSON request is made, network configuration validated
  by tasks and RPC call made returing that all is OK for example. But
  if you haven't saved your changes then in fact you haven't verified
  your current configuration, just some other one. So in this case
  task status 'ready' doesn't mean that current cluster config is
  valid. What do you propose in this case? Fail the task on purpose?

 Issue #3 I described is still valid -- what is your solution in this case?

Ok, sorry.
What do you think if, in such a case, we remove the old tasks?
It seems to me that this is exactly the event on which the old verify_networks
becomes invalid anyway,
and there is no point in storing the history.


 As far as I understand, there's one supertask 'verify_networks'
 (called in nailgu/task/manager.py line 751). It spawns other tasks
 that do verification. When all is OK verify_networks calls RPC's
 'verify_networks_resp' method and returns a 'ready' status and at that
 point I can inject code to also set the DB column in cluster saying
 that network verification was OK for the saved configuration. Adding
 other tasks should in no way affect this behavior since they're just
 subtasks of this task -- or am I wrong?


It is not that smooth, but in general yes - it can be done when the state of
verify_networks changes.
But let's say we have a some_settings_verify task - would it be valid to add
one more field on the cluster model, like some_settings_status?


Re: [openstack-dev] [Fuel] CLI api for working with granular deployment model

2015-02-07 Thread Dmitriy Shulyak
  Also very important to understand that if task is mapped to role
 controller, but node where you want to apply that task doesn't have this
 role - it wont be executed.
 Is there a particular reason why we want to restrict a user to run an
 arbitrary task on a server, even if server doesn't have a role assigned? I
 think we should be flexible here - if I'm hacking something, I'd like to
 run arbitrary things.


The reason it is not supported is that such behaviour would require two
different endpoints with quite similar functionality.
In most cases a developer will benefit from relying on role mappings; for
instance, right now one is able to test dependent tasks on different nodes
with the following commands:
 fuel node --node 1,2,3 --tasks corosync_primary corosync_slave
 fuel node --node 1,2 --tasks controller_service compute_service
IMO it is a reasonable requirement for a developer to ensure that a task is
properly inserted into the deployment configuration.

Also there was a discussion about implementing an api that would bypass all
nailgun logic and allow communicating directly with orchestrator
hooks, like:

 fuel exec  file_with_tasks.yaml

where file_with_tasks is filled with data consumable directly by the
orchestrator.

  fuel node --node 1,2,3 --end netconfig
 I would replace --end - --end-on, in order to show that task specified
 will run as well (to avoid ambiguity)

 This is separate question probably about CLI UX, but still - are we Ok
 with missing an action verb, like deploy? So it might be better to have,
 in my opinion:
 fuel deploy --node 1,2,3 --end netconfig


We may want to put everything related to deployment under one CLI
namespace, but IMO we need to be consistent, and the regular deploy/provision
commands should be migrated as well.

 For example if one want to execute only netconfig successors:
  fuel node --node 1,2,3 --start netconfig --skip netconfig
 I would come up with shortcut for one task. To me, it would be way better
 to specify comma-separated tasks:
  fuel deploy --node 1,2,3 --task netconfig[,task2]


I don't like the comma-separated notation at all; if the majority thinks it
is more readable than whitespace - let's do it.

Question here: if netconfig depends on other tasks, will those be executed
 prior to netconfig? I want both options, execute with prior deps, and
 execute just one particular task.


When tasks are provided with the --tasks flag, no additional dependencies will
be included. Traversal is performed only with the --end and --start flags.
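The selection semantics described above can be sketched in plain Python (this is
illustrative, not Fuel's actual implementation): --end runs a task plus everything
it depends on, --start runs a task plus everything that depends on it, and --tasks
runs only the listed tasks. The dependency graph here is an assumed example:

```python
# Assumed example graph: task -> tasks it requires.
DEPS = {
    "hiera": [],
    "globals": ["hiera"],
    "netconfig": ["globals"],
    "firewall": ["netconfig"],
}

def with_predecessors(task):
    """Tasks to execute for `--end <task>`: the task and all its ancestors."""
    seen, stack = set(), [task]
    while stack:
        current = stack.pop()
        if current not in seen:
            seen.add(current)
            stack.extend(DEPS[current])
    return seen

def with_successors(task):
    """Tasks to execute for `--start <task>`: the task and its descendants."""
    return {t for t in DEPS if task in with_predecessors(t)}
```

So `--end netconfig` would select hiera, globals and netconfig, while
`--start netconfig` would select netconfig and firewall.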


 As a separate note here, a few question:

1. If particular task fails to execute for some reason, what is the
error handling? Will I be able to see puppet/deployment tool exception
right in the same console, or should I check out some logs? We need to have
perfect UX for errors. Those who will be using CLI to run particular tasks,
will be dealing with errors for 95% of their time.

There will be a UI message that the task with that id failed, but the
developer will need to go to the logs (astute, preferably).
What you are suggesting is doable, but not that trivial. We will check how
much time it would take, and maybe there are other ways to improve
deployment feedback.


1. I'd love to have some guidance on slave node as well. For instance,
I want to run just netconfig on slave node. How can I do it?

You mean completely bypassing the fuel control plane? A developer is able
to use the underlying tools directly: puppet apply, python, ruby, whatever
the task is using.
We may add a helper to show all task endpoints in a single place, but they
can be found easily with the usual grep.


1. If I stuck with error in task execution, which is in puppet. Can I
modify puppet module on master node, and re-run the task? (assuming that
updated module will be rsynced to slaves under deployment first)

 Rsync of puppet is a separate task, so one will need to execute:

 fuel node --node 1,2,3 --tasks rsync_core_puppet netconfig


Re: [openstack-dev] [Fuel] CLI api for working with granular deployment model

2015-02-07 Thread Dmitriy Shulyak
On Sat, Feb 7, 2015 at 9:42 AM, Andrew Woodward xar...@gmail.com wrote:

 Dmitry,
 thanks for sharing CLI options. I'd like to clarify a few things.

  Also very important to understand that if task is mapped to role
 controller, but node where you want to apply that task doesn't have this
 role - it wont be executed.
 Is there a particular reason why we want to restrict a user to run an
 arbitrary task on a server, even if server doesn't have a role assigned? I
 think we should be flexible here - if I'm hacking something, I'd like to
 run arbitrary things.

 The way I've seen this work so far is the missing role in the graph
 simply wont be executed, not the requested role


Hi Andrew,

What do you mean by "requested role"?
If you want to add a new role to fuel, let's say redis, adding a new group to
the deployment configuration is mandatory; here is what it looks like [0].

Then one will need to add the tasks that are required for this group (both
custom tasks and basic ones like hiera and netconfig); let's say the custom
task is install_redis.

After this is done the user will be able to use the cli:

 fuel node --node 5 --tasks install_redis OR --end install_redis

[0]
https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/deployment_groups/tasks.yaml


Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

2015-02-07 Thread Dmitriy Shulyak
On Thu, Jan 15, 2015 at 6:20 PM, Vitaly Kramskikh vkramsk...@mirantis.com
wrote:

 I want to discuss possibility to add network verification status field for
 environments. There are 2 reasons for this:

 1) One of the most frequent reasons of deployment failure is wrong network
 configuration. In the current UI network verification is completely
 optional and sometimes users are even unaware that this feature exists. We
 can warn the user before the start of deployment if network check failed of
 wasn't performed.

 2) Currently network verification status is partially tracked by status of
 the last network verification task. Sometimes its results become stale, and
 the UI removes the task. There are a few cases when the UI does this, like
 changing network settings, adding a new node, etc (you can grep
 removeFinishedNetworkTasks to see all the cases). This definitely should
 be done on backend.



An additional field on the cluster like network_check_status? When will it be
populated with the result? I think it will simply duplicate the status of the
task named network_verify.

Network check is not a single task. Right now there are two, and we will
probably need one more in this very release (set up the public network and ping
the gateway). And AFAIK there is a need for other pre-deployment verifications.

I would prefer to make a separate tab with pre-deployment verifications,
similar to OSTF.
But if you guys want to make something right now: compute the status of network
verification based on the task named network_verify,
and if that task was deleted from the UI (for some reason) just add a warning
that verification wasn't performed.
If there is more than one network_verify task for any given cluster,
pick the latest one.
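The rule above (derive the status from the latest network_verify task, warn
when none exists) fits in a few lines of Python; the task fields here are
illustrative, not Nailgun's actual schema:

```python
from datetime import datetime

def network_check_status(tasks, cluster_id):
    """Compute a cluster's network verification status from its tasks.

    Picks the most recent task named 'network_verify' for the cluster;
    returns 'not_performed' when no such task exists.
    """
    verify_tasks = [
        t for t in tasks
        if t["name"] == "network_verify" and t["cluster"] == cluster_id
    ]
    if not verify_tasks:
        return "not_performed"  # the UI should warn that verification was skipped
    latest = max(verify_tasks, key=lambda t: t["time"])
    return latest["status"]

tasks = [
    {"name": "network_verify", "cluster": 1, "status": "error",
     "time": datetime(2015, 2, 7, 10, 0)},
    {"name": "network_verify", "cluster": 1, "status": "ready",
     "time": datetime(2015, 2, 7, 12, 0)},
    {"name": "deploy", "cluster": 1, "status": "running",
     "time": datetime(2015, 2, 7, 13, 0)},
]

print(network_check_status(tasks, 1))  # latest network_verify wins -> ready
print(network_check_status(tasks, 2))  # nothing for cluster 2 -> not_performed
```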


Re: [openstack-dev] [Fuel] CLI api for working with granular deployment model

2015-02-06 Thread Dmitriy Shulyak
 Thank you for the excellent run-down of the CLI commands. I assume this
 will make its way into the developer documentation? I would like to know if
 you could point me to more information about the inner workings of granular
 deployment. Currently it's challenging to debug issues related to granular
 deployment.


All tasks that are in the scope of a role are serialized right into the
deployment configuration that is consumed by Astute. So they can be traced in
the logs (Nailgun or Astute) or in the astute.yaml that is stored on the node
itself. Here is what it looks like [0].
Some other internals are described in the spec:
https://review.openstack.org/#/c/113491/.

For a developer it makes sense to get familiar with networkx data structures
[1] and then dive into debugging [2].
But that is not an option for day-to-day usage, and UX will be improved by the
graph visualizer [3].

One more option that could improve understanding is a human-readable planner.
For example, it could output something like this:

 fuel deployment plan --start hiera --end netconfig

   Manifest hiera.pp will be executed on nodes [1,2,3]
   Manifest netconfig will be executed on nodes [1,2]

But I am not sure this thing is really necessary; the dependencies are trivial
in comparison to Puppet's, and I hope it will take very little time to
understand how things work :)
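For what it's worth, the planner idea is easy to prototype with the standard
library alone (graphlib is Python 3.9+); the task graph and node lists below
are made up for illustration:

```python
from graphlib import TopologicalSorter

# Toy task graph: task -> tasks it requires (illustrative, not real Fuel data).
requires = {
    "hiera": set(),
    "globals": {"hiera"},
    "netconfig": {"globals"},
}
# Which nodes each manifest would be applied to (also illustrative).
nodes_for_task = {"hiera": [1, 2, 3], "globals": [1, 2, 3], "netconfig": [1, 2]}

# static_order() yields the tasks in an order that respects the dependencies.
for task in TopologicalSorter(requires).static_order():
    print("Manifest %s.pp will be executed on nodes %s"
          % (task, nodes_for_task[task]))
```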

As an example there is a bug [0] where tasks appear to be run in the wrong
 order based on which combination of roles exist in the environment.
 However, it's not clear how to determine what decides which tasks to run
 and when (is it astute, fuel-library, etc.), where the data comes from.
 etc.


As for the bug - it may be a duplicate of
https://launchpad.net/bugs/1417579, which was fixed by
https://review.openstack.org/#/c/152511/

[0] http://paste.openstack.org/show/168298/
[1]
http://networkx.github.io/documentation/latest/tutorial/tutorial.html#directed-graphs
[2]
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_graph.py#L29
[3] https://review.openstack.org/#/c/152434/


[openstack-dev] [Fuel] CLI api for working with granular deployment model

2015-02-06 Thread Dmitriy Shulyak
Hello folks,

Not long ago we added necessary commands in fuel client to work with
granular deployment configuration and API.

So, you may know that the configuration is stored in fuel-library and uploaded
into the database during
bootstrap of the Fuel master. If you want to change/add some tasks right on the
master node, just add tasks.yaml
and the appropriate manifests in the folder for the release you are interested
in. Then apply this command:

 fuel rel --sync-deployment-tasks --dir /etc/puppet

Also you may want to overwrite the deployment tasks for any specific
release/cluster with the following commands:

 fuel rel --rel id --deployment-tasks --download
 fuel rel --rel id --deployment-tasks --upload

 fuel env --env id --deployment-tasks --download
 fuel env --env id --deployment-tasks --upload

After this is done you will be able to run a customized graph of tasks.

The most basic command:

 fuel node --node 1,2,3 --tasks upload_repos netconfig

The developer will need to specify the nodes that should be used in deployment
and the task ids. The order in which they are provided doesn't matter;
it will be computed from the dependencies specified in the database. It is also
very important to understand that if a task is mapped to the controller role,
but the node where you want to apply that task doesn't have this role, it won't
be executed.

Skipping of tasks

 fuel node --node 1,2,3 --skip netconfig hiera

Tasks provided with this parameter will be skipped during
graph traversal in Nailgun.
The main question is: should we also skip other tasks that have the provided
tasks as dependencies?
In my opinion we can leave this flag as simple as it is, and use the following
commands for smarter traversal.

Specify start and end nodes in graph:

 fuel node --node 1,2,3 --end netconfig

Will deploy everything up to the netconfig task, including netconfig. That is:
all tasks that we consider pre_deployment (key generation, rsyncing manifests,
time sync, uploading repos),
plus such tasks as hiera setup, globals computation and maybe some other
basic preparatory tasks.

 fuel node --node 1,2,3 --start netconfig

Starts from netconfig, including netconfig, and deploys all the remaining
tasks, including those we consider post_deployment.
For example, if one wants to execute only netconfig's successors:

 fuel node --node 1,2,3 --start netconfig --skip netconfig

And the user will be able to use start and end at the same time:

 fuel node --node 1,2,3 --start netconfig --end upload_cirros

Nailgun will build a path that includes only the tasks necessary to join these
two points. The start flag is not merged yet, but I think it will be by
Monday.
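The traversal behind --start/--end boils down to intersecting the descendants
of the start task with the ancestors of the end task. A stdlib-only sketch of
the idea (not Nailgun's actual code; the graph is invented):

```python
from collections import defaultdict, deque

def reachable(edges, start):
    """All tasks reachable from `start` (start included), via BFS."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in edges[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def plan(deps, start=None, end=None):
    """Tasks lying on paths between --start and --end (both inclusive)."""
    forward = defaultdict(set)   # task -> tasks that depend on it
    backward = defaultdict(set)  # task -> its dependencies
    for task, requires in deps.items():
        backward[task] |= set(requires)
        for req in requires:
            forward[req].add(task)
    selected = set(deps)
    if start:
        selected &= reachable(forward, start)   # start and its successors
    if end:
        selected &= reachable(backward, end)    # end and its predecessors
    return selected

# Illustrative chain: hiera -> globals -> netconfig -> upload_cirros
deps = {"hiera": [], "globals": ["hiera"],
        "netconfig": ["globals"], "upload_cirros": ["netconfig"]}

print(sorted(plan(deps, start="netconfig")))  # ['netconfig', 'upload_cirros']
print(sorted(plan(deps, end="netconfig")))    # ['globals', 'hiera', 'netconfig']
print(sorted(plan(deps, start="globals", end="netconfig")))  # ['globals', 'netconfig']
```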

Also we are working on deployment graph visualization; it will be static (I
mean there is no progress tracking of any kind),
but it will help a lot to understand what is going to be executed.

Thank you for reading. I would like to hear more thoughts about this, and will
answer any questions.


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-02-02 Thread Dmitriy Shulyak
  But why to add another interface when there is one already (rest api)?

 I'm ok if we decide to use REST API, but of course there is a problem which
 we should solve, like versioning, which is much harder to support, than
 versioning
 in core-serializers. Also do you have any ideas how it can be implemented?


We need to think about deployment serializers not as part of Nailgun (the Fuel
data inventory), but as part of another layer which uses the Nailgun API to
generate deployment information. Let's take Ansible as an example, with its
dynamic inventory feature [1].
The Nailgun API can be used inside an Ansible dynamic inventory script to
generate the config that will be consumed by Ansible during deployment.

Such an approach would have several benefits:
- a cleaner interface (the ability to use Ansible as the main interface to
control deployment and all its features)
- deployment configuration tightly coupled with deployment code
- no limitation on which sources to use for configuration, or on how to
compute additional values from the requested data

I want to emphasize that I am not considering Ansible as a solution for Fuel;
it serves only as an example of the architecture.
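To make the analogy concrete: a dynamic-inventory script along those lines
would only have to reshape what the API returns. The HTTP call is left out and
the node fields are invented here, since the point is the transformation:

```python
import json

def build_inventory(nailgun_nodes):
    """Turn node data (as a hypothetical GET /api/nodes could return it)
    into an Ansible dynamic-inventory structure."""
    inventory = {"_meta": {"hostvars": {}}}
    for node in nailgun_nodes:
        # Every role becomes an inventory group containing the node.
        for role in node["roles"]:
            inventory.setdefault(role, {"hosts": []})["hosts"].append(node["ip"])
        inventory["_meta"]["hostvars"][node["ip"]] = {"fqdn": node["fqdn"]}
    return inventory

# Sample payload; field names are illustrative, not Nailgun's exact schema.
nodes = [
    {"ip": "10.20.0.3", "fqdn": "node-1.domain.tld", "roles": ["controller"]},
    {"ip": "10.20.0.4", "fqdn": "node-2.domain.tld", "roles": ["compute", "cinder"]},
]

print(json.dumps(build_inventory(nodes), indent=2))
```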


 You run some code which get the information from api on the master node and
 then sets the information in tasks? Or you are going to run this code on
 OpenStack
 nodes? As you mentioned in case of tokens, you should get the token right
 before
 you really need it, because of expiring problem, but in this case you don't
 need any serializers, get required token right in the task.


I think all information should be fetched before deployment.



 What is your opinion about serializing additional information in plugins
 code? How it can be done, without exposing db schema?

 With exposing the data in more abstract way the way it's done right now
 for the current deployment logic.


I mean, what if a plugin wants to generate additional data, like
https://review.openstack.org/#/c/150782/? Will the schema still be exposed?

[1] http://docs.ansible.com/intro_dynamic_inventory.html


Re: [openstack-dev] [Fuel][Plugins][Orchestration] Unclear handling of primary-controler and controller roles

2015-01-28 Thread Dmitriy Shulyak
But without this separation at the orchestration layer we are unable to
differentiate between nodes.
What I mean is: we need to run a subset of tasks on the primary first and then
on all the others, and we are using the role as a mapper
to node identities (and this mechanism was hardcoded in Nailgun for a long
time).

Let's say we have task A that is mapped to primary-controller and task B that
is mapped to secondary controllers, and task B requires task A.
If there is no primary in the mapping, we will execute task A on all
controllers and then task B on all controllers.
How, in such a case, will the deployment code know that it should not execute
the commands of task A on the secondary controllers and
of task B on the primary?
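To make the problem concrete, here is a tiny sketch (plain Python, hypothetical
data) of how an orchestrator resolves role mappings to nodes - and why it
cannot split A and B without a persisted primary-controller role:

```python
def nodes_for_task(task_roles, nodes):
    """Resolve a task's role mapping to concrete node ids."""
    return [uid for uid, roles in nodes.items()
            if set(roles) & set(task_roles)]

# 'primary-controller' persisted as a separate (internal) role:
nodes = {1: ["primary-controller"], 2: ["controller"], 3: ["controller"]}

task_a = {"id": "A", "roles": ["primary-controller"]}
task_b = {"id": "B", "roles": ["controller"], "requires": ["A"]}

print(nodes_for_task(task_a["roles"], nodes))  # [1] - primary only, runs first
print(nodes_for_task(task_b["roles"], nodes))  # [2, 3] - secondaries
# Without the split, both tasks would map to [1, 2, 3] and the deployment
# code itself would have to detect whether it is running on the primary.
```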

On Wed, Jan 28, 2015 at 10:44 AM, Sergii Golovatiuk 
sgolovat...@mirantis.com wrote:

 Hi,

 *But with introduction of plugins and granular deployment, in my opinion,
 we need to be able*
 *to specify that task should run specifically on primary, or on
 secondaries. Alternative to this approach would be - always run task on all
 controllers, and let task itself to verify that it is  executed on primary
 or not.*

 I wouldn't differentiate tasks for primary and other controllers.
 Primary-controller logic should be controlled by task itself. That will
 allow to have elegant and tiny task framework ...

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Tue, Jan 27, 2015 at 11:35 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hello all,

 You may know that for deployment configuration we are serializing
 additional prefix for controller role (primary), with the goal of
 deployment order control (primary-controller always should be deployed
  before secondaries) and some conditions in fuel-library code.

 However, we cannot guarantee that primary controller will be always the
 same node, because it is not business of nailgun to control elections of
 primary. Essentially user should not rely on nailgun
 information to find primary, but we need to persist node elected as
 primary in first deployment
 to resolve orchestration issues (when new node added to cluster we should
 not mark it as primary).

 So we called primary-controller - internal role, which means that it is
 not exposed to users (or external developers).
 But with introduction of plugins and granular deployment, in my opinion,
 we need to be able
 to specify that task should run specifically on primary, or on
 secondaries. Alternative to this approach would be - always run task on all
 controllers, and let task itself to verify that it is  executed on primary
 or not.

 Is it possible to have significantly different sets of tasks for
 controller and primary-controller?
 And same goes for mongo, and i think we had primary for swift also.



Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-28 Thread Dmitriy Shulyak


 It's not clear what problem you are going to solve with putting serializers
 alongside with deployment scripts/tasks.

I see two possible uses for a task-specific serializer:
1. Compute additional information for deployment based not only on what is
present in astute.yaml
2. Request information from external sources based on values stored in the
Fuel inventory (like a token obtained from credentials)

For sure there is no way for this serializers to have access to the
 database,
 because with each release there will be a huge probability to get this
 serializers broken for example because of changes in database schema.
 As Dmitry mentioned in this case solution is to create another layer
 which provides stable external interface to the data.
 We already to have this interface where we support versioning and backward
 compatibility, in terms of deployment script it's astute.yaml file.

That is the problem: it is impossible to cover everything with astute.yaml.
We need to think of a way to present all the data available in Nailgun as
deployment configuration.


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-28 Thread Dmitriy Shulyak
Thank you guys for the quick response.
Then, if there is no better option, we will follow the second approach.

On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Dmitry,

 I'm not sure if we should user approach when task executor reads
 some data from the file system, ideally Nailgun should push
 all of the required data to Astute.
 But it can be tricky to implement, so I vote for 2nd approach.

 Thanks,

 On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko adide...@mirantis.com
 wrote:

 3rd option is about using rsyncd that we run under xinetd on primary
 controller. And yes, the main concern here is security.

 On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi.
 I'm vote for second option, cause if we will want to implement some
 unified hierarchy (like Fuel as CA for keys on controllers for different
 env's) then it will fit better than other options. If we implement 3rd
 option then we will reinvent the wheel with SSL in future. Bare rsync as
 storage for private keys sounds pretty uncomfortable for me.

 On Wed, Jan 28, 2015 at 6:44 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi folks,

 I want to discuss the way we are working with generated keys for
 nova/ceph/mongo and something else.

 Right now we are generating keys on master itself, and then
 distributing them by mcollective
 transport to all nodes. As you may know we are in the process of making
 this process described as
 task.

 There is a couple of options:
 1. Expose keys in rsync server on master, in folder /etc/fuel/keys, and
 then copy them with rsync task (but it feels not very secure)
 2. Copy keys from /etc/fuel/keys on master, to /var/lib/astute on
 target nodes. It will require additional
 hook in astute, smth like copy_file, which will copy data from file on
 master and put it on the node.

 Also there is 3rd option to generate keys right on primary-controller
 and then distribute them on all other nodes, and i guess it will be
 responsibility of controller to store current keys that are valid for
 cluster. Alex please provide more details about 3rd approach.

 Maybe there is more options?






Re: [openstack-dev] [Fuel][Plugins][Orchestration] Unclear handling of primary-controler and controller roles

2015-01-28 Thread Dmitriy Shulyak
 Also I would like to mention that in plugins user currently can write
 'roles': ['controller'],
 which means that the task will be applied on 'controller' and
 'primary-controller' nodes.
 Plugin developer can get this information from astute.yaml file. But I'm
 curious if we
 should change this behaviour for plugins (with backward compatibility of
 course)?


In my opinion we should make the interface for task descriptions identical for
plugins and for the library;
if this separation makes sense for the library, there will be cases where it is
expected by plugin developers
as well.


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-28 Thread Dmitriy Shulyak
 1. as I mentioned above, we should have an interface, and if interface
 doesn't
 provide required information, you will have to fix it in two places,
 in Nailgun and in external-serializers, instead of a single place i.e.
 in Nailgun,
 another thing if astute.yaml is a bad interface and we should provide
 another
 versioned interface, or add more data into deployment serializer.

But why add another interface when there is one already (the REST API)? A
plugin developer
may query whatever he wants (detailed information about volumes, interfaces,
master node settings).
It is the most complete source of information in Fuel, and it already needs to
be protected from incompatible changes.

If our API turns out not to be enough for general use, of course we will need
to fix it, but I don't quite understand what you mean by fixing it in two
places. The API provides general information that
can be consumed by serializers (or any other service/human, actually),
and if there are issues with that information, the API should be fixed.
A serializer expects that information in a specific format and performs
additional transformation or computation based on it.

What is your opinion about serializing additional information in plugin
code? How can it be done without exposing the db schema?

2. it can be handled in python or any other code (which can be wrapped into
 tasks),
 why should we implement here another entity (a.k.a external
 serializers)?

Yep, I guess this is true. I thought that we may not want to deliver
credentials to the target nodes, only a token that can be used
for a limited time, but...


Re: [openstack-dev] [Fuel] [Puppet] Manifests for granular deploy steps and testing results against the host OS

2015-01-28 Thread Dmitriy Shulyak
Guys, is it a crazy idea to write tests for the deployment state on a node in
Python?
It can even be done in a unit-test fashion.

I mean there is no strict dependency on a tool from the Puppet world; what is
needed is access to the OS and shell, maybe some utils.

 What plans have Fuel Nailgun team for testing the results of deploy steps
aka tasks?
From the nailgun/orchestration point of view, verification of a deployment
should be done as another task, or included in the original one.
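A minimal sketch of what such Python checks could look like - the checked
resources here are arbitrary examples, not a proposed test suite:

```python
import os
import subprocess
import unittest

class TestNetconfigTask(unittest.TestCase):
    """Hypothetical post-deployment checks for a netconfig-like task."""

    def test_hosts_file_present(self):
        # Plain OS access, no Puppet-world tooling needed.
        self.assertTrue(os.path.isfile("/etc/hosts"))

    def test_shell_access_works(self):
        # Shelling out covers the same ground a serverspec resource would.
        out = subprocess.check_output(["uname", "-s"])
        self.assertTrue(out.strip())

# Run the checks programmatically, e.g. from a verification task.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNetconfigTask)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```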

On Thu, Jan 22, 2015 at 5:44 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Moreover I would suggest to use server spec as beaker is already
 duplicating part of our infrastructure automatization.

 On Thu, Jan 22, 2015 at 6:44 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Guys, I suggest that we create a blueprint how to integrate beaker with
 our existing infrastructure to increase test coverage. My optimistic
 estimate is that we can see its implementation in 7.0.

 On Thu, Jan 22, 2015 at 2:07 AM, Andrew Woodward xar...@gmail.com
 wrote:

 My understanding is serverspec is not going to work well / going to be
 supported. I think it was discusssed on IRC (as i cant find it in my
 email). Stackforge/puppet-ceph moved from ?(something)spec to beaker,
 as its more functional and actively developed.

 On Mon, Jan 12, 2015 at 6:10 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  Hi,
 
  Puppet OpenStack community uses Beaker for acceptance testing. I would
  consider it as option [2]
 
  [2] https://github.com/puppetlabs/beaker
 
  --
  Best regards,
  Sergii Golovatiuk,
  Skype #golserge
  IRC #holser
 
  On Mon, Jan 12, 2015 at 2:53 PM, Bogdan Dobrelya 
 bdobre...@mirantis.com
  wrote:
 
  Hello.
 
  We are working on the modularization of Openstack deployment by puppet
  manifests in Fuel library [0].
 
  Each deploy step should be post-verified with some testing framework
 as
  well.
 
  I believe the framework should:
  * be shipped as a part of Fuel library for puppet manifests instead of
  orchestration or Nailgun backend logic;
  * allow the deployer to verify results right in-place, at the node
 being
  deployed, for example, with a rake tool;
  * be compatible / easy to integrate with the existing orchestration in
  Fuel and Mistral as an option?
 
  It looks like test resources provided by Serverspec [1] are a good
  option, what do you think?
 
  What plans have Fuel Nailgun team for testing the results of deploy
  steps aka tasks? The spec for blueprint gives no a clear answer.
 
  [0]
 
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
  [1] http://serverspec.org/resource_types.html
 
  --
  Best regards,
  Bogdan Dobrelya,
  Skype #bogdando_at_yahoo.com
  Irc #bogdando
 
 
 



 --
 Andrew
 Mirantis
 Ceph community






 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com



Re: [openstack-dev] [Fuel][Fuel-Library] MVP implementation of Granular Deployment merged into Fuel master branch

2015-01-28 Thread Dmitriy Shulyak
Andrew,
What should be sorted out? It is unavoidable that people will comment and
ask questions during a development cycle.
I am not sure that merging the spec as early as possible, and then adding
comments and different fixes, is a good strategy.
On the other hand we need to eliminate risks... but how can merging the spec
help?

On Wed, Jan 28, 2015 at 8:49 PM, Andrew Woodward xar...@gmail.com wrote:

 Vova,

 Its great to see so much progress on this, however it appears that we
 have started merging code prior to the spec landing [0] lets get it
 sorted ASAP.

 [0] https://review.openstack.org/#/c/113491/

 On Mon, Jan 19, 2015 at 8:21 AM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:
  Hi, Fuelers and Stackers
 
  I am glad to announce that we merged initial support for granular
 deployment
  feature which is described here:
 
 
 https://blueprints.launchpad.net/fuel/+spec/granular-deployment-based-on-tasks
 
  This is an important milestone for our overall deployment and operations
  architecture as well as it is going to significantly improve our testing
 and
  engineering process.
 
  Starting from now we can start merging code for:
 
  https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
  https://blueprints.launchpad.net/fuel/+spec/fuel-library-modular-testing
 
  We are still working on documentation and QA stuff, but it should be
 pretty
  simple for you to start trying it out. We would really appreciate your
  feedback.
 
  Existing issues are the following:
 
  1) pre and post deployment hooks are still out of the scope of main
  deployment graph
  2) there is currently only puppet task provider working reliably
  3) no developer published documentation
  4) acyclic graph testing not injected into CI
  5) there is currently no opportunity to execute particular task - only
 the
  whole deployment (code is being reviewed right now)
 
  --
  Yours Faithfully,
  Vladimir Kuklin,
  Fuel Library Tech Lead,
  Mirantis, Inc.
  +7 (495) 640-49-04
  +7 (926) 702-39-68
  Skype kuklinvv
  45bk3, Vorontsovskaya Str.
  Moscow, Russia,
  www.mirantis.com
  www.mirantis.ru
  vkuk...@mirantis.com
 
 
 



 --
 Andrew
 Mirantis
 Ceph community



[openstack-dev] [Fuel] Distribution of keys for environments

2015-01-28 Thread Dmitriy Shulyak
Hi folks,

I want to discuss the way we are working with generated keys for
nova/ceph/mongo and so on.

Right now we are generating the keys on the master itself and then distributing
them over the mcollective
transport to all nodes. As you may know, we are in the process of describing
this procedure as a
task.

There are a couple of options:
1. Expose the keys via an rsync server on the master, in the folder
/etc/fuel/keys, and then copy them with an rsync task (but it feels not very
secure).
2. Copy the keys from /etc/fuel/keys on the master to /var/lib/astute on the
target nodes. It will require an additional
hook in Astute, something like copy_file, which will copy data from a file on
the master and put it on the node.

There is also a 3rd option: generate the keys right on the primary controller
and then distribute them to all the other nodes; I guess it will be the
responsibility of the controller to store the current keys that are valid for
the cluster. Alex, please provide more details about the 3rd approach.

Maybe there are more options?
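A sketch of what the copy_file hook body (option 2) could look like - the real
hook would perform the write on the target node through Astute's transport;
plain local temporary paths stand in for /etc/fuel/keys and /var/lib/astute
here:

```python
import os
import shutil
import tempfile

def copy_file(src, dst, mode=0o600):
    """Copy a generated key into place with restrictive permissions."""
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.copy(src, dst)
    os.chmod(dst, mode)  # keys should not be world-readable

# Illustrative usage with temporary paths instead of the real locations:
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "nova.key")
with open(src, "w") as f:
    f.write("SECRET")

dst = os.path.join(workdir, "astute", "nova.key")
copy_file(src, dst)
print(oct(os.stat(dst).st_mode & 0o777))  # prints 0o600
```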


Re: [openstack-dev] [Fuel] removing single mode

2015-01-27 Thread Dmitriy Shulyak
 not to prolong single mode, I'd like to see it die. However we will
 need to be able to add, change, remove, or noop portions of the tasks
 graph in the future. Many of the plugins that cant currently be built
 would rely on being able to sub out parts of the graph. How is that
 going to factor into granular deployments?


There are several ways to achieve a noop task:

1. By a condition on the task itself (the same expression parser that is used
for UI validation).
Right now we are able to add a condition like cluster:mode != multinode,
but the problem is the additional complexity of supporting different chains of
tasks, and additional refactoring in the library.
2. Skip a particular task in the deployment API call
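To show how cheap option 1 is conceptually, here is a toy evaluator for such
expressions - a stand-in for the real UI expression parser, supporting only
'==' and '!=' against a nested settings dict:

```python
def eval_condition(expression, data):
    """Evaluate 'path:to:value ==/!= literal' against nested dicts."""
    path, op, literal = expression.split()
    value = data
    for key in path.split(":"):
        value = value[key]  # walk the colon-separated path
    return value == literal if op == "==" else value != literal

cluster = {"cluster": {"mode": "ha_compact"}}
print(eval_condition("cluster:mode != multinode", cluster))  # True -> run task
print(eval_condition("cluster:mode == multinode", cluster))  # False -> noop
```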

As for plugins and add/stub-out/change - all of this is possible. There is no
plugin API for that stuff yet,
and we will need to think about what exactly we want to expose, but from the
granular deployment perspective
it is just a matter of changing the data for a particular task in the graph.


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-27 Thread Dmitriy Shulyak
On Thu, Jan 22, 2015 at 7:59 PM, Evgeniy L e...@mirantis.com wrote:

 The problem with merging is usually it's not clear how system performs
 merging.
 For example you have the next hash {'list': [{'k': 1}, {'k': 2}, {'k':
 3}]}, and I want
 {'list': [{'k': 4}]} to be merged, what system should do? Replace the list
 or add {'k': 4}?
 Both cases should be covered.

 What if we replace based on the root level? It feels sufficient to me.

Most of the users don't remember all of the keys, usually user gets the
 defaults, and
 changes some values in place, in this case we should ask user to remove
 the rest
 of the fields.

 And we are not going to force them to delete something - if all the information
is present then it is what the user actually wants.

The only solution which I see is to separate the data from the graph, not
 to send
 this information to user.

Probably I will follow the same approach that is used for repo generation,
mainly because it is quite useful for debugging - to see
how tasks are generated, but it doesn't solve two additional points:
1. Some data in Nailgun constantly becomes invalid just because we
are asking the user to overwrite everything
(the most common case is allocated IP addresses).
2. What if you only need to add some data, like in the fencing plugin? It will
mean that such a cluster is not going to be supportable;
what if we want to upgrade that cluster and a new serializer should be
used? I think there is even a warning on the UI.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-27 Thread Dmitriy Shulyak
On Tue, Jan 27, 2015 at 10:47 AM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 This is an interesting topic. As per our discussions earlier, I suggest
 that in the future we move to different serializers for each granule of our
 deployment, so that we do not need to drag a lot of senseless data into
 particular task being executed. Say, we have a fencing task, which has a
 serializer module written in python. This module is imported by Nailgun and
 what it actually does, it executes specific Nailgun core methods that
 access database or other sources of information and retrieve data in the
 way this task wants it instead of adjusting the task to the only
 'astute.yaml'.


I like this idea, and to make things easier we may provide read-only access
for plugins, but I am not sure that everyone will agree
to expose the database to distributed task serializers. It may be quite fragile
and we won't be able to change anything internally; consider
refactoring of volumes or networks.

On the other hand, we could make a single public interface for the
inventory (this is how I am calling the part of Nailgun that is responsible
for cluster information storage) and use that interface (through a REST API,
perhaps) in the component that will be responsible for deployment serialization and
execution.

Basically, what I am saying is that we need to split Nailgun into
microservices, and then reuse that API in plugins or in config generators
right in the library.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Plugins][Orchestration] Unclear handling of primary-controler and controller roles

2015-01-27 Thread Dmitriy Shulyak
Hello all,

You may know that for deployment configuration we are serializing an
additional prefix for the controller role (primary), with the goal of
deployment order control (the primary-controller should always be deployed
before secondaries) and some conditions in fuel-library code.

However, we cannot guarantee that the primary controller will always be the
same node, because it is not the business of Nailgun to control elections of
the primary. Essentially the user should not rely on Nailgun
information to find the primary, but we need to persist the node elected as primary
in the first deployment
to resolve orchestration issues (when a new node is added to the cluster we should
not mark it as primary).

So we called primary-controller an internal role, which means that it is
not exposed to users (or external developers).
But with the introduction of plugins and granular deployment, in my opinion, we
need to be able
to specify that a task should run specifically on the primary, or on secondaries.
An alternative to this approach would be to always run the task on all controllers,
and let the task itself verify whether it is executed on the primary or not.

Is it possible to have significantly different sets of tasks for controller
and primary-controller?
And the same goes for mongo, and I think we had a primary for swift also.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-22 Thread Dmitriy Shulyak
Hi guys,

I want to discuss the way we are working with deployment configuration that
was redefined for a cluster.

In case it was redefined via the API, we are using that information instead of
the generated one.
With one exception: we will generate new repo sources and the path to the manifest
if we are using update (the patching feature in 6.0).

Starting from 6.1 this configuration will be populated by tasks, which are a
part of the granular deployment
workflow, and replacement of the configuration will lead to an inability to use
the partial graph execution API.
Of course it is possible to hack around it and make it work, but IMO we need a
generic solution.

The next problem: if the user uploads replaced information, changes to cluster
attributes or networks won't be reflected in the deployment anymore, and this
constantly leads to problems for deployment engineers who are using Fuel.

What if the user wants to add data, and use the generated networks, attributes,
etc.?
- it may be required as a part of manual plugin installation (ha_fencing
requires a lot of configuration to be added into astute.yaml),
- or you need to substitute networking data, e.g. add specific parameters
for Linux bridges.

So given all this, I think that we should not substitute all information,
but only the part that is present in
the redefined info, and if there are additional parameters they will simply be
merged into the generated info.
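The root-level merge proposed here could be sketched as follows (an illustration of the semantics only, not Nailgun code):

```python
# Root-level merge sketch: keys present in the redefined info replace the
# generated value wholesale; every other generated key is kept as is.
def merge_deployment_info(generated, redefined):
    result = dict(generated)
    result.update(redefined)  # redefined wins, but only at the top level
    return result

generated = {
    "repo_sources": ["http://mirror/a"],
    "network_scheme": {"bridges": ["br-mgmt"]},
}
redefined = {"network_scheme": {"bridges": ["br-mgmt", "br-fencing"]}}

merged = merge_deployment_info(generated, redefined)
# repo_sources are kept from generated; the whole network_scheme is replaced
```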
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mirantis Openstack 5.1 environment issues

2015-01-08 Thread Dmitriy Shulyak
 1)  Verify network got failed with message Expected VLAN (not
 received) untagged at the interface Eth1 of controller and compute nodes.

 In our set-up Eth1 is connected to the public network, which we disconnect
 from public network while doing deployment operation as FUEL itself works
 as DHCP server. We want know that is this a known issue in Fuel or from our
 side, as we followed this prerequisite before doing verify network
 operation.

 The fact of the error is correct - no traffic received on eth1. But what is
the expected behaviour from your point of view?

 2)  Eth1 interface in the Fuel UI is showing as down even after
 connecting back cables into the nodes.

 Before doing openstack deployment  from Fuel node, we disconnected eth1
 from controller and compute nodes as it is connected to public network.
 Deployment was successful and then we connected back the Eth1 of all
 controller/compute nodes.  We are seeing an issue that eth1 displaying as
 down in FUEL UI, even though we connect back eth1 interface and we are able
 to ping to public network.

Probably we disabled the interface information update after the node is deployed,
and IMHO we need to open a bug for this issue.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Nailgun] Unit tests improvement meeting minutes

2014-12-01 Thread Dmitriy Shulyak
Swagger is not related to test improvement, but we started to discuss it
here, so...

@Przemyslaw, how hard will it be to integrate it with the Nailgun REST API
(web.py and the handlers hierarchy)?
Also, is there any way to use auth with Swagger?

On Mon, Dec 1, 2014 at 1:14 PM, Przemyslaw Kaminski pkamin...@mirantis.com
wrote:


 On 11/28/2014 05:15 PM, Ivan Kliuk wrote:

 Hi, team!

 Let me please present ideas collected during the unit tests improvement
 meeting:
 1) Rename class ``Environment`` to something more descriptive
 2) Remove hardcoded self.clusters[0], e.t.c from ``Environment``. Let's
 use parameters instead
 3) run_tests.sh should invoke alternate syncdb() for cases where we don't
 need to test migration procedure, i.e. create_db_schema()
 4) Consider usage of custom fixture provider. The main functionality
 should combine loading from YAML/JSON source and support fixture inheritance
 5) The project needs in a document(policy) which describes:
 - Tests creation technique;
 - Test categorization (integration/unit) and approaches of testing
 different code base
 -
 6) Review the tests and refactor unit tests as described in the test policy
 7) Mimic Nailgun module structure in unit tests
 8) Explore Swagger tool http://swagger.io/


 Swagger is a great tool, we used it in my previous job. We used Tornado,
 attached some hand-crafted code to RequestHandler class so that it
 inspected all its subclasses (i.e. different endpoint with REST methods),
 generated swagger file and presented the Swagger UI (
 https://github.com/swagger-api/swagger-ui) under some /docs/ URL.
 What this gave us is that we could just add YAML specification directly to
 the docstring of the handler method and it would automatically appear in
 the UI. It's worth noting that the UI provides an interactive form for
 sending requests to the API so that tinkering with the API is easy [1].

 [1]
 https://www.dropbox.com/s/y0nuxull9mxm5nm/Swagger%20UI%202014-12-01%2012-13-06.png?dl=0

 P.

  --
 Sincerely yours,
 Ivan Kliuk










Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-11-28 Thread Dmitriy Shulyak


- environment_config.yaml should contain exact config which will be
mixed into cluster_attributes. No need to implicitly generate any controls
like it is done now.

Initially I had the same thoughts and wanted to use it the way it is, but
now I completely agree with Evgeniy that an additional DSL will cause a lot
of problems with compatibility between versions and developer experience.
We need to search for alternatives:
1. For the UI I would prefer a separate tab for plugins, where the user will be able
to enable/disable a plugin explicitly.
Currently the settings tab is overloaded.
2. On the backend we need to validate plugins against a certain env before
enabling them,
   and for the simple case we may expose some basic entities like network_mode.
For cases where you need complex logic, Python code is far more flexible
than a new DSL.


- metadata.yaml should also contain is_removable field. This field
is needed to determine whether it is possible to remove installed plugin.
It is impossible to remove plugins in the current implementation. This
field should contain an expression written in our DSL which we already use
in a few places. The LBaaS plugin also uses it to hide the checkbox if
Neutron is not used, so even simple plugins like this need to utilize it.
This field can also be autogenerated, for more complex plugins plugin
writer needs to fix it manually. For example, for Ceph it could look like
settings:storage.volumes_ceph.value == false and
settings:storage.images_ceph.value == false.

How will the checkbox help? There are several cases of plugin removal:
1. The plugin is installed, but not enabled for any env - just remove the plugin.
2. The plugin is installed, enabled, and the cluster is deployed - forget about it for
now.
3. The plugin is installed and only enabled - we need to keep the state of the db
consistent after the plugin is removed; it is problematic, but possible.
My main point is that a plugin is enabled/disabled explicitly by the user; after
that we can decide ourselves whether it can be removed or not.


- For every task in tasks.yaml there should be added new condition
field with an expression which determines whether the task should be run.
In the current implementation tasks are always run for specified roles. For
example, vCenter plugin can have a few tasks with conditions like
settings:common.libvirt_type.value == 'vcenter' or
settings:storage.volumes_vmdk.value == true. Also, AFAIU, similar
approach will be used in implementation of Granular Deployment feature.

I had some thoughts about using a DSL; it seemed to me especially helpful
when you need to disable part of the functionality embedded into the core,
like deploying with another hypervisor, or network driver (Contrail for
example). But a DSL won't cover all cases here; this is quite similar to
metadata.yaml: simple cases can be covered by some variables in tasks (like
group, unique, etc.), but complex ones are easier to test and describe in Python.


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-27 Thread Dmitriy Shulyak
Is it possible to send HTTP requests from monit, e.g. for creating
notifications?
I scanned through the docs and found only alerts for sending mail.
Also, where will the token (username/pass) for monit be stored?

Or maybe there is another plan, without any API interaction?

On Thu, Nov 27, 2014 at 9:39 AM, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

  This I didn't know. It's true in fact, I checked the manifests. Though
 monit is not deployed yet because of lack of packages in Fuel ISO. Anyways,
 I think the argument about using yet another monitoring service is now
 rendered invalid.

 So +1 for monit? :)

 P.


 On 11/26/2014 05:55 PM, Sergii Golovatiuk wrote:

 Monit is easy and is used to control states of Compute nodes. We can adopt
 it for master node.

  --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 As for me - zabbix is overkill for one node. Zabbix Server + Agent +
 Frontend + DB + HTTP server, and all of it for one node? Why not use
 something that was developed for monitoring one node, doesn't have many
 deps and work out of the box? Not necessarily Monit, but something similar.

 On Wed, Nov 26, 2014 at 6:22 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:

 We want to monitor Fuel master node while Zabbix is only on slave nodes
 and not on master. The monitoring service is supposed to be installed on
 Fuel master host (not inside a Docker container) and provide basic info
 about free disk space, etc.

 P.


 On 11/26/2014 02:58 PM, Jay Pipes wrote:

 On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

 So then in the end, there will be 3 monitoring systems to learn,
 configure, and debug? Monasca for cloud users, zabbix for most of the
 physical systems, and sensu or monit to be small?

 Seems very complicated.

 If not just monasca, why not the zabbix thats already being deployed?


 Yes, I had the same thoughts... why not just use zabbix since it's used
 already?

 Best,
 -jay


















Re: [openstack-dev] [FUEL] Zabbix in HA mode

2014-11-26 Thread Dmitriy Shulyak
 Im working on Zabbix implementation which include HA support.

 Zabbix server should be deployed on all controllers in HA mode.

But will zabbix-server stay, and will the user be able to assign this role where
he wants?
If so, there will be no limitations on the role allocation strategy that the user
can use for a cluster.



Currently we have dedicated role 'zabbix-server', which does not support
 more
 than one zabbix-server. Instead of this we will move monitoring solution
 (zabbix),
 as an additional component.

 We will introduce additional role 'zabbix-monitoring', assigned to all
 servers with
 lowest priority in serializer (run puppet after every other roles) when
 zabbix is
 enabled.
 'Zabbix-monitoring' role will be assigned automatically

It must not be done in the orchestrator (I guess you are talking about the
serializer) via some cluster attribute or another hack.
I thought about this kind of role placement during the granular deployment
design, and it can be done in the following way:

Assign zabbix-monitoring (I like zabbix-agent more) to all servers if
zabbix-server is added to the cluster,
and then the operator should be able to remove zabbix-monitoring from some
nodes. But more importantly he will be able
to see the role-to-node placement in a very explicit manner.


Re: [openstack-dev] [Fuel] Plugins improvement

2014-11-24 Thread Dmitriy Shulyak
I tried to reproduce this behavior with tasks.yaml:

# Deployment is required for controllers
- role: ['controller']
  stage: post_deployment
  type: puppet
  parameters:
    puppet_manifest: site.pp
    puppet_modules: puppet/:/etc/puppet/modules
    timeout: 360

And actually the plugin was built successfully, so as Tatyana and Alex said,
the problem is not with the puppet_modules format.

I would suggest updating fuel-plugin-builder, and if this issue is
reproduced, you can show your plugin on a Gerrit review or personal GitHub,
and we can try to build it.


On Mon, Nov 24, 2014 at 1:05 PM, Tatyana Leontovich 
tleontov...@mirantis.com wrote:

 Guys,
 task like
 - role: ['controller']
 stage: post_deployment
 type: puppet
 parameters:
 puppet_manifest: puppet/site.pp
 puppet_modules: puppet/modules/
 timeout: 360
 works fine for me, so  I believe your task should looks like

 cat tasks.yaml
 # This tasks will be applied on controller nodes,
 # here you can also specify several roles, for example
 # ['cinder', 'compute'] will be applied only on
 # cinder and compute nodes
 - role: ['controller']
   stage: post_deployment
   type: puppet
   parameters:
 puppet_manifest: install_keystone_ldap.pp
 puppet_modules: /etc/puppet/modules/

 And be sure that install_keystone_ldap.pp is the one that invokes the other manifests

 Best,
 Tatyana

 On Mon, Nov 24, 2014 at 12:49 PM, Dmitry Ukov du...@mirantis.com wrote:

 Unfortunately this does not work

 cat tasks.yaml
 # This tasks will be applied on controller nodes,
 # here you can also specify several roles, for example
 # ['cinder', 'compute'] will be applied only on
 # cinder and compute nodes
 - role: ['controller']
   stage: post_deployment
   type: puppet
   parameters:
 puppet_manifest: install_keystone_ldap.pp
 puppet_modules: puppet/:/etc/puppet/modules/


 fpb --build .
 /home/dukov/dev/.plugins_ldap/local/lib/python2.7/site-packages/pkg_resources.py:1045:
 UserWarning: /home/dukov/.python-eggs is writable by group/others and
 vulnerable to attack when used with get_resource_filename. Consider a more
 secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE
 environment variable).
   warnings.warn(msg, UserWarning)
 2014-11-24 13:48:32 ERROR 15026 (cli) Wrong value format 0 -
 parameters, for file ./tasks.yaml, {'puppet_modules':
 'puppet/:/etc/puppet/modules/', 'puppet_manifest':
 'install_keystone_ldap.pp'} is not valid under any of the given schemas
 Traceback (most recent call last):
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py,
 line 90, in main
 perform_action(args)
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py,
 line 77, in perform_action
 actions.BuildPlugin(args.build).run()
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py,
 line 42, in run
 self.check()
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py,
 line 99, in check
 self._check_structure()
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py,
 line 111, in _check_structure
 ValidatorManager(self.plugin_path).get_validator().validate()
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py,
 line 39, in validate
 self.check_schemas()
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py,
 line 46, in check_schemas
 self.validate_file_by_schema(v1.TASKS_SCHEMA, self.tasks_path)
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py,
 line 47, in validate_file_by_schema
 self.validate_schema(data, schema, path)
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py,
 line 43, in validate_schema
 value_path, path, exc.message))
 ValidationError: Wrong value format 0 - parameters, for file
 ./tasks.yaml, {'puppet_modules': 'puppet/:/etc/puppet/modules/',
 'puppet_manifest': 'install_keystone_ldap.pp'} is not valid under any of
 the given schemas


 On Mon, Nov 24, 2014 at 2:34 PM, Aleksandr Didenko adide...@mirantis.com
  wrote:

 Hi,

 according to [1] you should be able to use:

 puppet_modules: puppet/:/etc/puppet/modules/

 This is valid string yaml parameter that should be parsed just fine.

 [1]
 https://github.com/stackforge/fuel-web/blob/master/tasklib/tasklib/actions/puppet.py#L61-L62

 Regards
 --
 Alex


 On Mon, Nov 24, 2014 at 12:07 PM, Dmitry Ukov du...@mirantis.com
 wrote:

 Hello All,
 Current implementation of plugins in Fuel unpacks plugin tarball
 into /var/www/nailgun/plugins/.
 If we implement deployment part of plugin using puppet there is a
 setting
 puppet_modules:

 This setting should specify path to modules folder. As soon as main
 deployment part of plugin is implemented as a 

Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-21 Thread Dmitriy Shulyak
  I have nothing against using some 3rd party service. But I thought this
 was to be small -- disk monitoring only  notifying the user, not stats
 collecting. That's why I added the code to Fuel codebase. If you want
 external service you need to remember about such details as, say, duplicate
 settings (database credentials at least) and I thought this was an overkill
 for such simple functionality.


Yes, it will be much more complex than a simple daemon that creates
notifications, but our application is operating in isolated containers, and
most of the resources can't be discovered from any particular container. So
if we want to extend it with another task, like monitoring the pool of
DHCP addresses, we will end up with some kind of server-agent architecture,
and that is a lot of work to do.

Also, for a 3rd party service, notification injecting code still needs to
 be written as a plugin -- that's why I also don't think Ruby is a good idea
 :)

AFAIK there is a way to write Python plugins for Sensu, but if there is a
monitoring app in Python that has friendly support for extensions, I am
+1 for Python.


 So in the end I don't know if we'll have that much less code with a 3rd
 party service. But if you want a statistics collector then maybe it's OK.

I think that a monitoring application fits there, and we are kind of already
reinventing our own wheel for collecting
statistics from OpenStack. I would like to know what the guys who were working on
stats in 6.0 think about it. So it is TBD.


[openstack-dev] [Fuel] Order of network interfaces for bootstrap nodes

2014-11-20 Thread Dmitriy Shulyak
Hi folks,

There was interesting research today on random NIC ordering for nodes in
the bootstrap stage, and in my opinion it requires a separate thread...
I will try to describe what the problem is and several ways to solve it.
Maybe I am missing the simple way; if you see it, please participate.
Link to the LP bug: https://bugs.launchpad.net/fuel/+bug/1394466

When a node is booted for the first time it registers its interfaces in Nailgun;
see a sample of the data (only the parts related to the discussion):
- name: eth0
  ip: 10.0.0.3/24
  mac: 00:00:03
- name: eth1
  ip: None
  mac: 00:00:04
* eth0 is admin network interface which was used for initial pxe boot

We have networks; for simplicity let's assume there are 2:
 - admin
 - public
When the node is added to a cluster, in general you will see the following schema:
- name: eth0
  ip: 10.0.0.3/24
  mac: 00:00:03
  networks:
    - admin
    - public
- name: eth1
  ip: None
  mac: 00:00:04

At this stage the node is still using the default system with the bootstrap
profile, so there is no custom system with udev rules. And on the next reboot
there is no way to guarantee that network cards will be discovered by the kernel
in the same order. If network cards are discovered in an order that is different
from the original and the NIC configuration is updated, it is possible to end up with:
- name: eth0
  ip: None
  mac: 00:00:04
  networks:
    - admin
    - public
- name: eth1
  mac: 00:00:03
  ip: 10.0.0.3/24
Here you can see that the networks are left connected to eth0 (in the db). And
of course this schema doesn't reflect the physical infrastructure. I hope it is
clear now what the problem is.
If you want to investigate it yourself, please find the db dump in the snapshot
attached to the bug; you will be able to find the case described here.
What happens next:
1. netcfg/choose_interface for the kernel is misconfigured, and in my example
it will be 00:00:04, but should be 00:00:03
2. the network configuration for l23network will simply be corrupted

So - possible solutions:
1. Reflect node interfaces ordering, with networks reassignment - Hard and
hackish
2. Do not update any interfaces info if networks assigned to them, then
udev rules will be applied and nics will be reordered into original state -
i would say easy and reliable solution
3. Create cobbler system when node is booted first time, and add udev rules
- it looks to me like proper solution, but requires design

Please share your thoughts/ideas, afaik this issue is not rare on scale
deployments.
Thank you


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-20 Thread Dmitriy Shulyak
Guys, maybe we can use existing software, for example Sensu [1]?
Maybe I am wrong, but I don't like the idea of starting to write our own small
monitoring applications.
Also, something well designed and extendable can be reused for the statistics
collector.


1. https://github.com/sensu

On Wed, Nov 12, 2014 at 12:47 PM, Tomasz Napierala tnapier...@mirantis.com
wrote:


 On 06 Nov 2014, at 12:20, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

  I didn't mean a robust monitoring system, just something simpler.
 Notifications is a good idea for FuelWeb.

 I’m all for that, but if we add it, we need to document ways to clean up
 space.
 We could also add some kind of simple job to remove rotated logs, obsolete
 spanshots, etc., but this is out of scope for 6.0 I guess.

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com









Re: [openstack-dev] [Fuel] Order of network interfaces for bootstrap nodes

2014-11-20 Thread Dmitriy Shulyak


 When the interfaces are updated with data from the agent we attempt to
 match the MAC to an existing interface (
 https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/network/manager.py#L682-L690).
 If that doesn't work we attempt to match by name. Looking at the data that
 comes from the agent the MAC is always capitalized while in the database
 it's lower-case. It seems like checking the MAC will fail and we'll fall
 through to matching by name.


 Thank you! I think it is correct, and I made the problem more complicated
than it is ))
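For illustration, the fix implied above is just case normalization before comparison (a sketch, not the actual manager.py code):

```python
# If the agent reports MACs upper-case while the DB stores them lower-case,
# a plain equality check never matches and the code falls through to matching
# by the (unstable) interface name. Normalizing both sides avoids that.
def find_interface_by_mac(db_interfaces, reported_mac):
    wanted = reported_mac.lower()
    for iface in db_interfaces:
        if iface["mac"].lower() == wanted:
            return iface
    return None

db = [{"name": "eth0", "mac": "00:25:90:6a:b1:10"}]
match = find_interface_by_mac(db, "00:25:90:6A:B1:10")  # matches despite case
```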


[openstack-dev] [Fuel] Test runner for python tests and parallel execution

2014-11-07 Thread Dmitriy Shulyak
Hi guys,
A long time ago I made a patch [1] which added test distribution between
processes and databases. It was a simple py.test configuration which allows
us to reduce test execution time almost linearly; on my local machine
one test run (distributed over 4 cores) takes 250 seconds.

At that time the idea of using py.test was discarded, because:
1. it is neither nosetests
2. nor the OpenStack community way (testrepository)

There is a plugin for nosetests which adds multiprocessing support (maybe it
is even included in the default distribution), but I wasn't able to find a normal
way of distributing over databases, just because the runner does not include
the necessary config options like RUNNER_ID. I can't stop you
from trying - so please share your results if you find a nice and
easy way to make it work.

As for testrepository - if you have positive experience using this tool,
please share it; from my point of view it has very bad UX.

Please consider trying py.test [2]; I bet you will notice the difference in
reporting, and maybe you will use it yourself for day-to-day test execution.
Additionally there is a very good
system for parametrizing tests and writing extensions.

The goal of this letter is to solve the problem of CI queues for the fuel-web
project, so please
share your opinions. It would be nice to solve this at the start of next
week.

[1] https://review.openstack.org/#/c/82284/3/nailgun/conftest.py
[2] http://pytest.readthedocs.org/en/2.1.0/
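For reference, the per-process database trick can be sketched in a few lines of conftest.py, assuming pytest-xdist is in use; its PYTEST_XDIST_WORKER environment variable plays the role of the missing RUNNER_ID, and NAILGUN_DB is a hypothetical settings hook, not a real Nailgun variable:

```python
# conftest.py sketch: give every py.test worker process its own database.
# Assumes tests run under pytest-xdist (py.test -n 4); PYTEST_XDIST_WORKER
# is the env variable xdist exports per worker (gw0, gw1, ...). With no
# workers everything falls back to a single default database.
import os

def database_name(base="nailgun_test"):
    worker = os.environ.get("PYTEST_XDIST_WORKER", "gw0")
    return "%s_%s" % (base, worker)

def pytest_configure(config):
    # Point the application settings at the per-worker database before any
    # test imports the DB layer. NAILGUN_DB is illustrative only.
    os.environ["NAILGUN_DB"] = database_name()
```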


[openstack-dev] [Fuel] Power management in Cobbler

2014-11-04 Thread Dmitriy Shulyak
Not long ago we discussed the necessity of a power management feature in
Fuel.

What is your opinion on power management support in Cobbler? I took a look
at the documentation [1] and templates [2] that we have right now,
and it actually looks like we can make use of it.

The only issue is that the power address that the Cobbler system is configured
with is wrong,
because the provisioning serializer uses the one reported by bootstrap, but it
can be easily fixed.

Of course another question is a separate network for power management, but we
can live with
admin for now.

Please share your opinions on this matter. Thanks

[1] http://www.cobblerd.org/manuals/2.6.0/4/5_-_Power_Management.html
[2] http://paste.openstack.org/show/129063/


[openstack-dev] [Fuel] About deployment progress calculation

2014-10-28 Thread Dmitriy Shulyak
Hello everyone,

I want to raise concerns about the progress bar and its usability.
In my opinion the current approach has several downsides:
1. No valuable information
2. Very fragile - you need to change code in several places not to break it
3. Will not work with pluggable code

Log parsing works under one basic assumption - that we are in control of
all tasks,
so we can map them to logs with a certain pattern.
It won't work with a pluggable architecture, and I am talking not about
fuel-plugins and the
way it will be done in 6.0, but about the whole idea of a pluggable
architecture;
I assume that internal features will be implemented as granular,
self-contained plugins,
and it will be possible to accomplish this not only with puppet, but with any
other tool that suits you.
Asking the person who provides a plugin (extension) to add log mappings
feels like the weirdest thing ever.

*What can be done to improve usability of progress calculation?*
I see several requirements here:
1. Provide valuable information:
  - Correct representation of the time that a task takes to run
  - What is going on on the target node at any point of the deployment?
2. Plugin friendly - the approach we take should be flexible
and extendable

*Implementation:*
In the near future deployment will be split into tasks; they will be
big, not granular
(like deploy controller, deploy compute), but this does not matter, because
we can start to estimate them.
Each task will provide an estimated time.
At first it will be set manually by the person who develops the plugin
(tasks), but this can be improved,
so that the information is provided automatically (or semi-automatically) by
the fuel-stats application.
It will require the orchestrator to report 2 simple entities:
- time delta of the task
- task identity
UI will be able to show percents anyway, but additionally it will show what
is running on the target node.
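A minimal sketch of the estimate-based calculation (task names and estimates
below are invented; in reality they would come from plugin metadata or
fuel-stats):

```python
def progress(estimates, done, current=None, elapsed=0.0):
    """estimates: mapping of task id -> estimated seconds;
    done: ids of finished tasks; current/elapsed: running task state.
    Returns (percent, current task id) - enough for the UI to show
    a percentage plus "what is running on the node right now".
    """
    total = sum(estimates.values())
    spent = sum(estimates[t] for t in done)
    if current is not None:
        # cap the running task by its own estimate, so one slow task
        # never pushes the bar past its allotted share
        spent += min(elapsed, estimates[current])
    return round(100.0 * spent / total, 1), current


# invented estimates; at first set by plugin authors, later refined
# from fuel-stats data
estimates = {"upload_astute": 30, "deploy_controller": 600,
             "deploy_compute": 270}
pct, running = progress(estimates, done=["upload_astute"],
                        current="deploy_controller", elapsed=300)
```

The two reported entities (task identity and time delta) are all this needs,
so no log mappings are required from plugin authors.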

Of course it is not about 6.0, but please take a look, and let's try to agree
on the right way to solve this task, because log parsing will not work
with a data-driven
orchestrator and a pluggable architecture.
Thank you


Re: [openstack-dev] [Fuel] Fuel standards

2014-10-28 Thread Dmitriy Shulyak

 Let's do the same for Fuel. Frankly, I'd say we could take OpenStack
 standards as is and use them for Fuel. But maybe there are other opinions.
 Let's discuss this and decide what to do. Do we actually need those
 standards at all?

Agree that we can take the OpenStack standards as an example, but let's not
simply copy them and just live with it.


 0) Standard for projects naming.
 Currently most of Fuel projects are named like fuel-whatever or even
 whatever? Is it ok? Or maybe we need some formal rules for naming. For
 example, all OpenStack clients are named python-someclient. Do we need to
 rename fuelclient into python-fuelclient?

I don't like that fuel is added to every project that we start; correct me
if I am wrong, but:
- shotgun can be a self-contained project and still provide certain value;
actually I think it can be used by jenkins in our and openstack gates
  to copy logs and other info
- same for the network verification tool
- fuel_agent (image based provisioning) can work without all other fuel
parts


 1) Standard for an architecture.
 Most of OpenStack services are split into several independent parts
 (raughly service-api, serivce-engine, python-serivceclient) and those parts
 interact with each other via REST and AMQP. python-serivceclient is usually
 located in a separate repository. Do we actually need to do the same for
 Fuel? According to fuelclient it means it should be moved into a separate
 repository. Fortunately, it already uses REST API for interacting with
 nailgun. But it should be possible to use it not only as a CLI tool, but
 also as a library.

 2) Standard for project directory structure (directory names for api, db
 models,  drivers, cli related code, plugins, common code, etc.)
 Do we actually need to standardize a directory structure?

Well, we need to pick some project, agree on that project's structure and
then just provide it as an example during review.
We can choose:
- fuelclient as the CLI example (but refactor it first)
- fuel-stats as the web app example

 3) Standard for third party libraries
 As far as Fuel is a deployment tool for OpenStack, let's make a decision
 about using OpenStack components wherever it is possible.
 3.1) oslo.config for configuring.
 3.2) oslo.db for database layer
 3.3) oslo.messaging for AMQP layer
 3.4) cliff for CLI (should we refactor fuelclient so as to make based on
 cliff?)
 3.5) oslo.log for logging
 3.6) stevedore for plugins
 etc.
 What about third party components which are not OpenStack related? What
 could be the requirements for an arbitrary PyPi package?

In my opinion we should not pick a library just because it is used in
openstack; there should be some research and analysis.
For example:
For a CLI application, there are several popular alternatives to cliff in
the Python community:
- https://github.com/docopt/docopt
- https://github.com/mitsuhiko/click
I personally would prefer docopt, but click looks good as well.
Web frameworks are a whole different story: in the Python community we have
the mature flask and pyramid,
and I don't see any benefits from using pecan.


[openstack-dev] [Fuel] Generic descriptive format for deployment tasks

2014-10-10 Thread Dmitriy Shulyak
Hi team,
After several discussions I want to propose a generic format
for describing deployment tasks. This format is expected to cover
all tasks (e.g. pre-deployment and post-deployment); it should also cover
different actions like upgrade/patching:

action: upload_file
id: upload_astute
roles: *
parameters:
  input: $.data  # this is an internal mistral thing
  timeout: 50

action: tasklib
id: generate_keys
stages: [pre-deployment]
roles: master
parameters:
  timeout: 60
  command: generate/keys
  type: puppet
  manifest: /etc/puppet/manifests/key_generator.pp

action: tasklib
id: rsync_puppet
stages: [pre-node]
requires: [upload_astute]
parameters:
  timeout: 100
  command: rsync/stuff
  type: shell
  cmd: python rsync.py

action: tasklib
id: ceph
roles: [ceph-osd, ceph-mon]
requires: [rsync_puppet]
parameters:
  timeout: 100
  command: deployment/ceph
  type: puppet
  manifest: /etc/puppet/manifests/ceph.pp

action: tasklib
id: glance/image
roles: [controller, primary-controller]
stages: [post-deployment]
parameters:
  timeout: 100
  command: deployment/glance/image
  type: shell
  cmd: python upload_image.py

Let me provide some clarifications:
1. As an example, we want to generate keys strictly before deployment, and
the first way to solve
it is to introduce the concept of stages, e.g. pre-deployment, main, upgrade,
post-deployment.
Another one would be to use virtual roles and/or virtual tasks like
deployment_started, deployment_ended.
We need to consider both, but stages are what we are using now, and I am
just trying to generalize this and make it data-driven.
2. Another internal thing is roles. As you can see in these configs there are
two
specific keywords for roles:
* - means the task should be executed on all roles
master - the task should be executed only on the fuel master node
All other roles should be entities from fuel. If you know other exceptions
- let's discuss them.

I would like to ask the team for 2 things:
1. Examine the approach and ask questions about any specific tasks, in order
to test
this approach for sanity.
2. If you think that some keyword in the configuration is not
appropriate, let's discuss it. For example, we may not agree on the term
stage, because it is too widely used and basically we need another one.
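To sanity-check the format, here is a tiny sketch of how an orchestrator
could order such tasks by their `requires` lists (the dicts mirror the YAML
above; cycle detection is omitted for brevity):

```python
def ordered(tasks):
    """Topologically order tasks so that every entry in a task's
    `requires` list runs before the task itself. Assumes no cycles;
    a real implementation would detect and report them."""
    by_id = {t["id"]: t for t in tasks}
    seen, result = set(), []

    def visit(task):
        if task["id"] in seen:
            return
        seen.add(task["id"])
        for dep in task.get("requires", []):
            visit(by_id[dep])
        result.append(task["id"])

    for t in tasks:
        visit(t)
    return result


# the same dependency chain as in the YAML examples above
tasks = [
    {"id": "ceph", "requires": ["rsync_puppet"], "roles": ["ceph-osd"]},
    {"id": "rsync_puppet", "requires": ["upload_astute"]},
    {"id": "upload_astute", "roles": "*"},
]
order = ordered(tasks)
```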


Re: [openstack-dev] [Fuel] Cinder/Neutron plugins on UI

2014-10-08 Thread Dmitriy Shulyak
If there are no checkboxes (read: configuration) and the plugin is installed,
all deployment tasks will be applied
to every environment - but why do you think that there will be no checkboxes
in most cases?

Right now we already have something like 2 types of plugins (extensions),
classified
by their usage of the settings tab:
1. Some kind of backend for a service (swift/ceph, lvm/ceph, ovs/nsx), or a
hypervisor (lvm/qemu/vmware)
2. A self-contained service that just needs to be installed (sahara, murano,
zabbix)

In the 1st case you need to provide shared configuration storage (like
cluster attributes right now), in order for the plugin
to be able to exclude part of the core workflow from running (not installing
swift, for example).
In case the plugin is a self-contained entity, like Sahara or Murano right
now, checkboxes would simply be required.
It works this way right now, and it does not look like huge overhead.

So what do you think - will it work or not?

On Wed, Oct 8, 2014 at 8:42 AM, Nikolay Markov nmar...@mirantis.com wrote:

 Hi,

 Frankly speaking, I'm not sure on how 1st approach will even work.
 What if plugin doesn't provide any checkboxes (and in most cases it
 won't)? How should we determine in serializer, which plugins should be
 applied while generating astute.yaml and tasks.yaml? Should we
 autogenerate some stuff for plugins which are not even enabled and do
 needless work?

 This looks too complicated for me from the backend side, and option
 with enabling/disabling plugins in wizard for specific environment (we
 can invent mechanism to disable them on env which is not deployed yet,
 besides, for API it's just one PUT) is MUCH simpler and much more
 obvious, as I see.



 On Wed, Oct 8, 2014 at 8:34 AM, Vitaly Kramskikh
 vkramsk...@mirantis.com wrote:
  Hi,
 
  I would go with the 1st approach. The thing I don't like in the 2nd
 approach
  is that we have to make the user enable plugin twice. For example, we
 have
  to enable Ceph as a plugin and then add Ceph role to nodes and choose
 what
  we want to store in Ceph (images, objects). Why we would need to
 explicitly
  enable Ceph plugin? Let's always show plugin options in wizard and
 settings
  tab, and if the user just doesn't want to enable Ceph, he would just
 leave
  all the checkboxes unchecked. The 2nd approach would also lead to some
 kind
  of inconsistency in case the user enabled Ceph plugin but left all the
  Ceph-related checkboxes unchecked and didn't add Ceph nodes.
 
  2014-10-07 21:17 GMT+07:00 Evgeniy L e...@mirantis.com:
 
  Hi,
 
  We had a meeting today about plugins on UI, as result of the meeting
  we have two approaches and this approaches affect not only UX but
  plugins itself.
 
  1st - disable/enable plugin on settings tab
 
  user installs the plugin
  creates a cluster
  configures and enables/disables plugins on settings tab
 
  For user it will look like Ceph plugin checkboxes on settings tab,
  if he enables checkbox, then we pass the parameter to orchestrator
  as `true`.
 
  Cons:
 
  plugin developer should define a checkbox in each plugin (for plugin
  disabling/enabling)
  on the backend we have to enable all of the plugins for environment,
  because user can define any name for his checkbox and we won't be able
 to
  find it and make appropriate mapping plugin - env
  since all of the plugins are always enabled we have to run tasks for
 all
  of the plugins, and each plugin should parse astute.yaml in order to
 figure
  out if it's required to run task current script
 
  Pros:
 
  it won't require additional setting or step for wizard
  user will be able to disable plugin after environment creation
 
  2nd - enable plugins in wizard
 
  user installs the plugin
  now he can choose specific plugins for his environment in wizard
  after cluster is created, he can configure additional parameters on
  settings tab, if plugin provides any
 
  Cons:
 
  user won't be able to disable plugin after cluster is created
  additional step or configuration subcategory in wizard
 
  Pros:
 
  On backend we always know which plugin is disabled and which is enabled.
 
  it means we don't provide settings for plugins which are disabled
  we don't run tasks on slaves if it's not required
 
  Thanks,
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Vitaly Kramskikh,
  Software Engineer,
  Mirantis, Inc.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Best regards,
 Nick Markov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Cluster reconfiguration scenarios

2014-10-07 Thread Dmitriy Shulyak
Hi folks,
I want to discuss cluster reconfiguration scenarios; I am aware of 2 such
bugs:

- ceph-mon is not installed on controllers if the cluster was initially
deployed without ceph-osd
- the config with rabbitmq hosts is not updated on non-controller nodes after
an additional controller is added to the cluster [1]

In both cases we need to track node state and change it according to some
event
(additional ceph-osd, additional controller added to the cluster, etc.).
I think that this is a generic scenario and our API should support such
modifications.

To track the state of a node we need to introduce a new state - something
along the lines of requires_update.
And extend the deployment selection logic to include nodes with this state
when a deploy action is invoked.

What do you think about such a feature? I would be grateful for any other
cases.

[1] https://bugs.launchpad.net/fuel/+bug/1368445
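A sketch of the extended selection logic (the other status names here are
assumptions, used only to illustrate the idea):

```python
def nodes_for_deployment(nodes):
    """Select nodes for the next deploy action: nodes pending
    deployment as today, plus ready nodes that some cluster event
    flagged as requires_update."""
    return [n for n in nodes
            if n["status"] in ("pending_deployment", "requires_update")]


def on_controller_added(nodes):
    # example event handler: an added controller means the rabbitmq
    # hosts config must be refreshed on every already-deployed node
    for n in nodes:
        if n["status"] == "ready":
            n["status"] = "requires_update"


nodes = [{"status": "ready"},
         {"status": "pending_deployment"},
         {"status": "error"}]
on_controller_added(nodes)
selected = nodes_for_deployment(nodes)
```

The explicit state also gives the operator visibility into *why* a node is
being touched again, which dynamic recomputation would not.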


Re: [openstack-dev] [Fuel] Cluster reconfiguration scenarios

2014-10-07 Thread Dmitriy Shulyak
We are definitely able to parse all this information at deployment time and
generate deployment info
accordingly, but my idea was that an additional status would provide more
visibility for the operator.
Otherwise it just won't be obvious: you added controllers/ceph-osd and
suddenly your
computes/controllers are in deployment mode.

On Tue, Oct 7, 2014 at 6:17 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 I'm not sure if we should add the new state in this case, it looks like
 you can get this
 information dynamically, you already have the state of env which tells you
 that
 there are new ceph nodes, and there are no ready ceph nodes in the cluster
 hence you should install ceph-mon on the controllers.

 The same for rabbitmq, if there is new controller, run rabbit
 reconfiguration on
 non-controllers nodes.

 Thanks,

 On Tue, Oct 7, 2014 at 6:14 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi folks,
 I want to discuss cluster reconfiguration scenarios, i am aware of 2 such
 bugs:

 - ceph-mon not installed on controllers if cluster initially was deployed
 without ceph-osd
 - config with rabbitmq hosts not updated on non-controlles nodes after
 additional controllers is added to cluster [1]

 In both cases we need to track node state and change it accordingly to
 some event
 (additonal ceph-osd, additional controller added to cluster, etc..).
 I think that it is generic scenario and our api should support such
 modifications.

 To track state of node we need to introduce new state - something in
 lines of requires_update.
 And extend deployment selection logic to include nodes with this state,
 if deploy action will be invoked.

 What do you think about such feature? I would be grateful for any other
 cases.

 [1] https://bugs.launchpad.net/fuel/+bug/1368445


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [mistral] [fuel] Executor task affinity

2014-10-02 Thread Dmitriy Shulyak
Hi,

As I understand it, you want to store some mappings of tags to hosts in a
database, but then you need to sort out an API
for registering hosts and/or a discovery mechanism for such hosts. It is
quite complex.
It may be useful, but in my opinion it would be better to have a simpler,
more flexible variant.

For example:

1. Provide targets in workbook description, like:

task:
  targets: [nova, cinder, etc]

2. Get targets from execution contexts by using yaql:

task:
  targets: $.uids

task:
  targets: [$.role, $.uid]

In this case all simple relations will be covered by the AMQP routing
configuration.
What do you think about such an approach?
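A toy sketch of the second variant - resolving `targets` entries against the
execution context. The `$.` prefix here just mimics YAQL-style lookups; this
is not a real YAQL evaluator:

```python
def resolve_targets(targets, context):
    """Expand each entry: `$.key` is looked up in the execution
    context (list values are flattened), anything else is kept as
    a literal routing key such as `nova` or `cinder`."""
    resolved = []
    for t in targets:
        if t.startswith("$."):
            value = context[t[2:]]
            resolved.extend(value if isinstance(value, list) else [value])
        else:
            resolved.append(t)
    return resolved


# e.g. `targets: [$.uids, master]` with uids taken from the context
hosts = resolve_targets(["$.uids", "master"], {"uids": ["1", "2"]})
```

Each resolved entry can then map directly to an AMQP routing key, so no host
registry or discovery API is needed on the Mistral side.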

On Thu, Oct 2, 2014 at 11:35 AM, Nikolay Makhotkin nmakhot...@mirantis.com
wrote:

 Hi, folks!

 I drafted the document where we can see how task affinity will be applied
 to Mistral:


 https://docs.google.com/a/mirantis.com/document/d/17O51J1822G9KY_Fkn66Ul2fc56yt9T4NunnSgmaehmg/edit

 --
 Best Regards,
 Nikolay
 @Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [Fuel] Plugable solution for running abstract commands on nodes

2014-09-10 Thread Dmitriy Shulyak
Hi folks,

Some of you may know that there is ongoing work to achieve a kind of
data-driven orchestration
for Fuel. If this is new to you, please get familiar with the spec:

https://review.openstack.org/#/c/113491/

Knowing that running an arbitrary command on nodes will probably be the most
used type of
orchestration extension, I want to discuss our solution to this problem.

A plugin writer will need to do two things:

1. Provide a custom task.yaml (I am using /etc/puppet/tasks, but this is
completely configurable;
we just need to reach agreement)

  /etc/puppet/tasks/echo/task.yaml

  with the following content:

   type: exec
   cmd: echo 1

2. Provide control plane with orchestration metadata

/etc/fuel/tasks/echo_task.yaml

controller:
  - task: echo
    description: Simple echo for you
    priority: 1000
compute:
  - task: echo
    description: Simple echo for you
    priority: 1000

This is done in order to separate the concerns of orchestration logic and
tasks.

From the plugin writer's perspective it would be far more usable to provide
the exact command in the orchestration metadata itself, like:

/etc/fuel/tasks/echo_task.yaml

controller:
  - task: echo
    description: Simple echo for you
    priority: 1000
    cmd: echo 1
    type: exec

compute:
  - task: echo
    description: Simple echo for you
    priority: 1000
    cmd: echo 1
    type: exec

I would prefer to stick to the first, because there are benefits to using
one interface for all task executors (puppet, exec, maybe chef), which
will improve the debugging and development process.
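To show why one interface between executors helps, here is a sketch of a
loader that joins the orchestration metadata with the task definition by task
name (the in-memory registry below stands in for reading the task.yaml files
from /etc/puppet/tasks; the key names mirror the examples above):

```python
# stands in for /etc/puppet/tasks/<name>/task.yaml files; whatever
# `type`-specific keys a task needs live here, not in the metadata
TASKS = {
    "echo": {"type": "exec", "cmd": "echo 1"},
}


def build_task(meta):
    """meta is one role entry from /etc/fuel/tasks/*.yaml, e.g.
    {"task": "echo", "priority": 1000}. The executor always
    receives the same merged shape, whatever the task type."""
    definition = dict(TASKS[meta["task"]])  # copy, don't mutate registry
    definition["priority"] = meta.get("priority", 1000)
    return definition


task = build_task({"task": "echo", "priority": 1000})
```

With this split, debugging a task means looking in exactly one place for its
executable definition, regardless of which plugin shipped it.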

So my question is: is the first good enough? Or is the second an essential
type of plugin to support?

If you want additional implementation details check:
https://review.openstack.org/#/c/118311/
https://review.openstack.org/#/c/113226/


[openstack-dev] [Fuel] Using host networking for docker containers

2014-08-09 Thread Dmitriy Shulyak
Hi team,

I want to discuss the benefits of using host networking [1] for docker
containers on the master node.

This feature was added in docker 0.11 and basically means reusing the host
networking stack, without
creating a separate namespace for each container.

In my opinion it will result in a much more stable install/upgrade of the
master node.

1. There will be no need for dhcrelay/dhcrelay_monitor on the host
2. No DNAT port forwarding
3. Possibly a performance improvement for PXE boot?

Are there any real benefits of using separate namespaces in security terms?

To implement this we will need:

1. Update docker to a recent version (0.12/1.x) - we will do that anyway,
yes?
2. Run docker containers with --net=host

Of course it will require running containers in privileged mode, but afaik
we are already doing this for other reasons.

So, what do you think?

[1] https://github.com/docker/docker/issues/2012
[2] https://docs.docker.com/articles/networking/


Re: [openstack-dev] [Fuel] Upgrade of netchecker/mcagents during OpenStack patching procedure

2014-07-24 Thread Dmitriy Shulyak
Hi,

1. There are several incompatibilities between the network checker in 5.0 and
5.1,
mainly caused by the introduction of multicast verification.
An issue with additional release information, which is easy to resolve by
excluding multicast on 5.0 environments:
[1] https://bugs.launchpad.net/fuel/+bug/1342814
An issue with running network verification on an old bootstrap and a newly
created
5.1 environment:
[2] https://bugs.launchpad.net/fuel/+bug/1348130
There is no easy way to fix it, so I will probably disable multicast for
now.

The other issue is about bugs that were fixed in mcagents, the network
checker and the nailgun agent.

2. It can be done with some hacks in OSTF



On Thu, Jul 24, 2014 at 1:58 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 I want to discuss here several bugs

 1. Do we want to upgrade (deliver new packages) for netchecker and
 mcagents? [1]

 If yes, then we have to add a list of packages which are
 installed on provisioning stage (e.g. netchecker/mcagent/something else)
 in puppet, to run patching for this packages.

 2. After master node upgrade (from 5.0 to 5.0.1/5.1) Murana test is still
 disabled [2]
 for old clusters, despite the fact that the test was fixed in 5.0.1/5.1
 and works on
 clusters which were created after upgrade.

 I'm not an expert in OSTF, are there any suggestions how to fix it? Who we
 can
 assign this bug to?

 [1] https://bugs.launchpad.net/fuel/+bug/1343139
 [2] https://bugs.launchpad.net/fuel/+bug/1337823

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Fuel] [OSTF] OSTF stops working after password is changed

2014-06-25 Thread Dmitriy Shulyak
Looks like we will stick to option #2, as the most reliable one.

- we have no way to know that openrc has changed; even if some scripts
rely on it, OSTF should not fail with an auth error
- we can create an OSTF user in the post-deployment stage, but I heard that
some ceilometer tests relied on the admin user; also, an
operator may not want to create an additional user, for some reasons

So, is everybody ok with additional fields on the HealthCheck tab?




On Fri, Jun 20, 2014 at 8:17 PM, Andrew Woodward xar...@gmail.com wrote:

 The openrc file has to be up to date for some of the HA scripts to
 work, we could just source that.

 On Fri, Jun 20, 2014 at 12:12 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  +1 for #2.
 
  ~Sergii
 
 
  On Fri, Jun 20, 2014 at 1:21 AM, Andrey Danin ada...@mirantis.com
 wrote:
 
  +1 to Mike. Let the user provide actual credentials and use them in
 place.
 
 
  On Fri, Jun 20, 2014 at 2:01 AM, Mike Scherbakov
  mscherba...@mirantis.com wrote:
 
  I'm in favor of #2. I think users might not want to have their password
  stored in Fuel Master node.
  And if so, then it actually means we should not save it when user
  provides it on HealthCheck tab.
 
 
  On Thu, Jun 19, 2014 at 8:05 PM, Vitaly Kramskikh
  vkramsk...@mirantis.com wrote:
 
  Hi folks,
 
  We have a bug which prevents OSTF from working if user changes a
  password which was using for the initial installation. I skimmed
 through the
  comments and it seems there are 2 viable options:
 
  Create a separate user just for OSTF during OpenStack installation
  Provide a field for a password in UI so user could provide actual
  password in case it was changed
 
  What do you guys think? Which options is better?
 
  --
  Vitaly Kramskikh,
  Software Engineer,
  Mirantis, Inc.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Mike Scherbakov
  #mihgen
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Andrey Danin
  ada...@mirantis.com
  skype: gcon.monolake
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Andrew
 Mirantis
 Ceph community

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Fuel] [OSTF] OSTF stops working after password is changed

2014-06-25 Thread Dmitriy Shulyak
It is possible to change everything - so username, password, and tenant
fields.

Also, this way we will be able to run tests not only as the admin user.


On Wed, Jun 25, 2014 at 12:29 PM, Vitaly Kramskikh vkramsk...@mirantis.com
wrote:

 Dmitry,

 Fields or field? Do we need to provide password only or other credentials
 are needed?


 2014-06-25 13:02 GMT+04:00 Dmitriy Shulyak dshul...@mirantis.com:

 Looks like we will stick to #2 option, as most reliable one.

 - we have no way to know that openrc is changed, even if some scripts
 relies on it - ostf should not fail with auth error
 - we can create ostf user in post-deployment stage, but i heard that some
 ceilometer tests relied on admin user, also
   operator may not want to create additional user, for some reasons

 So, everybody is ok with additional fields on HealthCheck tab?




 On Fri, Jun 20, 2014 at 8:17 PM, Andrew Woodward xar...@gmail.com
 wrote:

 The openrc file has to be up to date for some of the HA scripts to
 work, we could just source that.

 On Fri, Jun 20, 2014 at 12:12 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  +1 for #2.
 
  ~Sergii
 
 
  On Fri, Jun 20, 2014 at 1:21 AM, Andrey Danin ada...@mirantis.com
 wrote:
 
  +1 to Mike. Let the user provide actual credentials and use them in
 place.
 
 
  On Fri, Jun 20, 2014 at 2:01 AM, Mike Scherbakov
  mscherba...@mirantis.com wrote:
 
  I'm in favor of #2. I think users might not want to have their
 password
  stored in Fuel Master node.
  And if so, then it actually means we should not save it when user
  provides it on HealthCheck tab.
 
 
  On Thu, Jun 19, 2014 at 8:05 PM, Vitaly Kramskikh
  vkramsk...@mirantis.com wrote:
 
  Hi folks,
 
  We have a bug which prevents OSTF from working if user changes a
  password which was using for the initial installation. I skimmed
 through the
  comments and it seems there are 2 viable options:
 
  Create a separate user just for OSTF during OpenStack installation
  Provide a field for a password in UI so user could provide actual
  password in case it was changed
 
  What do you guys think? Which options is better?
 
  --
  Vitaly Kramskikh,
  Software Engineer,
  Mirantis, Inc.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Mike Scherbakov
  #mihgen
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Andrey Danin
  ada...@mirantis.com
  skype: gcon.monolake
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Andrew
 Mirantis
 Ceph community

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Vitaly Kramskikh,
 Software Engineer,
 Mirantis, Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Fuel] Support for plugins in fuel client

2014-06-24 Thread Dmitriy Shulyak
As I mentioned, cliff uses a similar approach - extending the app by means
of entry points - and is written by the same author.
So I think stevedore will be used in cliff, or maybe it is already used in
newer versions.
But apart from stevedore-like dynamic extensions, cliff provides modular
layers for a CLI app; it is kind of a framework for writing
CLI applications.


On Tue, Jun 24, 2014 at 11:15 PM, Andrey Danin ada...@mirantis.com wrote:

 Why not to use stevedore?


 On Wed, Jun 18, 2014 at 1:42 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:

 Hi guys,

 Actually, I'm not a fun of cliff, but I think it's a good solution to use
 it in our fuel client.

 Here some pros:

 * pluggable design: we can encapsulate entire command logic in separate
 plugin file
 * builtin output formatters: we no need to implement various formatters
 to represent received data
 * interactive mode: cliff makes possible to provide a shell mode, just
 like psql do

 Well, I vote to use cliff inside fuel client. Yeah, I know, we need to
 rewrite a lot of code, but we
 can do it step-by-step.

 - Igor




 On Wed, Jun 18, 2014 at 9:14 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi folks,

 I am wondering what our story/vision for plugins in fuel client [1]?

 We can benefit from using cliff [2] as framework for fuel cli, apart
 from common code
 for building cli applications on top of argparse, it provides nice
 feature that allows to
 dynamicly add actions by means of entry points (stevedore-like).

 So we will be able to add new actions for fuel client simply by
 installing separate packages with correct entry points.

 Afaik stevedore is not used there, but i think it will be - cause of
 same author and maintainer.

 Do we need this? Maybe there is other options?

 Thanks

 [1] https://github.com/stackforge/fuel-web/tree/master/fuelclient
 [2]  https://github.com/openstack/cliff

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Andrey Danin
 ada...@mirantis.com
 skype: gcon.monolake

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [Fuel] Support for plugins in fuel client

2014-06-18 Thread Dmitriy Shulyak
Hi folks,

I am wondering what our story/vision is for plugins in the fuel client [1].

We can benefit from using cliff [2] as a framework for the fuel CLI; apart
from common code
for building CLI applications on top of argparse, it provides a nice feature
that allows
actions to be added dynamically by means of entry points (stevedore-like).

So we will be able to add new actions to the fuel client simply by installing
separate packages with the correct entry points.

Afaik stevedore is not used there, but I think it will be - because of the
same author and maintainer.

Do we need this? Maybe there are other options?

Thanks

[1] https://github.com/stackforge/fuel-web/tree/master/fuelclient
[2]  https://github.com/openstack/cliff
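The entry-point mechanism cliff and stevedore rely on can be sketched with
the stdlib alone. This is a hedged illustration, not the actual fuel client
layout: the `fuelclient` entry-point namespace and the command name below
are hypothetical. A plugin package would declare its commands under a group
in its packaging metadata, and the CLI would enumerate that group at
startup.

```python
# A separate plugin package would declare, in its setup.py/setup.cfg
# (namespace and command name are hypothetical):
#
#   entry_points={
#       'fuelclient': [
#           'node-list = fuelclient_ext.commands:NodeList',
#       ],
#   }
#
# At runtime the CLI can enumerate whatever is installed under a
# namespace using only the standard library:
from importlib.metadata import entry_points


def discover_commands(namespace):
    """Return {entry point name: "module:attr" reference} for a namespace."""
    eps = entry_points()
    # The entry_points() API changed across Python versions:
    # 3.10+ exposes .select(); earlier versions return a dict.
    if hasattr(eps, "select"):
        group = eps.select(group=namespace)
    else:
        group = eps.get(namespace, [])
    return {ep.name: ep.value for ep in group}


# 'console_scripts' is a standard namespace, so most environments
# have something installed under it
commands = discover_commands("console_scripts")
```

Afaik cliff's CommandManager performs essentially this lookup for its
command namespace, which is why installing a package with the right entry
points is enough to add new actions.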


Re: [openstack-dev] [Fuel] Using saltstack as orchestrator for fuel

2014-06-11 Thread Dmitriy Shulyak
Yes, in my opinion Salt can completely replace astute/mcollective/rabbitmq.
Listening and responding to the events generated by Nailgun, or any other
plugin, is not a problem.
There is already a module for Salt that adds the ability to execute
puppet on minions (agents) [1]

[1]
http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.puppet.html
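For illustration only (not a tested Fuel integration): with the puppet
execution module above available on the minions, a Puppet run can be
triggered ad hoc with `salt '*' puppet.run`, and the same call can be
wrapped in a state file via `module.run` — a sketch using the state syntax
of Salt releases current at the time of writing:

```
# hypothetical state file, e.g. /srv/salt/puppet_apply.sls:
# trigger a puppet run on every targeted minion
run_puppet:
  module.run:
    - name: puppet.run
```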


On Tue, Jun 10, 2014 at 4:06 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Interesting stuff.
 Do you think that we can get rid of Astute at some point being purely
 replaced by Salt?
 And listening for the commands from Fuel?

 Can you please clarify, does the suggested approach implies that we can
 have both puppet  SaltStack? Even if you ever switch to anything
 different, it is important to provide a smooth and step-by-step way for it.



 On Mon, Jun 9, 2014 at 6:05 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi folks,

 I know that sometime ago saltstack was evaluated to be used as
 orchestrator in fuel, so I've prepared some initial specification, that
 addresses basic points of integration, and general requirements for
 orchestrator.

 In my opinion saltstack perfectly fits our needs, and we can benefit from
 using mature orchestrator, that has its own community. I still dont have
 all the answers, but , anyway, i would like to ask all of you to start a
 review for specification


 https://docs.google.com/document/d/1uOHgxM9ZT_2IdcmWvgpEfCMoV8o0Fk7BoAlsGHEoIfs/edit?usp=sharing

 I will place it in fuel-docs repo as soon as specification will be full
 enough to start POC, or if you think that spec should placed there as is, i
 can do it now

 Thank you





 --
 Mike Scherbakov
 #mihgen






Re: [openstack-dev] [Fuel] Using saltstack as orchestrator for fuel

2014-06-11 Thread Dmitriy Shulyak
Actually, I am proposing Salt as an alternative. The main reason is that
Salt is a mature, feature-full orchestration solution that is well adopted
even by our internal teams.


On Wed, Jun 11, 2014 at 12:37 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 As far as I remember we wanted to replace Astute with Mistral [1], do we
 really want to have some intermediate steps (I mean salt) to do it?

 [1] https://wiki.openstack.org/wiki/Mistral


 On Wed, Jun 11, 2014 at 10:38 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Yes, in my opinion salt can completely replace
 astute/mcollective/rabbitmq.
 Listen and respond to the events generated by nailgun, or any other
 plugin - not a problem.
 There is already some kind of plugin for salt that adds ability to
 execute puppet on minions (agents) [1]

 [1]
 http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.puppet.html


 On Tue, Jun 10, 2014 at 4:06 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Interesting stuff.
 Do you think that we can get rid of Astute at some point being purely
 replaced by Salt?
 And listening for the commands from Fuel?

 Can you please clarify, does the suggested approach implies that we can
 have both puppet  SaltStack? Even if you ever switch to anything
 different, it is important to provide a smooth and step-by-step way for it.



 On Mon, Jun 9, 2014 at 6:05 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi folks,

 I know that sometime ago saltstack was evaluated to be used as
 orchestrator in fuel, so I've prepared some initial specification, that
 addresses basic points of integration, and general requirements for
 orchestrator.

 In my opinion saltstack perfectly fits our needs, and we can benefit
 from using mature orchestrator, that has its own community. I still dont
 have all the answers, but , anyway, i would like to ask all of you to start
 a review for specification


 https://docs.google.com/document/d/1uOHgxM9ZT_2IdcmWvgpEfCMoV8o0Fk7BoAlsGHEoIfs/edit?usp=sharing

 I will place it in fuel-docs repo as soon as specification will be full
 enough to start POC, or if you think that spec should placed there as is, i
 can do it now

 Thank you





 --
 Mike Scherbakov
 #mihgen












Re: [openstack-dev] [Fuel] Using saltstack as orchestrator for fuel

2014-06-11 Thread Dmitriy Shulyak
Well, I don't have any comparison chart; I can work on one based on the
requirements I've provided in the initial letter, but:
I like Ansible, but it is agentless, and it won't fit well in our current
model of communication between Nailgun and the orchestrator.
Cloudify is a Java-based application; even if it is pluggable with other
language bindings, we will benefit from an application in Python.
Salt has been around for 3-4 years, and simply comparing the GitHub graphs,
it is one of the most used and active projects in the Python community:

https://github.com/stackforge/mistral/graphs/contributors
https://github.com/saltstack/salt/graphs/contributors


On Wed, Jun 11, 2014 at 1:04 PM, Sergii Golovatiuk sgolovat...@mirantis.com
 wrote:

 Hi,

 There are many mature orchestration applications (Salt, Ansible, Cloudify,
 Mistral). Is there any comparison chart? That would be nice to compare them
 to understand the maturity level. Thanks

 ~Sergii


 On Wed, Jun 11, 2014 at 12:48 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Actually i am proposing salt as alternative, the main reason - salt is
 mature, feature full orchestration solution, that is well adopted even by
 our internal teams


 On Wed, Jun 11, 2014 at 12:37 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 As far as I remember we wanted to replace Astute with Mistral [1], do we
 really want to have some intermediate steps (I mean salt) to do it?

 [1] https://wiki.openstack.org/wiki/Mistral


 On Wed, Jun 11, 2014 at 10:38 AM, Dmitriy Shulyak dshul...@mirantis.com
  wrote:

 Yes, in my opinion salt can completely replace
 astute/mcollective/rabbitmq.
 Listen and respond to the events generated by nailgun, or any other
 plugin - not a problem.
 There is already some kind of plugin for salt that adds ability to
 execute puppet on minions (agents) [1]

 [1]
 http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.puppet.html


 On Tue, Jun 10, 2014 at 4:06 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Interesting stuff.
 Do you think that we can get rid of Astute at some point being purely
 replaced by Salt?
 And listening for the commands from Fuel?

 Can you please clarify, does the suggested approach implies that we
 can have both puppet  SaltStack? Even if you ever switch to anything
 different, it is important to provide a smooth and step-by-step way for 
 it.



 On Mon, Jun 9, 2014 at 6:05 AM, Dmitriy Shulyak dshul...@mirantis.com
  wrote:

 Hi folks,

 I know that sometime ago saltstack was evaluated to be used as
 orchestrator in fuel, so I've prepared some initial specification, that
 addresses basic points of integration, and general requirements for
 orchestrator.

 In my opinion saltstack perfectly fits our needs, and we can benefit
 from using mature orchestrator, that has its own community. I still dont
 have all the answers, but , anyway, i would like to ask all of you to 
 start
 a review for specification


 https://docs.google.com/document/d/1uOHgxM9ZT_2IdcmWvgpEfCMoV8o0Fk7BoAlsGHEoIfs/edit?usp=sharing

 I will place it in fuel-docs repo as soon as specification will be
 full enough to start POC, or if you think that spec should placed there 
 as
 is, i can do it now

 Thank you





 --
 Mike Scherbakov
 #mihgen


















[openstack-dev] [Fuel] Using saltstack as orchestrator for fuel

2014-06-09 Thread Dmitriy Shulyak
Hi folks,

I know that some time ago SaltStack was evaluated to be used as the
orchestrator in Fuel, so I've prepared some initial specification that
addresses basic points of integration and general requirements for the
orchestrator.

In my opinion SaltStack fits our needs perfectly, and we can benefit from
using a mature orchestrator that has its own community. I still don't have
all the answers, but, anyway, I would like to ask all of you to start a
review of the specification:

https://docs.google.com/document/d/1uOHgxM9ZT_2IdcmWvgpEfCMoV8o0Fk7BoAlsGHEoIfs/edit?usp=sharing

I will place it in the fuel-docs repo as soon as the specification is full
enough to start a POC, or if you think that the spec should be placed there
as is, I can do it now.

Thank you


Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-22 Thread Dmitriy Shulyak
Created spec https://review.openstack.org/#/c/94907/

I think it is still WIP, but it would be nice to hear some comments/opinions
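On the SSL-termination point in Robert's reply quoted below: since HAProxy
1.5 a frontend can terminate TLS itself. A hedged config sketch — the VIP,
port, and certificate path here are illustrative, not taken from this
thread:

```
# haproxy.cfg fragment (HAProxy >= 1.5): terminate TLS at the proxy and
# forward plain HTTP to the service bound to localhost
frontend keystone_public
    bind 192.0.2.22:5000 ssl crt /etc/haproxy/keystone.pem
    default_backend keystone

backend keystone
    server keystone0 127.0.0.1:5000 check
```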


On Thu, May 22, 2014 at 1:59 AM, Robert Collins
robe...@robertcollins.netwrote:

 On 18 May 2014 08:17, Miller, Mark M (EB SW Cloud - RD - Corvallis)
 mark.m.mil...@hp.com wrote:
  We are considering the following connection chain:
 
 - HAProxy   -   stunnel -OS services bound
 to 127.0.0.1
   Virtual IP server IP
 localhost 127.0.0.1
   secure  SSL terminate unsecure

 Interestingly, and separately, HAProxy can do SSL termination now, so
 we might want to consider just using HAProxy for that.

  In this chain none of the ports need to changed. One of the major issues
 I have come across is the hard coding of the Keystone ports in the
 OpenStack service's configuration files. With the above connection scheme
 none of the ports need to change.

 But we do need to have HAProxy not wildcard bind, as Greg points out,
 and to make OS services bind to 127.0.0.1 as Jan pointed out.

 I suspect we need to put this through the specs process (which ops
 teams are starting to watch) to ensure we get enough input.

 I'd love to see:
  - SSL by default
  - A setup we can document in the ops guide / HA openstack install
 guide - e.g we don't need to be doing it a third different way (or we
 can update the existing docs if what we converge on is better).
  - Only SSL enabled endpoints accessible from outside the machine (so
 python processes bound to localhost as a security feature).

 Eventually we may need to scale traffic beyond one HAProxy, at which
 point we'll need to bring something altogether more sophisticated in -
 lets design that when we need it.
 Sooner than that we're likely going to need to scale load beyond one
 control plane server at which point the HAProxy VIP either needs to be
 distributed (so active-active load receiving) or we need to go
 user - haproxy (VIP) - SSL endpoint (on any control plane node) -
 localhost bound service.

 HTH,
 Rob




Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-16 Thread Dmitriy Shulyak


 HA-Proxy version 1.4.24 2013/06/17 What was the reason this approach
  was dropped?


 IIRC the major reason was that having 2 services on same port (but
 different interface) would be too confusing for anyone who is not aware
 of this fact.


A major part of the documentation for haproxy with a VIP setup is done
with duplicated ports.
From my experience, LB solutions have been built with the load balancer
sitting on VIRTUAL_IP:STANDARD_PORT and/or PUBLIC_VIRTUAL_IP:STANDARD_PORT.

Maybe this is not so big an issue? It would be much easier to start with
such a deployment configuration.

Dmitry


[openstack-dev] [TripleO] Haproxy configuration options

2014-05-12 Thread Dmitriy Shulyak
Adding haproxy (or keepalived with LVS for load balancing) will require
binding haproxy and the OpenStack services to different sockets.
Afaik there are 3 approaches that TripleO could go with.

Consider configuration with 2 controllers:

haproxy:
nodes:
-   name: controller0
ip: 192.0.2.20
-   name: controller1
ip: 192.0.2.21

1. Binding haproxy on virtual ip and standard ports

haproxy:
services:
-   name: horizon
proxy_ip: 192.0.2.22 (virtual ip)
port: 80
proxy_port: 80
-   name: neutron
proxy_ip: 192.0.2.22 (virtual ip)
proxy_port: 9696
port: 9696

Pros:
- No additional modifications in elements are required
HA-Proxy version 1.4.24 2013/06/17
What was the reason this approach was dropped?

2. Haproxy listening on standard ports, services on non-standard

haproxy:
services:
-   name: horizon
proxy_ip: 192.0.2.22 (virtual ip)
port: 8080
proxy_port: 80
-   name: neutron
proxy_ip: 192.0.2.22 (virtual ip)
proxy_port: 9696
port: 9797

Pros:
- No changes will be required to init-keystone part of workflow
- Proxied services will be accessible on accustomed ports
- No changes to configs where services ports need to be hardcoded, for
example in nova.conf https://review.openstack.org/#/c/92550/

Cons:
- Config files should be changed to add possibility of ports configuration

3. haproxy on non-standard ports, with services on standard

haproxy:
services:
-   name: horizon
proxy_ip: 192.0.2.22 (virtual ip)
port: 8080
proxy_port: 80
-   name: neutron
proxy_ip: 192.0.2.22 (virtual ip)
proxy_port: 9797
port: 9696

Notice that I changed only the port for neutron; the main endpoint for
horizon should listen on the default http or https ports.

Basically it is the opposite of approach 2. I would prefer to go with 2,
because it requires only minor refactoring.

Thoughts?
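To make approach 2 concrete, here is a hypothetical rendering helper (not
part of tripleo-image-elements) that expands the mapping above into haproxy
`listen` sections: haproxy binds the standard port on the VIP, and the
backend service on each controller listens on its non-standard port.

```python
# Illustrative sketch of approach 2: haproxy on VIP + standard ports,
# services on non-standard ports. Data mirrors the YAML above.
nodes = [
    {"name": "controller0", "ip": "192.0.2.20"},
    {"name": "controller1", "ip": "192.0.2.21"},
]
services = [
    {"name": "horizon", "proxy_ip": "192.0.2.22", "proxy_port": 80, "port": 8080},
    {"name": "neutron", "proxy_ip": "192.0.2.22", "proxy_port": 9696, "port": 9797},
]


def render_haproxy(nodes, services):
    """Render a 'listen' section per proxied service."""
    out = []
    for svc in services:
        out.append("listen %s" % svc["name"])
        # the standard port is bound on the virtual ip
        out.append("  bind %s:%d" % (svc["proxy_ip"], svc["proxy_port"]))
        # each controller runs the service on the non-standard port
        for node in nodes:
            out.append("  server %s %s:%d check"
                       % (node["name"], node["ip"], svc["port"]))
        out.append("")
    return "\n".join(out)
```

`render_haproxy(nodes, services)` yields, e.g., a horizon section that
binds 192.0.2.22:80 and forwards to 192.0.2.20:8080 and 192.0.2.21:8080.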


[openstack-dev] [fuel-dev][fuel-ostf] Extending networks diagnostic toolkit

2014-04-07 Thread Dmitriy Shulyak
Hi,

There are a number of additional network verifications that can improve
the troubleshooting experience or even cluster performance, like:

1. multicast group verification for corosync messaging
2. network connectivity with jumbo packets
3. l3 connectivity verification
4. some fencing verification
5. allocated ip verification
https://bugs.launchpad.net/fuel/+bug/1275641
6. measure network performance with iperf

Adding this stuff to the fuel-web network tab will significantly worsen
the UX; also, it is not friendly enough to extend the current model with
additional verifications.

The whole approach looks like a networking health check for the deployment,
so in my opinion it should be done as a separate tab similar to the ostf
health check.

fuel-ostf already has the necessary db and rest-api code to support such
extensions, and with some work this can be used as a diagnostic tool not
only for fuel, but in tripleo as well.

In my opinion this feature should be split into two main parts:

PART 1 - a new plugin-executor for ostf, a ui tab in fuel-web, and extending
this plugin with the existing verifications

1. for now ostf has one plugin-executor - this plugin uses nose for
running tests; add a new executor named something like distributed,
astute will still perform the role of orchestrator

2. add a new reporter to astute that will publish messages to the ostf
queue

3. add an ostf amqp receiver

4. extend the current plugin with the verifications listed above

After this part of the refactoring it should be possible to support rapid
extension of distributed cluster diagnostics.

PART 2 - make the integration with fuel pluggable, which means:

1. remove the proxy dependency from ostf; this can be done with the SOCKS
protocol, which provides an http proxy over ssh (it is supported by the
openssh server)

2. make the integration with nailgun pluggable

3. replace astute/mcollective with a custom agent or some community
solution


I will appreciate comments or suggestions, so don't hesitate to share your
thoughts.
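As a sketch of what one of the simpler distributed checks (verification 3,
L3 connectivity) might look like on a node — a hypothetical helper, assuming
nothing about actual fuel-ostf internals:

```python
import socket


def check_tcp_connectivity(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection handles resolution and the connect timeout
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

An executor could fan such a check out across the nodes known to nailgun
(e.g. `check_tcp_connectivity("192.0.2.20", 9696)` — addresses and ports
here are placeholders) and publish the per-node results to the ostf queue,
per step 2 of PART 1.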


Re: [openstack-dev] [Tripleo][Neutron] Tripleo Neutron

2014-04-07 Thread Dmitriy Shulyak
Hi Marios, thanks for raising this.

There is an in-progress blueprint that should address some issues with
neutron HA deployment:
https://blueprints.launchpad.net/neutron/+spec/l3-high-availability

Right now the neutron-dhcp agent can be configured as active/active.

But the l3-agent and metadata-agent should still be active/passive;
afaik the best approach would be to use corosync+pacemaker, which is also
stated in the official documentation:
http://docs.openstack.org/high-availability-guide/content/ch-network.html

What other choices, except corosync+pacemaker, do we have for neutron HA?

Thanks



On Mon, Apr 7, 2014 at 11:18 AM, mar...@redhat.com mandr...@redhat.comwrote:

 Hello Tripleo/Neutron:

 I've recently found some cycles to look into Neutron. Mostly because
 networking rocks, but also so we can perhaps better address Neutron
 related issues/needs down the line. I thought it may be good to ask the
 wider team if there are others that are also interested in
 NeutronTripleo. We could form a loose focus group to discuss blueprints
 and review each other's code/chase up with cores. My search may have
 missed earlier discussions in openstack-dev[Tripleo][Neutron] and
 Tripleo bluprints so my apologies if this has already been started
 somewhere. If any of the above is of interest then:

 *is the following list sane - does it make sense to pick these off or
 are these 'nice to haves' but not of immediate concern? Even just
 validating, prioritizing and recording concerns could be worthwhile for
 example?
 * are you interested in discussing any of the following further and
 perhaps investigating and/or helping with blueprints where/if necessary?

 Right now I have:

 [Undercloud]:

 1. Define a neutron node (tripleo-image-elements/disk-image-builder) and
 make sure it deploys and scales ok (tripleo-heat-templates/tuskar). This
 comes under by lifeless blueprint at

 https://blueprints.launchpad.net/tripleo/+spec/tripleo-tuskar-deployment-scaling-topologies

 2. HA the neutron node. For each neutron services/agents of interest
 (neutron-dhcp-agent, neutron-l3-agent, neutron-lbaas-agent ... ) fix any
 issues with running these in HA - perhaps there are none \o/? Useful
 whether using a dedicated Neutron node or just for HA the
 undercloud-control node

 3. Does it play with Ironic OK? I know there were some issues with
 Ironic and Neutron DHCP, though I think this has now been addressed.
 Other known/unkown bugs/issues with Ironic/Neutron - the baremetal
 driver will be deprecated at some point...

 4. Subnetting. Right now the undercloud uses a single subnet. Does it
 make sense to have multiple subnets here - one point I've heard is for
 segregation of your undercloud nodes (i.e. 1 broadcast domain).

 5. Security. Are we at least using Neutron as we should be in the
 Undercloud, security-groups, firewall rules etc?

 [Overcloud]:

 1. Configuration. In the overcloud it's just Neutron. So one concern
 is which and how to expose neutron configuration options via Tuskar-UI.
 We would pass these through the deployment heat-template for definition
 of Neutron plugin-specific .conf files (like dnsmasq-neutron.conf) for
 example or initial definition of tenant subnets and router(s) for access
 to external networks.

 2. 3. ???


 thanks! marios

