Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-02-02 Thread Dmitriy Shulyak
  But why add another interface when there is one already (the REST API)?

 I'm OK if we decide to use the REST API, but of course there are problems we
 should solve, like versioning, which is much harder to support than
 versioning in core serializers. Also, do you have any ideas how it could be
 implemented?


We need to think about deployment serializers not as part of Nailgun (the Fuel
data inventory), but as part of another layer which uses the Nailgun API to
generate deployment information. Let's take Ansible for example, and its
dynamic inventory feature [1].
The Nailgun API can be used inside an Ansible dynamic inventory script to
generate the config that will be consumed by Ansible during deployment.

Such an approach would have several benefits:
- a cleaner interface (the ability to use Ansible as the main interface to
control deployment and all its features)
- deployment configuration tightly coupled with deployment code
- no limitation on what sources to use for configuration, or on how to
compute additional values from the requested data

I want to emphasize that I am not considering Ansible as a solution for Fuel;
it serves only as an example of the architecture.
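As an illustration of the idea, an Ansible dynamic inventory is just an executable that prints JSON. A minimal sketch of such a script follows; the node data and field names are invented for illustration, and a real script would fetch them from the Nailgun REST API instead of the stub below:

```python
#!/usr/bin/env python
# Sketch of an Ansible dynamic inventory backed by a REST API.
# The node fields below are hypothetical, not the real Nailgun API;
# a real script would replace fetch_nodes() with an HTTP call to the
# master node, e.g. json.load(urlopen(NAILGUN_URL + '/api/nodes')).
import json


def fetch_nodes():
    # Placeholder for the HTTP call to the inventory service.
    return [
        {'fqdn': 'node-1.domain.tld', 'roles': ['controller'], 'ip': '10.20.0.3'},
        {'fqdn': 'node-2.domain.tld', 'roles': ['compute'], 'ip': '10.20.0.4'},
    ]


def build_inventory(nodes):
    # Group hosts by role and attach per-host variables -- the JSON shape
    # `ansible -i <script>` expects from a dynamic inventory.
    inventory = {'_meta': {'hostvars': {}}}
    for node in nodes:
        inventory['_meta']['hostvars'][node['fqdn']] = {'ansible_host': node['ip']}
        for role in node['roles']:
            inventory.setdefault(role, {'hosts': []})['hosts'].append(node['fqdn'])
    return inventory


if __name__ == '__main__':
    print(json.dumps(build_inventory(fetch_nodes()), indent=2))
```

The point is architectural: the serialization layer lives next to the deployment tool and consumes Nailgun only through its API.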


 Do you run some code which gets the information from the API on the master
 node and then sets the information in tasks? Or are you going to run this
 code on the OpenStack nodes? As you mentioned in the case of tokens, you
 should get the token right before you really need it because of the
 expiration problem, but in this case you don't need any serializers; get
 the required token right in the task.


I think all information should be fetched before deployment.



  What is your opinion about serializing additional information in plugin
 code? How can it be done without exposing the DB schema?

 By exposing the data in a more abstract way, the way it's done right now
 for the current deployment logic.


I mean, what if a plugin wants to generate additional data, like
https://review.openstack.org/#/c/150782/? Will the schema still be exposed?

[1] http://docs.ansible.com/intro_dynamic_inventory.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-02-02 Thread Evgeniy L
Hi Dmitry,

I've read about inventories, and I'm not sure it's what we really need:
an inventory provides a kind of node-discovery mechanism, but what we need
is to get some abstract data and convert it into a more task-friendly format.

In another thread I mentioned Variables [1] in Ansible; that probably fits
better than inventory from an architectural point of view.

With this functionality a plugin would be able to get the required
information from Nailgun via the REST API and pass it into a specific task.

But it's not the way to go for core deployment. I would like to remind you
what we had two years ago: Nailgun passed information in format A to the
orchestrator (Astute), and then the orchestrator converted it into a second
format B. It was horrible from a debugging point of view; it's always hard
when you have to look in several places to figure out what you get as a
result. Your design suggestion is quite similar: it divides serialization
logic between Nailgun and another layer in task scripts.

Thanks,

[1] http://docs.ansible.com/playbooks_variables.html#registered-variables

On Mon, Feb 2, 2015 at 5:05 PM, Dmitriy Shulyak dshul...@mirantis.com
wrote:


  But why add another interface when there is one already (the REST API)?

 I'm OK if we decide to use the REST API, but of course there are problems
 we should solve, like versioning, which is much harder to support than
 versioning in core serializers. Also, do you have any ideas how it could be
 implemented?


 We need to think about deployment serializers not as part of Nailgun (the
 Fuel data inventory), but as part of another layer which uses the Nailgun
 API to generate deployment information. Let's take Ansible for example,
 and its dynamic inventory feature [1].
 The Nailgun API can be used inside an Ansible dynamic inventory script to
 generate the config that will be consumed by Ansible during deployment.

 Such an approach would have several benefits:
 - a cleaner interface (the ability to use Ansible as the main interface to
 control deployment and all its features)
 - deployment configuration tightly coupled with deployment code
 - no limitation on what sources to use for configuration, or on how to
 compute additional values from the requested data

 I want to emphasize that I am not considering Ansible as a solution for
 Fuel; it serves only as an example of the architecture.


 Do you run some code which gets the information from the API on the master
 node and then sets the information in tasks? Or are you going to run this
 code on the OpenStack nodes? As you mentioned in the case of tokens, you
 should get the token right before you really need it because of the
 expiration problem, but in this case you don't need any serializers; get
 the required token right in the task.


 I think all information should be fetched before deployment.



  What is your opinion about serializing additional information in plugin
 code? How can it be done without exposing the DB schema?

 By exposing the data in a more abstract way, the way it's done right now
 for the current deployment logic.


 I mean, what if a plugin wants to generate additional data, like
 https://review.openstack.org/#/c/150782/? Will the schema still be exposed?

 [1] http://docs.ansible.com/intro_dynamic_inventory.html






Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-29 Thread Evgeniy L
Dmitry,

 But why add another interface when there is one already (the REST API)?

I'm OK if we decide to use the REST API, but of course there are problems we
should solve, like versioning, which is much harder to support than
versioning in core serializers. Also, do you have any ideas how it could be
implemented? Do you run some code which gets the information from the API on
the master node and then sets the information in tasks? Or are you going to
run this code on the OpenStack nodes? As you mentioned in the case of
tokens, you should get the token right before you really need it because of
the expiration problem, but in this case you don't need any serializers; get
the required token right in the task.

 What is your opinion about serializing additional information in plugin
code? How can it be done without exposing the DB schema?

By exposing the data in a more abstract way, the way it's done right now
for the current deployment logic.

Thanks,

On Thu, Jan 29, 2015 at 12:06 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:


 1. as I mentioned above, we should have an interface, and if the interface
 doesn't provide the required information, you will have to fix it in two
 places, in Nailgun and in the external serializers, instead of a single
 place, i.e. in Nailgun; another thing is whether astute.yaml is a bad
 interface and we should provide another versioned interface, or add more
 data into the deployment serializer.

 But why add another interface when there is one already (the REST API)? A
 plugin developer may query whatever he wants (detailed information about
 volumes, interfaces, master node settings). It is the most complete source
 of information in Fuel, and it already needs to be protected from
 incompatible changes.

 If our API turns out not to be enough for general use, of course we will
 need to fix it, but I don't quite understand what you mean by "fix it in
 two places". The API provides general information that can be consumed by
 serializers (or any other service, or a human, actually), and if there are
 issues with that information, the API should be fixed. A serializer expects
 that information in a specific format and makes additional transformations
 or computations based on it.

 What is your opinion about serializing additional information in plugin
 code? How can it be done without exposing the DB schema?

 2. it can be handled in Python or any other code (which can be wrapped
 into tasks); why should we implement another entity here (a.k.a. external
 serializers)?

 Yep, I guess this is true. I thought that we might not want to deliver
 credentials to the target nodes, only a token that can be used for a
 limited time, but...





Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-29 Thread Vladimir Kuklin
Guys

I would just like to point out that we will certainly need to consume
resources from 3rd-party sources as well. We also want to remove any specific
data manipulation from the Puppet code. Please consider these use cases too.

On Thu, Jan 29, 2015 at 12:06 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:


 1. as I mentioned above, we should have an interface, and if the interface
 doesn't provide the required information, you will have to fix it in two
 places, in Nailgun and in the external serializers, instead of a single
 place, i.e. in Nailgun; another thing is whether astute.yaml is a bad
 interface and we should provide another versioned interface, or add more
 data into the deployment serializer.

 But why add another interface when there is one already (the REST API)? A
 plugin developer may query whatever he wants (detailed information about
 volumes, interfaces, master node settings). It is the most complete source
 of information in Fuel, and it already needs to be protected from
 incompatible changes.

 If our API turns out not to be enough for general use, of course we will
 need to fix it, but I don't quite understand what you mean by "fix it in
 two places". The API provides general information that can be consumed by
 serializers (or any other service, or a human, actually), and if there are
 issues with that information, the API should be fixed. A serializer expects
 that information in a specific format and makes additional transformations
 or computations based on it.

 What is your opinion about serializing additional information in plugin
 code? How can it be done without exposing the DB schema?

 2. it can be handled in Python or any other code (which can be wrapped
 into tasks); why should we implement another entity here (a.k.a. external
 serializers)?

 Yep, I guess this is true. I thought that we might not want to deliver
 credentials to the target nodes, only a token that can be used for a
 limited time, but...

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-28 Thread Dmitriy Shulyak


 It's not clear what problem you are going to solve with putting serializers
 alongside with deployment scripts/tasks.

I see two possible uses for task-specific serializers:
1. Computing additional information for deployment based not only on what is
present in astute.yaml
2. Requesting information from external sources based on values stored in
the Fuel inventory (like some token based on credentials)

For sure there is no way for these serializers to have access to the
 database, because with each release there would be a high probability of
 these serializers breaking, for example because of changes in the database
 schema. As Dmitry mentioned, the solution in this case is to create another
 layer which provides a stable external interface to the data.
 We already have this interface, where we support versioning and backward
 compatibility; in terms of deployment scripts it's the astute.yaml file.

That is the problem: it is impossible to cover everything with astute.yaml.
We need to think of a way to present all data available in Nailgun as
deployment configuration.


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-28 Thread Evgeniy L
Hi Vladimir,

It's not clear what problem you are going to solve by putting serializers
alongside the deployment scripts/tasks.
For sure there is no way for these serializers to have access to the
database, because with each release there would be a high probability of
these serializers breaking, for example because of changes in the database
schema. As Dmitry mentioned, the solution in this case is to create another
layer which provides a stable external interface to the data.
We already have this interface, where we support versioning and backward
compatibility; in terms of deployment scripts it's the astute.yaml file.
So we could add Python code which takes this hash/dict and retrieves all of
the data required for a specific task, but it means that if you want to pass
some new data, you have to fix the code in two places: in Nailgun and in the
task-specific serializers. That looks like added complexity and
over-engineering.
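To make the "two places" concern concrete, here is a sketch of the kind of task-specific serializer being discussed: Python code that takes the already-parsed astute.yaml hash and extracts only what one task needs. All key names here are hypothetical, invented for illustration:

```python
# Sketch of the pattern under discussion: a per-task serializer that
# takes the deserialized astute.yaml hash and extracts only the data
# its task needs. All key names are hypothetical.


def serialize_fencing_task(astute_data):
    # If the task ever needs a new value, it must first be added to the
    # Nailgun serializer that produces astute_data, and then picked up
    # here as well -- the "fix it in two places" problem.
    nodes = astute_data.get('nodes', [])
    return {
        'fence_nodes': [n['fqdn'] for n in nodes
                        if 'controller' in n.get('roles', [])],
        'fence_driver': astute_data.get('fencing', {}).get('driver', 'ipmi'),
    }


astute_data = {
    'nodes': [
        {'fqdn': 'node-1', 'roles': ['controller']},
        {'fqdn': 'node-2', 'roles': ['compute']},
    ],
    'fencing': {'driver': 'ipmi'},
}
print(serialize_fencing_task(astute_data))
```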

Thanks,

On Tue, Jan 27, 2015 at 11:47 AM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Dmitry

 This is an interesting topic. As per our discussions earlier, I suggest
 that in the future we move to different serializers for each granule of
 our deployment, so that we do not need to drag a lot of senseless data
 into the particular task being executed. Say we have a fencing task, which
 has a serializer module written in Python. This module is imported by
 Nailgun, and what it actually does is execute specific Nailgun core
 methods that access the database or other sources of information and
 retrieve data in the way this task wants it, instead of adjusting the task
 to the one-size-fits-all 'astute.yaml'.

 On Thu, Jan 22, 2015 at 8:59 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Dmitry,

 The problem with merging is that it's usually not clear how the system
 performs the merge.
 For example, you have the hash {'list': [{'k': 1}, {'k': 2}, {'k': 3}]},
 and I want {'list': [{'k': 4}]} to be merged. What should the system do?
 Replace the list or append {'k': 4}? Both cases should be covered.

 Most users don't remember all of the keys; usually a user gets the
 defaults and changes some values in place. In that case we would have to
 ask the user to remove the rest of the fields.

 The only solution which I see is to separate the data from the graph and
 not send this information to the user.

 Thanks,

 On Thu, Jan 22, 2015 at 5:18 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi guys,

 I want to discuss the way we are working with deployment configuration
 that was redefined for a cluster.

 In case it was redefined via the API, we use that information instead of
 the generated one, with one exception: we will generate new repo sources
 and the path to the manifest if we are using update (the patching feature
 in 6.0).

 Starting from 6.1 this configuration will be populated by tasks, which are
 part of the granular deployment workflow, and replacement of the
 configuration will make it impossible to use the partial graph execution
 API. Of course it is possible to hack around it and make it work, but IMO
 we need a generic solution.

 Next problem: if a user uploads replaced information, changes to cluster
 attributes or networks won't be reflected in the deployment anymore, and
 this constantly leads to problems for deployment engineers that are using
 Fuel.

 What if the user wants to add data, and use the generated networks,
 attributes, etc.?
 - it may be required as part of a manual plugin installation (ha_fencing
 requires a lot of configuration to be added into astute.yaml),
 - or you may need to substitute networking data, e.g. add specific
 parameters for Linux bridges

 So given all this, I think that we should not substitute all the
 information, but only the part that is present in the redefined info; if
 there are additional parameters, they will simply be merged into the
 generated info.













Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-28 Thread Dmitriy Shulyak
 1. as I mentioned above, we should have an interface, and if the interface
 doesn't provide the required information, you will have to fix it in two
 places, in Nailgun and in the external serializers, instead of a single
 place, i.e. in Nailgun; another thing is whether astute.yaml is a bad
 interface and we should provide another versioned interface, or add more
 data into the deployment serializer.

But why add another interface when there is one already (the REST API)? A
plugin developer may query whatever he wants (detailed information about
volumes, interfaces, master node settings). It is the most complete source of
information in Fuel, and it already needs to be protected from incompatible
changes.

If our API turns out not to be enough for general use, of course we will need
to fix it, but I don't quite understand what you mean by "fix it in two
places". The API provides general information that can be consumed by
serializers (or any other service, or a human, actually), and if there are
issues with that information, the API should be fixed. A serializer expects
that information in a specific format and makes additional transformations or
computations based on it.

What is your opinion about serializing additional information in plugin
code? How can it be done without exposing the DB schema?

 2. it can be handled in Python or any other code (which can be wrapped
 into tasks); why should we implement another entity here (a.k.a. external
 serializers)?

Yep, I guess this is true. I thought that we might not want to deliver
credentials to the target nodes, only a token that can be used for a limited
time, but...


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-28 Thread Evgeniy L
Hi Dmitry!

1. As I mentioned above, we should have an interface, and if the interface
doesn't provide the required information, you will have to fix it in two
places, in Nailgun and in the external serializers, instead of a single
place, i.e. in Nailgun; another thing is whether astute.yaml is a bad
interface and we should provide another versioned interface, or add more
data into the deployment serializer.
2. It can be handled in Python or any other code (which can be wrapped into
tasks); why should we implement another entity here (a.k.a. external
serializers)?

Thanks,

On Wed, Jan 28, 2015 at 2:45 PM, Dmitriy Shulyak dshul...@mirantis.com
wrote:


 It's not clear what problem you are going to solve by putting serializers
 alongside the deployment scripts/tasks.

 I see two possible uses for task-specific serializers:
 1. Computing additional information for deployment based not only on what
 is present in astute.yaml
 2. Requesting information from external sources based on values stored in
 the Fuel inventory (like some token based on credentials)

 For sure there is no way for these serializers to have access to the
 database, because with each release there would be a high probability of
 these serializers breaking, for example because of changes in the database
 schema. As Dmitry mentioned, the solution in this case is to create
 another layer which provides a stable external interface to the data.
 We already have this interface, where we support versioning and backward
 compatibility; in terms of deployment scripts it's the astute.yaml file.

 That is the problem: it is impossible to cover everything with
 astute.yaml. We need to think of a way to present all data available in
 Nailgun as deployment configuration.








Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-27 Thread Vladimir Kuklin
Dmitry

This is an interesting topic. As per our discussions earlier, I suggest
that in the future we move to different serializers for each granule of our
deployment, so that we do not need to drag a lot of senseless data into the
particular task being executed. Say we have a fencing task, which has a
serializer module written in Python. This module is imported by Nailgun, and
what it actually does is execute specific Nailgun core methods that access
the database or other sources of information and retrieve data in the way
this task wants it, instead of adjusting the task to the one-size-fits-all
'astute.yaml'.
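A sketch of what such a per-task serializer module could look like; every name here is hypothetical, since no such interface exists in Nailgun yet, and the data-access layer is stubbed out:

```python
# Hypothetical shape of a per-task serializer imported by Nailgun.
# Nailgun would discover such modules and call serialize() with an
# accessor to its data layer, instead of dragging the whole
# astute.yaml into every task.


class FencingSerializer(object):
    """Produces only the data the fencing task needs."""

    task_name = 'fencing'

    def serialize(self, inventory):
        # `inventory` stands in for Nailgun core methods that read the
        # database or other sources of information.
        return {
            'controllers': inventory.get_nodes(role='controller'),
            'power_credentials': inventory.get_setting('power_credentials'),
        }


class StubInventory(object):
    # Minimal stand-in for the Nailgun data-access layer.
    def get_nodes(self, role):
        return ['node-1'] if role == 'controller' else []

    def get_setting(self, name):
        return {'user': 'admin'} if name == 'power_credentials' else None


print(FencingSerializer().serialize(StubInventory()))
```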

On Thu, Jan 22, 2015 at 8:59 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Dmitry,

 The problem with merging is that it's usually not clear how the system
 performs the merge.
 For example, you have the hash {'list': [{'k': 1}, {'k': 2}, {'k': 3}]},
 and I want {'list': [{'k': 4}]} to be merged. What should the system do?
 Replace the list or append {'k': 4}? Both cases should be covered.

 Most users don't remember all of the keys; usually a user gets the
 defaults and changes some values in place. In that case we would have to
 ask the user to remove the rest of the fields.

 The only solution which I see is to separate the data from the graph and
 not send this information to the user.

 Thanks,

 On Thu, Jan 22, 2015 at 5:18 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi guys,

 I want to discuss the way we are working with deployment configuration
 that was redefined for a cluster.

 In case it was redefined via the API, we use that information instead of
 the generated one, with one exception: we will generate new repo sources
 and the path to the manifest if we are using update (the patching feature
 in 6.0).

 Starting from 6.1 this configuration will be populated by tasks, which are
 part of the granular deployment workflow, and replacement of the
 configuration will make it impossible to use the partial graph execution
 API. Of course it is possible to hack around it and make it work, but IMO
 we need a generic solution.

 Next problem: if a user uploads replaced information, changes to cluster
 attributes or networks won't be reflected in the deployment anymore, and
 this constantly leads to problems for deployment engineers that are using
 Fuel.

 What if the user wants to add data, and use the generated networks,
 attributes, etc.?
 - it may be required as part of a manual plugin installation (ha_fencing
 requires a lot of configuration to be added into astute.yaml),
 - or you may need to substitute networking data, e.g. add specific
 parameters for Linux bridges

 So given all this, I think that we should not substitute all the
 information, but only the part that is present in the redefined info; if
 there are additional parameters, they will simply be merged into the
 generated info.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-27 Thread Dmitriy Shulyak
On Thu, Jan 22, 2015 at 7:59 PM, Evgeniy L e...@mirantis.com wrote:

 The problem with merging is usually it's not clear how system performs
 merging.
 For example you have the next hash {'list': [{'k': 1}, {'k': 2}, {'k':
 3}]}, and I want
 {'list': [{'k': 4}]} to be merged, what system should do? Replace the list
 or add {'k': 4}?
 Both cases should be covered.

 What if we replace based on the root-level keys? That feels like enough to me.

 Most users don't remember all of the keys; usually a user gets the
 defaults and changes some values in place. In that case we would have to
 ask the user to remove the rest of the fields.

And we are not going to force them to delete anything: if all the
information is present, then it is what the user actually wants.
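Root-level replacement has simple, predictable semantics: any top-level key present in the redefined info replaces the generated value wholesale, and every other key stays generated. A minimal sketch, reusing the hash from the earlier example (the 'networks' key is invented for illustration):

```python
# Sketch of root-level merge: user-supplied top-level keys replace the
# generated ones entirely; no deep merging inside lists or nested dicts.


def merge_root_level(generated, redefined):
    merged = dict(generated)
    merged.update(redefined)  # each redefined root key wins wholesale
    return merged


generated = {
    'list': [{'k': 1}, {'k': 2}, {'k': 3}],
    'networks': {'mgmt': '10.20.0.0/24'},  # hypothetical generated key
}
redefined = {'list': [{'k': 4}]}
print(merge_root_level(generated, redefined))
# 'list' is replaced entirely; 'networks' is kept from the generated data
```

This sidesteps the ambiguity of deep merging at the cost of forcing the user to restate a whole top-level section to change any part of it.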

 The only solution which I see is to separate the data from the graph and
 not send this information to the user.

Probably I will follow the same approach that is used for repo generation,
mainly because it is quite useful for debugging - to see how tasks are
generated - but it doesn't solve two additional points:
1. Some data in Nailgun constantly becomes invalid just because we are
asking the user to overwrite everything (the most common case is allocated
IP addresses).
2. What if you only need to add some data, like in the fencing plugin? It
will mean that such a cluster is not going to be supportable; what if we
want to upgrade that cluster and a new serializer should be used? I think
there is even a warning about this on the UI.


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-27 Thread Dmitriy Shulyak
On Tue, Jan 27, 2015 at 10:47 AM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 This is an interesting topic. As per our discussions earlier, I suggest
 that in the future we move to different serializers for each granule of our
 deployment, so that we do not need to drag a lot of senseless data into
 particular task being executed. Say, we have a fencing task, which has a
 serializer module written in python. This module is imported by Nailgun and
 what it actually does, it executes specific Nailgun core methods that
 access database or other sources of information and retrieve data in the
 way this task wants it instead of adjusting the task to the only
 'astute.yaml'.


I like this idea, and to make things easier we may provide read-only access
for plugins, but I am not sure that everyone will agree to expose the
database to distributed task serializers. It may be quite fragile, and we
won't be able to change anything internally - consider a refactoring of
volumes or networks.

On the other hand, we could make a single public interface for the inventory
(this is what I call the part of Nailgun that is responsible for cluster
information storage) and use that interface (through the REST API?) in the
component that will be responsible for deployment serialization and
execution.

Basically, what I am saying is that we need to split Nailgun into
microservices, and then reuse that API in plugins or in config generators
right in the library.


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-22 Thread Evgeniy L
Hi Dmitry,

The problem with merging is that it's usually not clear how the system
performs the merge.
For example, you have the hash {'list': [{'k': 1}, {'k': 2}, {'k': 3}]}, and
I want {'list': [{'k': 4}]} to be merged. What should the system do? Replace
the list or append {'k': 4}? Both cases should be covered.

Most users don't remember all of the keys; usually a user gets the defaults
and changes some values in place. In that case we would have to ask the user
to remove the rest of the fields.

The only solution which I see is to separate the data from the graph and not
send this information to the user.

Thanks,

On Thu, Jan 22, 2015 at 5:18 PM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 Hi guys,

 I want to discuss the way we are working with deployment configuration
 that was redefined for a cluster.

 In case it was redefined via the API, we use that information instead of
 the generated one, with one exception: we will generate new repo sources
 and the path to the manifest if we are using update (the patching feature
 in 6.0).

 Starting from 6.1 this configuration will be populated by tasks, which are
 part of the granular deployment workflow, and replacement of the
 configuration will make it impossible to use the partial graph execution
 API. Of course it is possible to hack around it and make it work, but IMO
 we need a generic solution.

 Next problem: if a user uploads replaced information, changes to cluster
 attributes or networks won't be reflected in the deployment anymore, and
 this constantly leads to problems for deployment engineers that are using
 Fuel.

 What if the user wants to add data, and use the generated networks,
 attributes, etc.?
 - it may be required as part of a manual plugin installation (ha_fencing
 requires a lot of configuration to be added into astute.yaml),
 - or you may need to substitute networking data, e.g. add specific
 parameters for Linux bridges

 So given all this, I think that we should not substitute all the
 information, but only the part that is present in the redefined info; if
 there are additional parameters, they will simply be merged into the
 generated info.


