This is an interesting topic. As we discussed earlier, I suggest that in
the future we move to a separate serializer for each granule of our
deployment, so that we do not need to drag a lot of irrelevant data into
the particular task being executed. Say we have a fencing task with a
serializer module written in Python. This module is imported by Nailgun,
and what it actually does is execute specific Nailgun core methods that
access the database or other sources of information and retrieve data in
the way this task wants it, instead of adjusting the task to the only
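The per-task serializer idea above could be sketched roughly as follows. This is only an illustration of the shape of such a module; the function name, the data-access convention, and the fencing fields are my own assumptions, not the real Nailgun API (a plain dict stands in for the database here):

```python
# Hypothetical per-task serializer module: each task ships one of these,
# Nailgun imports it and calls serialize() to build only the data that
# this particular task needs, instead of the whole astute.yaml blob.

def fencing_serializer(cluster):
    """Return only the fencing-related slice of the cluster data.

    `cluster` stands in for whatever Nailgun's core methods would
    retrieve from the database or other sources.
    """
    return {
        "fence_agents": [
            {"host": node["hostname"], "ip": node["ip"]}
            for node in cluster["nodes"]
            if node.get("fencing_enabled")
        ]
    }

cluster = {
    "nodes": [
        {"hostname": "node-1", "ip": "10.0.0.2", "fencing_enabled": True},
        {"hostname": "node-2", "ip": "10.0.0.3", "fencing_enabled": False},
    ]
}
print(fencing_serializer(cluster))
```

The point of the sketch is that the task owns its serializer, so no other task has to carry the fencing data around.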

On Thu, Jan 22, 2015 at 8:59 PM, Evgeniy L <> wrote:

> Hi Dmitry,
> The problem with merging is that it is usually not clear how the system
> performs the merge.
> For example, you have the hash {'list': [{'k': 1}, {'k': 2}, {'k': 3}]},
> and I want {'list': [{'k': 4}]} to be merged in. What should the system
> do? Replace the list, or append {'k': 4}?
> Both cases should be covered.
> Most users don't remember all of the keys; usually a user gets the
> defaults and changes some values in place, and in that case we would have
> to ask the user to remove the rest of the fields.
> The only solution I see is to separate the data from the graph and not
> send this information to the user.
> Thanks,
> On Thu, Jan 22, 2015 at 5:18 PM, Dmitriy Shulyak <>
> wrote:
>> Hi guys,
>> I want to discuss the way we work with deployment configuration that was
>> redefined for a cluster.
>> If it was redefined via the API, we use that information instead of the
>> generated one, with one exception: we still generate new repo sources and
>> the path to the manifest when using updates (the patching feature in 6.0).
>> Starting from 6.1 this configuration will be populated by tasks, which is
>> part of the granular deployment workflow, and replacing the configuration
>> wholesale will make it impossible to use the partial graph execution API.
>> Of course it is possible to hack around this and make it work, but in my
>> opinion we need a generic solution.
>> The next problem: if a user uploads replaced information, changes to
>> cluster attributes or networks won't be reflected in deployment anymore,
>> and this constantly causes problems for deployment engineers who use
>> Fuel.
>> What if the user wants to add data and still use the generated networks,
>> attributes, etc.?
>> - It may be required as part of a manual plugin installation (ha_fencing
>> requires a lot of configuration to be added to astute.yaml).
>> - Or the user needs to substitute networking data, e.g. add specific
>> parameters for Linux bridges.
>> So given all this, I think we should not substitute all of the
>> information, but only the part that is present in the redefined info; any
>> additional parameters would simply be merged into the generated info.
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
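To make the ambiguity Evgeniy raises concrete, here are the two merge policies his example allows, side by side. The function names are illustrative only; neither is what Nailgun actually does, which is exactly the question:

```python
# Two plausible policies for merging a user-supplied hash into the
# generated one. Evgeniy's example is ambiguous between them.

def merge_replace(base, override):
    """Top-level keys in override replace those in base, lists included."""
    merged = dict(base)
    merged.update(override)
    return merged

def merge_append(base, override):
    """Lists are concatenated instead of replaced; other values override."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(merged.get(key), list) and isinstance(value, list):
            merged[key] = merged[key] + value
        else:
            merged[key] = value
    return merged

base = {'list': [{'k': 1}, {'k': 2}, {'k': 3}]}
override = {'list': [{'k': 4}]}

print(merge_replace(base, override))
# {'list': [{'k': 4}]}
print(merge_append(base, override))
# {'list': [{'k': 1}, {'k': 2}, {'k': 3}, {'k': 4}]}
```

Both results are reasonable, which is why a merge-based API needs an explicit, documented policy (or per-key markers) before it can be safe to expose to users.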

Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia, <>
