On Tue, Jan 27, 2015 at 10:47 AM, Vladimir Kuklin <vkuk...@mirantis.com> wrote:
> This is an interesting topic. As per our discussions earlier, I suggest
> that in the future we move to different serializers for each granule of our
> deployment, so that we do not need to drag a lot of senseless data into the
> particular task being executed. Say we have a fencing task, which has a
> serializer module written in Python. This module is imported by Nailgun, and
> what it actually does is execute specific Nailgun core methods that access
> the database or other sources of information and retrieve data in the way
> this task wants it, instead of adjusting the task to the one and only
> 'astute.yaml'.

I like this idea, and to make things easier we could provide read-only access for plugins, but I am not sure that everyone will agree to expose the database to distributed task serializers. It may be quite fragile, and we would not be able to change anything internally; consider a refactoring of volumes or networks. On the other hand, we could build a single public interface for the inventory (this is what I call the part of Nailgun responsible for storing cluster information) and use that interface (through a REST API?) in the component responsible for deployment serialization and execution. Basically, what I am saying is that we need to split Nailgun into microservices, and then reuse that API in plugins or in config generators right in the library.
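To make the idea concrete, here is a minimal sketch of what a per-task serializer behind a read-only inventory interface might look like. Every name here (InventoryClient, FencingSerializer, the /api/clusters/... path, the node fields) is an illustrative assumption, not Nailgun's actual API; the point is only that the task pulls the few fields it needs through an injected, read-only facade instead of importing Nailgun internals or receiving the whole astute.yaml.

```python
"""Hypothetical sketch: a granular task serializer fed by a read-only
inventory facade. All class names, URL paths, and node fields are
assumptions for illustration, not real Nailgun interfaces."""

import json


class InventoryClient:
    """Read-only facade over cluster inventory (e.g. backed by a REST API).

    The transport is injected as a callable (path -> parsed JSON), so the
    serializer never touches the database or Nailgun internals directly.
    """

    def __init__(self, fetch):
        self._fetch = fetch

    def get_nodes(self, cluster_id):
        # Assumed endpoint shape; a real API would define its own paths.
        return self._fetch(f"/api/clusters/{cluster_id}/nodes")


class FencingSerializer:
    """Serializer for one deployment granule: emits only the data the
    fencing task needs, instead of the full deployment blob."""

    def __init__(self, inventory):
        self.inventory = inventory

    def serialize(self, cluster_id):
        nodes = self.inventory.get_nodes(cluster_id)
        return {
            "task": "fencing",
            "nodes": [
                # Pick out just the fields fencing cares about.
                {"uid": n["uid"], "ipmi": n.get("ipmi", {})}
                for n in nodes
            ],
        }


# Usage with a stubbed transport -- no running Nailgun required:
def fake_fetch(path):
    return [
        {"uid": "node-1", "ipmi": {"address": "10.0.0.5"}},
        {"uid": "node-2"},  # no IPMI data configured
    ]


client = InventoryClient(fake_fetch)
data = FencingSerializer(client).serialize(cluster_id=42)
print(json.dumps(data))
```

Because the transport is injected, the same serializer could run inside Nailgun today and against a remote REST endpoint later, which is the migration path toward splitting the inventory out as a service.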
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev