Re: [openstack-dev] [Fuel] Getting rid of ISO

2016-09-07 Thread Oleg Gelbukh
Congratulations, Vladimir, that's a huge step in the right direction for Fuel.

--
Best regards,
Oleg Gelbukh

On Wed, Sep 7, 2016 at 6:47 AM, Vladimir Kozhukalov <vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> I'm glad to announce that we have working BVT jobs on Fuel CI that do not
> use the ISO but instead deploy the Fuel admin node from packages onto
> vanilla CentOS 7.
>
> Please take a look at [1]. There are jobs '10.0.repos.*' [2], [3], [4].
>
> We continue to work on re-implementing review jobs like this one [5] for
> example.
>
>
> [1] https://ci.fuel-infra.org/view/BVT/
> [2] https://ci.fuel-infra.org/view/BVT/job/10.0.repos.snapshot/
> [3] https://ci.fuel-infra.org/view/BVT/job/10.0.repos.main.ubuntu.bvt_2/
> [4] https://ci.fuel-infra.org/view/BVT/job/10.0.repos.main.ubuntu.smoke_neutron/
> [5] https://ci.fuel-infra.org/job/master.fuel-astute.pkgs.ubuntu.review_astute_patched/
>
>
>
>
> Vladimir Kozhukalov
>
> On Thu, Sep 1, 2016 at 1:13 PM, Roman Prykhodchenko <m...@romcheg.me> wrote:
>
>> This is so awesome! Thanks!
>>
>> On Tue, Aug 16, 2016 at 4:30 PM Jay Pipes <jaypi...@gmail.com> wrote:
>>
>>> On 08/16/2016 04:58 AM, Vladimir Kozhukalov wrote:
>>> > Dear colleagues,
>>> >
>>> > We finally have a working custom deployment job that deploys the Fuel
>>> > admin node using online RPM repositories (not an ISO) on vanilla
>>> > CentOS 7.0.
>>>
>>> Bravo! :)
>>>
>>> > Currently all Fuel system and deployment tests use the ISO, and we are
>>> > planning to re-implement all these jobs (including BVT, SWARM, and Fuel
>>> > CI jobs) to exclude the ISO from the pipeline. That will allow us to
>>> > get rid of the ISO as our deliverable and instead rely entirely on
>>> > package repositories. Linux distributions like Ubuntu, Debian, RHEL,
>>> > etc. are already delivered via ISO/qcow2/etc. images, and we'd better
>>> > stop reinventing the wheel by maintaining our own ISO build code. That
>>> > will allow us to make Fuel admin node deployment more flexible.
>>> >
>>> > I will inform you about our next steps in this thread.
>>>
>>> Thanks, Vova, this is an excellent step forward for ease-of-use with
>>> Fuel.
>>>
>>> Nice work,
>>> -jay


Re: [openstack-dev] [fuel-octane] Nominate Sergey Abramov to fuel-octane core

2016-07-21 Thread Oleg Gelbukh
+1 here

Sergey's performance and the quality of the code he has submitted are
impressive. Please keep going.

--
Best regards,
Oleg Gelbukh

On Thu, Jul 21, 2016 at 10:21 AM, Artur Svechnikov <asvechni...@mirantis.com> wrote:

> +1
>
> Best regards,
> Svechnikov Artur
>
> On Thu, Jul 21, 2016 at 12:10 AM, Ilya Kharin <ikha...@mirantis.com>
> wrote:
>
>> Hello,
>>
>> I would like to nominate Sergey Abramov to fuel-octane core due to his
>> significant contribution to the project [1] and [2].
>>
>> Best regards,
>> Ilya Kharin.
>>
>> [1] http://stackalytics.com/report/contribution/fuel-octane/90
>> [2]
>> http://stackalytics.com/?release=all&module=fuel-octane&metric=marks&user_id=sabramov


Re: [openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-04-15 Thread Oleg Gelbukh
Jeremy, thank you, that's excellent news. The Infra team is doing awesome
work to improve the processes in all possible ways.

Andreas, I will take a closer look, but it seems to be exactly what I had
in mind. Thanks for sharing!

--
Best regards,
Oleg Gelbukh

On Fri, Apr 15, 2016 at 10:29 AM, Andreas Jaeger <a...@suse.com> wrote:

> On 04/14/2016 06:30 PM, Jeremy Stanley wrote:
>
>> On 2016-04-14 12:57:38 +0300 (+0300), Oleg Gelbukh wrote:
>>
>>> The thread I'm referring to in the prev message is:
>>>
>>> http://lists.openstack.org/pipermail/openstack-infra/2014-January/000624.html
>>>
>>
>> At this point it's probably no longer a concern. We don't (and
>> haven't for some time) really support pip versions as old as the
>> ones which predate prerelease identification in their version
>> parsing, so we could probably just start running the same sdist
>> publication to PyPI for prereleases as we do for full release
>> version tags.
>>
>
> This one merged recently:
> https://review.openstack.org/300124
>
> I think it does what you describe - or is there anything else to do?
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>


Re: [openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-04-14 Thread Oleg Gelbukh
The thread I'm referring to in the previous message is:
http://lists.openstack.org/pipermail/openstack-infra/2014-January/000624.html

--
Best regards,
Oleg Gelbukh
Mirantis Inc.

On Thu, Apr 14, 2016 at 12:56 PM, Oleg Gelbukh <ogelb...@mirantis.com> wrote:

> Hi,
>
> I'm sorry for replying to this old thread, but I would really like to see
> this moving.
>
> There's a 'pre-release' pipeline in Zuul which serves exactly that
> purpose: handling pre-release tags (beta versions). However, per this
> thread, it is not recommended due to possible issues with pip being
> unable to differentiate pre-release versions from main releases.
>
> Another option here is to publish minor versions of the package, i.e.
> start with 9.0.0 early and then increase to 9.0.1, etc., as development
> progresses.
>
> --
> Best regards,
> Oleg Gelbukh
> Mirantis Inc.
>
> On Thu, Jan 21, 2016 at 11:52 AM, Yuriy Taraday <yorik@gmail.com>
> wrote:
>
>> By the way, it would be very helpful for testing external tools if we had
>> 7.0.1 release on PyPI as well. It seems python-fuelclient somehow ended up
>> with a "stable/7.0.1" branch instead of "7.0.1" tag.
>>
>> On Wed, Jan 20, 2016 at 2:49 PM Roman Prykhodchenko <m...@romcheg.me>
>> wrote:
>>
>>> Releasing a beta version sounds like a good plan but does OpenStack
>>> Infra actually support this?
>>>
>>> > On Jan 20, 2016, at 12:05, Oleg Gelbukh <ogelb...@mirantis.com> wrote:
>>> >
>>> > Hi,
>>> >
>>> > Currently we're experiencing issues with Python dependencies of our
>>> package (fuel-octane), specifically between fuelclient's dependencies and
>>> keystoneclient dependencies.
>>> >
>>> > New keystoneclient is required to work with the new version of Nailgun
>>> due to introduction of SSL in the latter. On the other hand, fuelclient is
>>> released along with the main release of Fuel, and the latest version
>>> available from PyPI is 7.0.0, and it has very old dependencies (based on
>>> packages available in centos6/python26).
>>> >
>>> > The solution I'd like to propose is to release a beta version of
>>> fuelclient (8.0.0b1) with updated requirements ASAP. With the --pre flag
>>> to pip/tox, this will allow running unit tests against the proper set of
>>> requirements. On the other hand, it will not break users consuming the
>>> latest stable (7.0.0) version with old requirements from PyPI.
>>> >
>>> > Please share your thoughts and considerations. If there are no
>>> objections, I will create a corresponding bug/blueprint against fuelclient
>>> to be fixed in the current release cycle.
>>> >
>>> > --
>>> > Best regards,
>>> > Oleg Gelbukh
>>> > Mirantis


Re: [openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-04-14 Thread Oleg Gelbukh
Hi,

I'm sorry for replying to this old thread, but I would really like to see
this moving.

There's a 'pre-release' pipeline in Zuul which serves exactly that purpose:
handling pre-release tags (beta versions). However, per this thread, it is
not recommended due to possible issues with pip being unable to
differentiate pre-release versions from main releases.

Another option here is to publish minor versions of the package, i.e. start
with 9.0.0 early and then increase to 9.0.1, etc., as development
progresses.

--
Best regards,
Oleg Gelbukh
Mirantis Inc.

On Thu, Jan 21, 2016 at 11:52 AM, Yuriy Taraday <yorik@gmail.com> wrote:

> By the way, it would be very helpful for testing external tools if we had
> 7.0.1 release on PyPI as well. It seems python-fuelclient somehow ended up
> with a "stable/7.0.1" branch instead of "7.0.1" tag.
>
> On Wed, Jan 20, 2016 at 2:49 PM Roman Prykhodchenko <m...@romcheg.me> wrote:
>
>> Releasing a beta version sounds like a good plan but does OpenStack Infra
>> actually support this?
>>
>> > On Jan 20, 2016, at 12:05, Oleg Gelbukh <ogelb...@mirantis.com> wrote:
>> >
>> > Hi,
>> >
>> > Currently we're experiencing issues with Python dependencies of our
>> package (fuel-octane), specifically between fuelclient's dependencies and
>> keystoneclient dependencies.
>> >
>> > New keystoneclient is required to work with the new version of Nailgun
>> due to introduction of SSL in the latter. On the other hand, fuelclient is
>> released along with the main release of Fuel, and the latest version
>> available from PyPI is 7.0.0, and it has very old dependencies (based on
>> packages available in centos6/python26).
>> >
>> > The solution I'd like to propose is to release a beta version of
>> fuelclient (8.0.0b1) with updated requirements ASAP. With the --pre flag to
>> pip/tox, this will allow running unit tests against the proper set of
>> requirements. On the other hand, it will not break users consuming the
>> latest stable (7.0.0) version with old requirements from PyPI.
>> >
>> > Please share your thoughts and considerations. If there are no
>> objections, I will create a corresponding bug/blueprint against fuelclient
>> to be fixed in the current release cycle.
>> >
>> > --
>> > Best regards,
>> > Oleg Gelbukh
>> > Mirantis


Re: [openstack-dev] [fuel][ConfigDB] Separating node and cluster serialized data

2016-04-01 Thread Oleg Gelbukh
Bogdan,

I mostly agree with you on this. The only data that might originate from a
node is discovery-related parameters, such as CPU/disk/NIC architecture and
the like.

However, at the moment the deployment data is partially generated at every
node (i.e. globals.yaml, override/plugins/*, and some other files) and is
not exposed externally in any way. But since this data is required to
integrate with 3rd-party configuration management tools, we are creating an
interim solution to make it available 'as is'.

This situation should change in the next few months, and then nodes will
move to a purely consumer role in the deployment data pipeline.

--
Best regards,
Oleg Gelbukh

On Fri, Apr 1, 2016 at 1:37 PM, Bogdan Dobrelya <bdobre...@mirantis.com> wrote:

> On 04/01/2016 10:41 AM, Oleg Gelbukh wrote:
> > Andrew,
> >
> > This is an excellent idea. It is apparently more efficient and
> > error-proof to make the split not on the resulting data but at the time
> > it is actually generated. We will play with this idea a little bit and
> > will come up with a design proposal shortly.
> >
> > Meanwhile, please be informed that we have already started testing the
> > solution based on the node-level data exposed via the ConfigDB API
> > extension for Nailgun [1] [2]. I will keep you updated on our progress
> > in that area.
>
> I strongly believe that nodes must only consume data, not provide it.
> And the data must be collected from its sources, which are Nailgun API
> extensions, like Andrew described.
>
> >
> > [1] Specification for Nailgun API for serialized facts
> > <https://review.openstack.org/284109>
> > [2] Spec for upload of deployment configuration to ConfigDB API
> > <https://review.openstack.org/286012>
> >
> > --
> > Best regards,
> > Oleg Gelbukh
> >
> > On Thu, Mar 31, 2016 at 11:19 PM, Andrew Woodward <xar...@gmail.com
> > <mailto:xar...@gmail.com>> wrote:
> >
> > One of the problems we've faced with trying to plug ConfigDB in is
> > trying to separate the cluster attributes from the node attributes
> > in the serialized output (i.e. astute.yaml)
> >
> > I started talking with Alex S about how we could separate them after
> > astute.yaml is prepared; trying to ensure which was which, we came
> > back uncertain that the results would be accurate.
> >
> > So I figured I'd go back to the source and see if there was a way to
> > know which keys belonged where. It turns out that we could solve the
> > problem in a simpler and more precise way than cutting them back
> > apart later.
> >
> > Looking over deployment_serializers.py [1], the serialized data
> > follows a simple workflow:
> >
> > iterate over every node in cluster
> >   if node is customized:
> > serialized_data = node.replaced_deployment_data
> >   else:
> > serialized_data = dict_merge(
> >   serialize_node(node),
> >   get_common_attrs(cluster))
> >
> > Taking this into mind, we can simply construct an extension to
> > expose these as APIs so that we can consume them as a task in the
> > deployment graph.
> >
> > Cluster:
> > We can simply expose
> > DeploymentMultinodeSerializer().get_common_attrs(cluster)
> >
> > This would then be plumbed to the cluster level in ConfigDB
> >
> > Node:
> > if a Node has customized data, then we can return that at the node
> > level; this continues to work the same as native since it most
> > likely has Cluster merged into it.
> >
> > otherwise we can return the node serialized with the first 'role'
> > the node has
> >
> > We would expose DeploymentMultinodeSerializer().serialize_node(node,
> > objects.Node.all_roles(node)[0])
> >
> > for our usage, we don't need to worry about the normal node role
> > combination, as the data only influences the 'role' and 'fail_if_error'
> > attributes, neither of which is consumed in the library.
> >
> >
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_serializers.py#L93-L121
> > --
> >
> > --
> >
> > Andrew Woodward
> >
> > Mirantis
> >
> > Fuel Community Ambassador
> >
> > Ceph Community
> >

Re: [openstack-dev] [fuel][ConfigDB] Separating node and cluster serialized data

2016-04-01 Thread Oleg Gelbukh
Andrew,

This is an excellent idea. It is apparently more efficient and error-proof
to make the split not on the resulting data but at the time it is actually
generated. We will play with this idea a little bit and will come up with a
design proposal shortly.

Meanwhile, please be informed that we have already started testing the
solution based on the node-level data exposed via the ConfigDB API
extension for Nailgun [1] [2]. I will keep you updated on our progress in
that area.

[1] Specification for Nailgun API for serialized facts
<https://review.openstack.org/284109>
[2] Spec for upload of deployment configuration to ConfigDB API
<https://review.openstack.org/286012>

--
Best regards,
Oleg Gelbukh

On Thu, Mar 31, 2016 at 11:19 PM, Andrew Woodward <xar...@gmail.com> wrote:

> One of the problems we've faced with trying to plug ConfigDB in is trying
> to separate the cluster attributes from the node attributes in the
> serialized output (i.e. astute.yaml)
>
> I started talking with Alex S about how we could separate them after
> astute.yaml is prepared; trying to ensure which was which, we came back
> uncertain that the results would be accurate.
>
> So I figured I'd go back to the source and see if there was a way to know
> which keys belonged where. It turns out that we could solve the problem in
> a simpler and more precise way than cutting them back apart later.
>
> Looking over deployment_serializers.py [1], the serialized data follows
> a simple workflow:
>
> iterate over every node in cluster
>   if node is customized:
> serialized_data = node.replaced_deployment_data
>   else:
> serialized_data = dict_merge(
>   serialize_node(node),
>   get_common_attrs(cluster))
>
> Taking this into mind, we can simply construct an extension to expose
> these as APIs so that we can consume them as a task in the deployment
> graph.
>
> Cluster:
> We can simply expose
> DeploymentMultinodeSerializer().get_common_attrs(cluster)
>
> This would then be plumbed to the cluster level in ConfigDB
>
> Node:
> if a Node has customized data, then we can return that at the node level;
> this continues to work the same as native since it most likely has
> Cluster merged into it.
>
> otherwise we can return the node serialized with the first 'role' the
> node has
>
> We would expose DeploymentMultinodeSerializer().serialize_node(node,
> objects.Node.all_roles(node)[0])
>
> for our usage, we don't need to worry about the normal node role
> combination, as the data only influences the 'role' and 'fail_if_error'
> attributes, neither of which is consumed in the library.
>
>
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_serializers.py#L93-L121
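>
> Putting the pieces above together, a rough sketch of the two lookups such
> an extension could expose (the wrapper names are hypothetical; only the
> serializer calls are taken from the code):
>
>     from nailgun import objects
>     from nailgun.orchestrator.deployment_serializers import (
>         DeploymentMultinodeSerializer,
>     )
>
>     def get_cluster_attrs(cluster):
>         # Cluster-level data, plumbed to the cluster level in ConfigDB.
>         return DeploymentMultinodeSerializer().get_common_attrs(cluster)
>
>     def get_node_attrs(node):
>         # Customized data already has the cluster attrs merged in.
>         if node.replaced_deployment_data:
>             return node.replaced_deployment_data
>         # Otherwise serialize with the node's first role; the role only
>         # influences 'role' and 'fail_if_error', which we don't consume.
>         return DeploymentMultinodeSerializer().serialize_node(
>             node, objects.Node.all_roles(node)[0])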
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community


Re: [openstack-dev] [Fuel] FFE request for ConfigDB service

2016-03-29 Thread Oleg Gelbukh
Greetings,

Please be informed that the source code of the Nailgun API extension has
landed in the designated repository [1]. The project was code-named
'tuning-box'. We are working to integrate it into the build and testing
systems provided by Fuel Infra. I suggest that the FFE be closed.

I'd like to thank the community for the trust you've put in us. I hope we
have laid a foundation for a more flexible and modular architecture for
future Fuel versions.

Sorry for the delay with this heads-up.

[1] https://git.openstack.org/openstack/tuning-box.git

--
Best regards,
Oleg Gelbukh

On Fri, Mar 4, 2016 at 12:27 AM, Dmitry Borodaenko <dborodae...@mirantis.com> wrote:

> Granted, merge deadline March 24, no impact expected in core components
> (fuel-library, fuel-web, fuel-ui).
>
> --
> Dmitry Borodaenko
>
>
> On Tue, Mar 01, 2016 at 04:22:05PM +0300, Oleg Gelbukh wrote:
> > Greetings,
> >
> > As you might know, we are working on a centralised storage for
> > deployment configuration data in Fuel. Such a store will allow external
> > 3rd-party services to consume the entirety of the settings provided by
> > Fuel to deployment mechanisms on target nodes. It will also allow
> > managing and overriding the settings via a simple client application.
> >
> > This change is required to enable Puppet Master based LCM solution.
> >
> > We request a FFE for this feature for 3 weeks, until Mar 24. By that
> > time, we will provide a tested solution in accordance with the following
> > specifications [1] [2].
> >
> > The feature includes 3 main components:
> > 1. An extension to the Nailgun API with a separate DB structure to
> > store serialized data
> > 2. A backend library for Hiera that consumes the API in question to
> > look up values of certain parameters
> > 3. An Astute task to download all serialized data from nodes and
> > upload it to the ConfigDB API upon successful deployment of a cluster
> >
> > Since the introduction of stevedore-based extensions [3], we can develop
> > extensions in separate code repos. This makes the change to Nailgun
> > non-intrusive to the core code.
> > The backend library will be implemented in the fuel-library code tree
> > and packaged as a sub-package. This change also doesn't require changes
> > in the core code.
> > The Astute task will add a task to the flow. We will make this task
> > configurable, i.e. normally this code path won't be used at all. It
> > also won't touch the core code of Astute.
> >
> > Overall, I consider this change low-risk for the integrity and timeline
> > of the release.
> >
> > Please consider our request and share any concerns so we can properly
> > resolve them.
> >
> > [1]
> https://blueprints.launchpad.net/fuel/+spec/upload-deployment-facts-to-configdb
> > [2]
> https://blueprints.launchpad.net/fuel/+spec/serialized-facts-nailgun-api
> > [3]
> https://blueprints.launchpad.net/fuel/+spec/stevedore-extensions-discovery
> >
> > --
> > Best regards,
> > Oleg Gelbukh
> > Mirantis Inc.


[openstack-dev] [Fuel] FFE request for ConfigDB service

2016-03-01 Thread Oleg Gelbukh
Greetings,

As you might know, we are working on a centralised storage for deployment
configuration data in Fuel. Such a store will allow external 3rd-party
services to consume the entirety of the settings provided by Fuel to
deployment mechanisms on target nodes. It will also allow managing and
overriding the settings via a simple client application.

This change is required to enable Puppet Master based LCM solution.

We request a FFE for this feature for 3 weeks, until Mar 24. By that time,
we will provide a tested solution in accordance with the following
specifications [1] [2].

The feature includes 3 main components:
1. An extension to the Nailgun API with a separate DB structure to store
serialized data
2. A backend library for Hiera that consumes the API in question to look up
values of certain parameters
3. An Astute task to download all serialized data from nodes and upload it
to the ConfigDB API upon successful deployment of a cluster

Since the introduction of stevedore-based extensions [3], we can develop
extensions in separate code repos. This makes the change to Nailgun
non-intrusive to the core code.
The backend library will be implemented in the fuel-library code tree and
packaged as a sub-package. This change also doesn't require changes in the
core code.
The Astute task will add a task to the flow. We will make this task
configurable, i.e. normally this code path won't be used at all. It also
won't touch the core code of Astute.
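For illustration, a sketch of how such a separate-repo extension could
register itself. This is a hypothetical setup.py; the entry-point namespace
and class path are assumptions (whatever namespace Nailgun's stevedore
discovery actually scans), not the real configuration:

    from setuptools import setup

    setup(
        name="fuel-configdb-extension",
        version="0.1.0",
        packages=["configdb_extension"],
        entry_points={
            "nailgun.extensions": [
                "configdb = configdb_extension.extension:Extension",
            ],
        },
    )

Nailgun then discovers the extension through the entry point at runtime, so
no change to the core repository is needed.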

Overall, I consider this change low-risk for the integrity and timeline of
the release.

Please consider our request and share any concerns so we can properly
resolve them.

[1] 
https://blueprints.launchpad.net/fuel/+spec/upload-deployment-facts-to-configdb
[2] https://blueprints.launchpad.net/fuel/+spec/serialized-facts-nailgun-api
[3] https://blueprints.launchpad.net/fuel/+spec/stevedore-extensions-discovery

--
Best regards,
Oleg Gelbukh
Mirantis Inc.



Re: [openstack-dev] [fuel] Supporting multiple Openstack versions

2016-02-11 Thread Oleg Gelbukh
Hi,

The Octane team has some issues with the lack of a definition of what a
'release' is in Fuel (in terms of managed environments). I started an
etherpad [1] to summarize the entities/artifacts that constitute a
'release' at the moment. Based on this definition, we can localize and
define entry points where the actual 'release'-specific code could be
separated from the universal frameworks in Nailgun and other components of
Fuel. Thus, we could better manage different releases with a single version
of Nailgun code, which is essential for the upgrade workflow.

I agree with Alex that at least part of the composition layer is bound to
be 'release'-specific, especially the Puppet modules that deploy OpenStack
and other managed components. On the other hand, certain parts of
fuel-library have nothing to do with the managed 'release' and concern the
Fuel Admin node itself. It would be useful to make the separation along
those lines as well.

[1] https://etherpad.openstack.org/p/fuel-release-definition

--
Best regards,
Oleg Gelbukh

On Thu, Feb 11, 2016 at 12:02 PM, Aleksandr Didenko <adide...@mirantis.com> wrote:

> Hi,
>
> > So what is open? The composition layer.
>
> We can have different composition layers for every release and it's
> already implemented in releases - separate puppet modules/manifests dir for
> every release.
>
> > Currently, we just abandon support for previous versions in the
> > composition layer and leave them to only be monuments in the stable/
> > series branches for maintenance. If we instead started making changes
> > (forwards or backwards) that change the calls based on the OpenStack
> > version [5], then we would be able to change the calls based on the
> > needs of that release and the puppet-openstack modules we are working
> > with.
>
> So we'll have tons of conditionals in the composition layer, right? Even
> if some puppet-openstack class has just one new parameter in a new
> release, we'll have to write a conditional and duplicate the class
> declaration, or write complex parameter hash definitions/merges and use
> create_resources(). The more releases we want to support, the more
> complicated the composition layer will become. That won't make
> contributing to fuel-library easier and can even greatly reduce
> development speed. Also, are we going to add new features to stable
> releases using this workflow with a single composition layer?
>
> > Testing master while keeping stable. Given the ability to conditionalize
> > the source of OpenStack bits and the versions of manifests, we can start
> > testing master while keeping stable healthy. This would help accelerate
> > both Fuel development and deploying and testing development versions of
> > OpenStack.
>
> I'm sorry, but I don't see how we can accelerate things by making the
> composition layer more and more complicated. If we're going to run CI and
> swarm for all of the supported releases on the ISO, that would rather
> drastically decrease the speed of development and testing. Also, aren't we
> "testing both master and keeping health on stable" right now by running
> tests for master and stable versions of Fuel?
>
> > Deploying stable and upgrading later. Again, given the ability to deploy
> > multiple OpenStack versions within the same Fuel version, teams focused
> > on upgrades can take advantage of the latest enhancements in Fuel to
> > work on the upgrade process more easily; as an added benefit, this would
> > eventually lead to better support for end-user upgrades too.
>
> Using the same composition layers is not required for this. Also, how does
> it differ from the current upgrade procedure? When you have, for instance,
> a 7.0 release and then upgrade to 8.0, the result is basically the same:
> you have two releases in Fuel, 2 directories with manifests, and 2 repos
> with packages.
>
> > Deploying older versions. In the odd case that we need to take advantage
> > of older OpenStack releases, as with Kilo on a newer version of Fuel, we
> > can easily maintain that version too, since we can keep the older cases
> > around in the composition layer without adding much burden on the other
> > components.
>
> Using the same composition layers is not required for this; "we can keep
> the older cases around" in the composition layer of the previous version.
>
> Also, how many releases are we going to support? All of them starting from
> Kilo? What about ISO size? What about the impact on CI, infra (required
> HW), acceptance testing, etc.?
>
> Regards,
> Alex
>
>
>
> On Thu, Feb 11, 2016 at 6:57 AM, Andrew Woodward <xar...@gmail.com> wrote:
>
>> Right now master (targeting 9.0) is still deploying liberty and there is
>> active work going on to support both Kilo and Mitaka. On the review queue
>> are changes that 

[openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-01-20 Thread Oleg Gelbukh
Hi,

Currently we're experiencing issues with the Python dependencies of our
package (fuel-octane), specifically conflicts between fuelclient's and
keystoneclient's dependencies.

A new keystoneclient is required to work with the new version of Nailgun
due to the introduction of SSL in the latter. On the other hand, fuelclient
is released along with the main release of Fuel; the latest version
available from PyPI is 7.0.0, and it has very old dependencies (based on
packages available in centos6/python26).

The solution I'd like to propose is to release a beta version of fuelclient
(8.0.0b1) with updated requirements ASAP. With the --pre flag to pip/tox,
this will allow running unit tests against the proper set of requirements.
On the other hand, it will not break users consuming the latest stable
(7.0.0) version with old requirements from PyPI.
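
To illustrate the mechanics (a toy example using the 'packaging' library,
not fuelclient code): PEP 440 pre-release versions sort below the final
release, and pip ignores them unless --pre is passed.

    from packaging.version import Version

    stable = Version("7.0.0")
    beta = Version("8.0.0b1")
    final = Version("8.0.0")

    assert stable < beta < final  # 8.0.0b1 sorts between 7.0.0 and 8.0.0
    assert beta.is_prerelease     # pip skips such versions by default
    assert not final.is_prerelease

So 'pip install python-fuelclient' keeps resolving to 7.0.0, while 'pip
install --pre python-fuelclient' would pick up 8.0.0b1.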

Please share your thoughts and considerations. If there are no objections,
I will create a corresponding bug/blueprint against fuelclient to be fixed
in the current release cycle.

--
Best regards,
Oleg Gelbukh
Mirantis


Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is broken due to lack of disk space

2016-01-12 Thread Oleg Gelbukh
I think we need to find a way to:

1) verify the size of the snapshot without actually creating it, and
compare it to the available disk space beforehand;
2) refuse to create the snapshot if space is insufficient, and notify the
user (otherwise it breaks the Admin node, as we have seen);
3) provide a way to prioritize elements of the snapshot and exclude them
based on those priorities or user choice.

This will allow for better and safer UX with the snapshot.
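
A minimal sketch of the pre-flight check from point 1; the function name
and the 1.2 headroom factor are illustrative assumptions, not Nailgun code:

    import shutil

    def enough_space_for_snapshot(target_dir, estimated_bytes,
                                  headroom=1.2):
        # estimated_bytes would come from summing the sizes of the log
        # sources before archiving; headroom covers archive metadata and
        # logs growing while the dump runs.
        free = shutil.disk_usage(target_dir).free
        return free >= estimated_bytes * headroom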

--
Best regards,
Oleg Gelbukh

On Tue, Jan 12, 2016 at 1:47 PM, Maciej Kwiek <mkw...@mirantis.com> wrote:

> Hi!
>
> I need some advice on how to tackle this issue. There is a bug [1]
> describing the problem with creating a diagnostic snapshot. The issue is
> that /var/log has 100GB available, while /var (where the diagnostic
> snapshot is generated - /var/www/nailgun/dump/fuel-snapshot according to
> [2]) has only 10GB available, so dumping the logs can be an issue when the
> log size exceeds the free space in /var.
>
> There are several things we could do, but I am unsure which course to
> take. Should we:
> a) Allocate more disk space for /var/www (or for the whole /var)?
> b) Make the snapshot location share the disk space of /var/log?
> c) Something else? What?
>
> Please share your thoughts on this.
>
> Cheers,
> Maciej Kwiek
>
> [1] https://bugs.launchpad.net/fuel/+bug/1529182
> [2]
> https://github.com/openstack/fuel-web/blob/2855a9ba925c146b4802ab3cd2185f1dce2d8a6a/nailgun/nailgun/settings.yaml#L717


Re: [openstack-dev] [Fuel] Removal of support for nova-network

2015-12-22 Thread Oleg Gelbukh
Sergii,

Nailgun will still have data of clusters with old releases, should they be
in the database backup. And it still has to be able to manage them.

--
Best regards,
Oleg Gelbukh

On Tue, Dec 22, 2015 at 11:58 AM, Sergii Golovatiuk <sgolovat...@mirantis.com> wrote:

> Hi,
>
> There won't be an upgrade to 8.0. Users will be able to back up and load
> data to a new master node. nova-network has been deprecated for 2
> releases, so we can remove it. If we remove it, we can remove tests from
> acceptance testing as well as from auto-tests; this reduces tech debt and
> will free our QA/CI resources to focus on other tests.
>
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Tue, Dec 22, 2015 at 12:28 AM, Evgeniy L <e...@mirantis.com> wrote:
>
>> Hi,
>>
>> We mustn't touch Nailgun's logic, otherwise after an upgrade the user
>> won't be able to manage her/his old nova cluster. So let's just remove it
>> from the UI.
>> Also, as far as I know, we should provide a way to manage old clusters
>> not for one release, but for a couple of years.
>>
>> Thanks,
>>
>> On Tue, Dec 22, 2015 at 10:40 AM, Igor Kalnitsky <ikalnit...@mirantis.com
>> > wrote:
>>
>>> I don't think it's a good idea to drop support of 7.0 nova-network
>>> setup in 8.0. We should keep compatibility for at least one release.
>>>
>>> On Tue, Dec 22, 2015 at 9:15 AM, Aleksey Kasatkin
>>> <akasat...@mirantis.com> wrote:
>>> > Sergii,
>>> >
>>> > We could remove it completely from nailgun if support for 7.0 and
>>> earlier is
>>> > not required.
>>> >
>>> >
>>> > Aleksey Kasatkin
>>> >
>>> >
>>> > On Tue, Dec 22, 2015 at 3:27 AM, Sergii Golovatiuk
>>> > <sgolovat...@mirantis.com> wrote:
>>> >>
>>> >> Hi,
>>> >>
>>> >> Finally we can deprecate nova-network ...
>>> >> We should remove it from UI, nailgun logic and tests to have less
>>> >> technical debt.
>>> >>
>>> >> --
>>> >> Best regards,
>>> >> Sergii Golovatiuk,
>>> >> Skype #golserge
>>> >> IRC #holser
>>> >>
>>> >> On Mon, Dec 21, 2015 at 5:01 PM, Sheena Gregson <
>>> sgreg...@mirantis.com>
>>> >> wrote:
>>> >>>
>>> >>> Hey guys –
>>> >>>
>>> >>>
>>> >>>
>>> >>> I know this has been a topic of a lot of discussion – Adrian
>>> informed me
>>> >>> on Friday that QA has confirmed the multi-hypervisor use case has
>>> been
>>> >>> tested successfully without nova-network, so we can finally
>>> deprecate it!
>>> >>>
>>> >>>
>>> >>>
>>> >>> Users who want to deploy multiple hypervisors will need to use the
>>> Fuel
>>> >>> DVS plugin (Neutron ML2 driver) to support their vCenter computes
>>> and the
>>> >>> KVM/QEMU computes can use Neutron + GRE/VXLAN.
>>> >>>
>>> >>>
>>> >>>
>>> >>> I’ve created a kind of “cover all the things” bug here:
>>> >>> https://bugs.launchpad.net/fuel/+bug/1528407.  Given the state of
>>> >>> nova-network right now in Fuel, I have marked it as Critical.
>>> >>>
>>> >>>
>>> >>>
>>> >>> Let’s start the conversation on here and make sure all the bases are
>>> >>> covered – if additional bugs need to be logged or there’s
>>> administrative
>>> >>> overhead, let me know and I’ll be happy to help out!
>>> >>>
>>> >>>
>>> >>>
>>> >>> Sheena Gregson | Sr. Product Manager | Mirantis
>>> >>>
>>> >>> p: +1 650 646 3302 | e: sgreg...@mirantis.com

Re: [openstack-dev] [Fuel][Solar] SolarDB/ConfigDB place in Fuel

2015-12-21 Thread Oleg Gelbukh
The problem with this approach is that we can't manage the interfaces
between components without changing the code of 2+ components (i.e. the
one that provides the data and all those that consume it).

I also don't like the polling model for data processors. In my
understanding, components should push their changes through the pipeline,
although this is a pure implementation detail and not really important ATM.

The point is that for Solar integration we still need integration points,
and the fewer of them we have, the simpler the transition is going to be.
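
To make the push model concrete, here is a toy sketch (all names are
illustrative, not Solar or Nailgun code): components publish their changes,
and each registered processor writes them to its own store (Solar,
ZooKeeper, a CMDB, etc.) instead of polling for them.

    class DataPipeline:
        def __init__(self):
            self._processors = []

        def subscribe(self, processor):
            self._processors.append(processor)

        def push(self, component_name, data):
            # Components call this when their data changes, instead of
            # being polled by each processor.
            for processor in self._processors:
                processor.handle(component_name, data)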

--
Best regards,
Oleg Gelbukh

On Mon, Dec 21, 2015 at 11:32 AM, Evgeniy L <e...@mirantis.com> wrote:

> Hi Oleg,
>
> I understand the concern, but in the case of integration specifically
> with Solar, I don't see any reason to add ConfigDB, because Solar by
> itself is a ConfigDB.
> At the same time, I would agree that there might be a case when the user
> uses ZooKeeper/Puppet Master/a CMDB as a data store; in that case we
> should store the data directly in those services, without keeping it in
> yet another storage.
>
> So the flow will look like this:
> Components ->
> Data get polled by data processors ->
> | Solar data processor puts the data into Solar in its format
> | Zookeeper data processor puts the data into Zookeeper in its format
> | Custom CMDB data processor puts the data into CMDB in its own format
>
> Thanks,
>
> On Fri, Dec 18, 2015 at 7:00 PM, Oleg Gelbukh <ogelb...@mirantis.com>
> wrote:
>
>> Hi,
>>
>> The idea behind ConfigDB is that it is an independent component that
>> defines data flows between other components of the system. It has no
>> knowledge about those components or the specifics of their data. Data
>> formats are defined by the components themselves via schemas/templates
>> and can be changed at any time (i.e. they don't require code changes).
>>
>> An important 'pro' of having ConfigDB separate from Solar is that it will
>> simplify the transition from the current Fuel architecture by breaking it
>> into more definite stages and reducing the number of components Solar has
>> to be integrated with.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Wed, Dec 16, 2015 at 4:38 PM, Evgeniy L <e...@mirantis.com> wrote:
>>
>>> Hi Dmitry,
>>>
>>> I also don't think that we should duplicate the data in configdb,
>>> because in this case there will be +2 additional interfaces which
>>> will require converting the data into configdb and after that from
>>> configdb to Solar, which seems like redundant overhead.
>>>
>>> But we should be able to put the data directly to user's
>>> CMDB/ZooKeeper/Puppet Master/etc.
>>>
>>> Thanks,
>>>
>>> On Wed, Dec 16, 2015 at 2:03 AM, Dmitriy Shulyak <dshul...@mirantis.com>
>>> wrote:
>>>
>>>> Hello folks,
>>>>
>>>> This topic is about configuration storage which will connect data
>>>> sources (nailgun/bareon/others) and orchestration. And right now we are
>>>> developing two projects that will overlap a bit.
>>>>
>>>> I understand there is not enough context to dive into this thread right
>>>> away, but i will appreciate if those people, who participated in design,
>>>> will add their opinions/clarifications on this matter.
>>>>
>>>> Main disagreements
>>>> ---
>>>> 1. configdb should be passive, writing to configdb is someone else
>>>> responsibility
>>>> + simpler implementation, easier to use
>>>> - we will need another component that will do writing, or split this
>>>> responsibility somehow
>>>>
>>>> 2. can be used without other solar components
>>>> + clear interface between solar components and storage layer
>>>> - additional work required to design/refactor communication layer
>>>> between modules in solar
>>>> - some data will be duplicated between solar orchestrator layer and
>>>> configdb
>>>>
>>>> 3. templates for output
>>>> technical detail, can be added on top of solardb if required
>>>>
>>>> Similar functionality
>>>> --
>>>> 1. Hierarchical storage
>>>> 2. Versioning of changes
>>>> 3. Possibility to overwrite config values
>>>> 4. Schema for inputs
>>>>
>>>> Overall it seems that we share same goals for both services,
>>>> the difference lies in organizational and technical implementation
>&

Re: [openstack-dev] [Fuel][Solar] SolarDB/ConfigDB place in Fuel

2015-12-18 Thread Oleg Gelbukh
Hi,

The idea behind ConfigDB is that it is an independent component that
defines data flows between other components of the system. It has no
knowledge about those components or the specifics of their data. Data
formats are defined by the components themselves via schemas/templates and
can be changed at any time (i.e. they don't require code changes).

An important 'pro' of having ConfigDB separate from Solar is that it will
simplify the transition from the current Fuel architecture by breaking it
into more definite stages and reducing the number of components Solar has
to be integrated with.

--
Best regards,
Oleg Gelbukh

On Wed, Dec 16, 2015 at 4:38 PM, Evgeniy L <e...@mirantis.com> wrote:

> Hi Dmitry,
>
> I also don't think that we should duplicate the data in configdb,
> because in this case there will be +2 additional interfaces which
> will require converting the data into configdb and after that from
> configdb to Solar, which seems like redundant overhead.
>
> But we should be able to put the data directly to user's
> CMDB/ZooKeeper/Puppet Master/etc.
>
> Thanks,
>
> On Wed, Dec 16, 2015 at 2:03 AM, Dmitriy Shulyak <dshul...@mirantis.com>
> wrote:
>
>> Hello folks,
>>
>> This topic is about configuration storage which will connect data sources
>> (nailgun/bareon/others) and orchestration. And right now we are developing
>> two projects that will overlap a bit.
>>
>> I understand there is not enough context to dive into this thread right
>> away, but i will appreciate if those people, who participated in design,
>> will add their opinions/clarifications on this matter.
>>
>> Main disagreements
>> ---
>> 1. configdb should be passive, writing to configdb is someone else
>> responsibility
>> + simpler implementation, easier to use
>> - we will need another component that will do writing, or split this
>> responsibility somehow
>>
>> 2. can be used without other solar components
>> + clear interface between solar components and storage layer
>> - additional work required to design/refactor communication layer between
>> modules in solar
>> - some data will be duplicated between solar orchestrator layer and
>> configdb
>>
>> 3. templates for output
>> technical detail, can be added on top of solardb if required
>>
>> Similar functionality
>> --
>> 1. Hierarchical storage
>> 2. Versioning of changes
>> 3. Possibility to overwrite config values
>> 4. Schema for inputs
>>
>> Overall it seems that we share same goals for both services,
>> the difference lies in organizational and technical implementation
>> details.
>>
>> Possible solutions
>> 
>> 1. develop configdb and solar with duplicated functionality
>> - at least 2 additional components will be added to the picture,
>> one is configdb, another one will need to sync data between configdb and
>> solar
>> - in some cases data in solar and configdb will be 100% duplicated
>> - different teams will work on same functionality
>> - integration of additional component for fuel will require integration
>> with
>> configdb and with solar
>> + configdb will be independent from solar orchestration/other components
>>
>> 2. make service out of solardb, align with configdb use cases
>> + solardb will be independent from solar orchestration/other solar
>> components
>> + integration of fuel component will be easier than in 1st version
>> + clarity about components responsibility and new architecture
>> - redesign/refactoring communication between components in solar
>>
>> 3. do not use configdb/no extraction of solardb
>> - inproc communication, which can lead to coupled components (not the
>> case currently)
>> + faster implementation (no major changes required for integration with
>> fuel)
>> + clarity about components responsibility and new architecture
>>
>> Summary
>> -
>> For solar it makes no difference where data will come from: configdb or
>> data sources, but in overall fuel architecture it will lead to significant
>> complexity increase.
>> It would be the best to follow 2nd path, because in long term we don't
>> want tightly coupled components, but in nearest future we need to
>> concentrate
>> on:
>> - integration with fuel
>> - implementing policy engine
>> - polishing solar components
>> This is why i am not sure that we can spend time on 2nd path right now,
>> or even before 9.0.
>>

Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-17 Thread Oleg Gelbukh
In fact, it seems that 9.2 has been in the mix since the introduction of
CentOS 7. Thus, all tests made since then have run against 9.2. So
upgrading it to 9.3 is actually a change that has to be blocked by FF/SCF.

Just my 2c.

--
Best regards,
Oleg Gelbukh

On Thu, Dec 17, 2015 at 12:13 PM, Evgeniy L <e...@mirantis.com> wrote:

> Hi Andrew,
>
> It doesn't look fair at all to say that we use Postgres-specific features
> for no reason or, as you said, "just because we want".
> For example, we used Arrays, which fit our roles usage pretty well and
> improved readability and performance.
> Or try to fit something like [1] into a relational system; I don't think
> we will get a good result.
>
> P.S. Sending a link to a holy-war topic (schema vs. schemaless) won't
> help solve our specific problem of downgrading Postgres vs. keeping the
> old (new) version.
>
> [1]
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml
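>
> For illustration, a toy model contrasting the two column kinds under
> discussion (not the actual Nailgun schema; it assumes SQLAlchemy's
> generic JSON type, which landed after this thread in SQLAlchemy 1.1):
>
>     from sqlalchemy import JSON, Column, Integer, String
>     from sqlalchemy.dialects.postgresql import ARRAY
>     from sqlalchemy.ext.declarative import declarative_base
>
>     Base = declarative_base()
>
>     class Node(Base):
>         __tablename__ = "nodes"
>         id = Column(Integer, primary_key=True)
>         # PostgreSQL-specific, but a natural fit for role lists:
>         roles = Column(ARRAY(String))
>         # Semi-structured data such as the fixture in [1]:
>         meta = Column(JSON)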
>
>
> On Tue, Dec 15, 2015 at 10:53 PM, Andrew Maksimov <amaksi...@mirantis.com>
> wrote:
>
>> +1 to Igor's suggestion to downgrade Postgres to 9.2. Our users don't
>> work directly with Postgres, so there is no deprecation of Fuel features.
>> Maintaining our own custom Postgres package just because we want "JSON
>> columns" is not a rational decision. Come on, Fuel is not a billing
>> system with thousands of tables and special database requirements. At
>> least, we should try to keep it simple and avoid unnecessary complication.
>>
>> PS
>> BTW, some people suggest avoiding JSON columns; read [1], "PostgreSQL
>> anti-patterns: unnecessary json/hstore dynamic columns".
>>
>> [1] -
>> http://blog.2ndquadrant.com/postgresql-anti-patterns-unnecessary-jsonhstore-dynamic-columns/
>>
>> Regards,
>> Andrey Maximov
>> Fuel Project Manager
>>
>>
>> On Tue, Dec 15, 2015 at 9:34 PM, Vladimir Kuklin <vkuk...@mirantis.com>
>> wrote:
>>
>>> Folks
>>>
>>> Let me add my 2c here.
>>>
>>> I am for using Postgres 9.3. Here is an additional argument to the ones
>>> provided by Artem, Aleksandra and others.
>>>
>>> Fuel is sometimes highly customized by our users for their specific
>>> needs. It has been on Postgres 9.3 for a while, and they may well have
>>> gotten used to it and assumed by default that this would not change.
>>> So some of the features they are developing for their own sake may
>>> depend on Postgres 9.3, and we will never be able to tell the fraction
>>> of such use cases. Moreover, downgrading the DBMS version of Fuel should
>>> inevitably be considered a 'deprecation' of some features our software
>>> suite is providing to our users. This actually means that we MUST
>>> provide our users with a warning and a deprecation period to allow them
>>> to adjust to these changes. Obviously, an accidental change of the
>>> Postgres version does not follow such a policy in any way. So I see no
>>> way except to get back to Postgres 9.3.
>>>
>>>
>>> On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky <ikalnit...@mirantis.com
>>> > wrote:
>>>
>>>> Hey Mike,
>>>>
>>>> Thanks for your input.
>>>>
>>>> > actually not.  if you replace your ARRAY columns with JSON entirely,
>>>>
>>>> It still requires fixing the code, i.e. replacing ARRAY-specific
>>>> queries with JSON ones around the code. ;)
>>>>
>>>> > there's already a mostly finished PR for SQLAlchemy support in the
>>>> queue.
>>>>
>>>> Does it mean SQLAlchemy will have one unified interface to make JSON
>>>> queries? So we can use different backends if necessary?
>>>>
>>>> Thanks,
>>>> - Igor
>>>>
>>>> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer <mba...@redhat.com> wrote:
>>>> >
>>>> >
>>>> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
>>>> >> Hey Julien,
>>>> >>
>>>> >>>
>>>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
>>>> >>
>>>> >> I believe this blueprint is about DB for OpenStack cloud (we use
>>>> >> Galera now), while here we're talking about DB backend for Fuel
>>>> >> itself. Fuel has a separate node (so called Fuel Master) and we use
>>>> >> Postgre

Re: [openstack-dev] [Fuel] Proposal to Delay Docker Removal From Fuel Master Node

2015-12-17 Thread Oleg Gelbukh
Evgeniy,

True, and I fully support merging this particular change as soon as
possible, i.e. the moment 'master' opens for 9.0 development.

-Oleg

On Thu, Dec 17, 2015 at 12:28 PM, Evgeniy L <e...@mirantis.com> wrote:

> Hi Oleg,
>
> With the same degree of confidence we can say that anything we have at
> the beginning of the release cycle is not urgent enough. We pushed early
> branching specifically for such big changes as the Docker removal,
> changing the repo structure, and merging invasive patches for new release
> features.
>
> Vladimir Kuklin,
>
> I'm not sure what you mean by "fixing 2 different environments". An
> environment without containers will simplify the debugging process.
>
> Thanks,
>
> On Wed, Dec 16, 2015 at 10:12 PM, Oleg Gelbukh <ogelb...@mirantis.com>
> wrote:
>
>> Hi
>>
>> Although I agree that it should be done, the removal of Docker doesn't
>> seem an urgent feature to me. It is not blocking anything besides moving
>> to a fully package-based deployment of Fuel, as far as I understand. So
>> it could easily be delayed by one milestone, especially since it is
>> already almost done and submitted for review, so it could be merged
>> quickly before any other significant changes land in 'master' once it
>> is open.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Wed, Dec 16, 2015 at 8:56 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Vladimir,
>>>
>>> I have other activities planned for the time immediately after SCF
>>> (separating the UI from fuel-web, which may be even more invasive :-)),
>>> and it is not a big deal to postpone this feature or another. But I am
>>> against the very approach of postponing something because it is too
>>> invasive. Once we create the stable branch, master becomes open. That
>>> was our primary intention, to open master sooner rather than later,
>>> when we decided to move stable branch creation.
>>>
>>>
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Wed, Dec 16, 2015 at 8:28 PM, Vladimir Kuklin <vkuk...@mirantis.com>
>>> wrote:
>>>
>>>> Vladimir
>>>>
>>>> I am pretty much for removing Docker, but I do not think that we
>>>> should startle our developers/QA folks with the additional effort of
>>>> fixing 2 different environments. Let's just think from the point of
>>>> view of development velocity here and delay such changes until at
>>>> least after NY. Because if we do it immediately after SCF there will
>>>> be a whole bunch of holidays (Russian holidays are Jan 1st-10th), and
>>>> you (the SME for the Docker removal) will be offline. Do you really
>>>> want to fix things instead of enjoying the holidays?
>>>>
>>>> On Wed, Dec 16, 2015 at 4:09 PM, Evgeniy L <e...@mirantis.com> wrote:
>>>>
>>>>> +1 to Vladimir Kozhukalov,
>>>>>
>>>>> The entire point of moving branch creation to SCF was to perform such
>>>>> changes as early as possible in the release; I see no reason to wait
>>>>> for HCF.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> On Wed, Dec 16, 2015 at 10:19 AM, Vladimir Kozhukalov <
>>>>> vkozhuka...@mirantis.com> wrote:
>>>>>
>>>>>> -1
>>>>>>
>>>>>> We already discussed this and we have made a decision to move stable
>>>>>> branch creation from HCF to SCF. There were reasons for this. We agreed
>>>>>> that once stable branch is created, master becomes open for new features.
>>>>>> Let's avoid discussing this again.
>>>>>>
>>>>>> Vladimir Kozhukalov
>>>>>>
>>>>>> On Wed, Dec 16, 2015 at 9:55 AM, Bulat Gaifullin <
>>>>>> bgaiful...@mirantis.com> wrote:
>>>>>>
>>>>>>> +1
>>>>>>>
>>>>>>> Regards,
>>>>>>> Bulat Gaifullin
>>>>>>> Mirantis Inc.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 15 Dec 2015, at 22:19, Andrew Maksimov <amaksi...@mirantis.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> +1
>>>>>>>
>>>>>>> Regards,
>>>>>

Re: [openstack-dev] [Fuel] Proposal to Delay Docker Removal From Fuel Master Node

2015-12-16 Thread Oleg Gelbukh
Hi

Although I agree that it should be done, the removal of Docker doesn't seem
an urgent feature to me. It is not blocking anything besides moving to a
fully package-based deployment of Fuel, as far as I understand. So it could
easily be delayed by one milestone, especially since it is already almost
done and submitted for review, so it could be merged quickly before any
other significant changes land in 'master' once it is open.

--
Best regards,
Oleg Gelbukh

On Wed, Dec 16, 2015 at 8:56 PM, Vladimir Kozhukalov <vkozhuka...@mirantis.com> wrote:

> Vladimir,
>
> I have other activities planned for the time immediately after SCF
> (separating the UI from fuel-web, which may be even more invasive :-)),
> and it is not a big deal to postpone this feature or another. But I am
> against the very approach of postponing something because it is too
> invasive. Once we create the stable branch, master becomes open. That was
> our primary intention, to open master sooner rather than later, when we
> decided to move stable branch creation.
>
>
>
>
> Vladimir Kozhukalov
>
> On Wed, Dec 16, 2015 at 8:28 PM, Vladimir Kuklin <vkuk...@mirantis.com>
> wrote:
>
>> Vladimir
>>
>> I am pretty much for removing Docker, but I do not think that we should
>> startle our developers/QA folks with the additional effort of fixing 2
>> different environments. Let's just think from the point of view of
>> development velocity here and delay such changes until at least after
>> NY. Because if we do it immediately after SCF there will be a whole
>> bunch of holidays (Russian holidays are Jan 1st-10th), and you (the SME
>> for the Docker removal) will be offline. Do you really want to fix
>> things instead of enjoying the holidays?
>>
>> On Wed, Dec 16, 2015 at 4:09 PM, Evgeniy L <e...@mirantis.com> wrote:
>>
>>> +1 to Vladimir Kozhukalov,
>>>
>>> The entire point of moving branch creation to SCF was to perform such
>>> changes as early as possible in the release; I see no reason to wait for
>>> HCF.
>>>
>>> Thanks,
>>>
>>> On Wed, Dec 16, 2015 at 10:19 AM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
>>>> -1
>>>>
>>>> We already discussed this and we have made a decision to move stable
>>>> branch creation from HCF to SCF. There were reasons for this. We agreed
>>>> that once stable branch is created, master becomes open for new features.
>>>> Let's avoid discussing this again.
>>>>
>>>> Vladimir Kozhukalov
>>>>
>>>> On Wed, Dec 16, 2015 at 9:55 AM, Bulat Gaifullin <
>>>> bgaiful...@mirantis.com> wrote:
>>>>
>>>>> +1
>>>>>
>>>>> Regards,
>>>>> Bulat Gaifullin
>>>>> Mirantis Inc.
>>>>>
>>>>>
>>>>>
>>>>> On 15 Dec 2015, at 22:19, Andrew Maksimov <amaksi...@mirantis.com>
>>>>> wrote:
>>>>>
>>>>> +1
>>>>>
>>>>> Regards,
>>>>> Andrey Maximov
>>>>> Fuel Project Manager
>>>>>
>>>>> On Tue, Dec 15, 2015 at 9:41 PM, Vladimir Kuklin <vkuk...@mirantis.com
>>>>> > wrote:
>>>>>
>>>>>> Folks
>>>>>>
>>>>>> This email is a proposal to push Docker containers removal from the
>>>>>> master node to the date beyond 8.0 HCF.
>>>>>>
>>>>>> Here is why I propose to do so.
>>>>>>
>>>>>> Removal of Docker is a rather invasive change and may introduce a lot
>>>>>> of regressions. It well may affect how bugs are fixed - we might have
>>>>>> 2 ways of fixing them, while during SCF of 8.0 this may affect the
>>>>>> velocity of bug fixing, as you need to fix bugs in master prior to
>>>>>> fixing them in stable branches. This actually may significantly slow
>>>>>> our bugfixing pace and put the 8.0 GA release at risk.
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Yours Faithfully,
>>>>>> Vladimir Kuklin,
>>>>>> Fuel Library Tech Lead,
>>>>>> Mirantis, Inc.
>>>>>> +7 (495) 640-49-04
>>>>>> +7 (926) 702-39-68
>>>>>> Skype kuklinvv
>>>>>> 35bk3, Voro

Re: [openstack-dev] [fuel] OpenStack versioning in Fuel

2015-12-15 Thread Oleg Gelbukh
I have a few changes in review [0] that implement the plan outlined in the
bug [1] for a seamless merge of the new versioning schema (liberty-8.0). With
those changes merged in order, we should be OK without changing the ISO in
Fuel infra.

I also have a version of the ISO with green BVT that incorporates the changes
listed above. It could replace the current ISO in Fuel infra any time we're
ready for it. Currently I'm trying to get green system tests on it as well.

We just need to decide on what path we want to take.

[0]
https://review.openstack.org/#/q/status:open+branch:master+topic:bug/1503663,n,z
[1] https://bugs.launchpad.net/fuel/+bug/1503663/comments/10
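
To illustrate why the Fuel part has to stay in the version string: a rough
sketch (hypothetical code, not the actual Nailgun implementation) of the kind
of comparison that exists in the code and would break on a bare codename:

    from distutils.version import StrictVersion

    def fuel_version(release_version):
        # '2015.1.0-8.0' -> '8.0', 'liberty-8.0' -> '8.0'
        return release_version.split('-')[-1]

    def at_least(release_version, minimum):
        # A bare codename like 'liberty' would raise ValueError here; the
        # trailing '-<fuel_version>' component keeps the check numeric.
        return StrictVersion(fuel_version(release_version)) >= \
            StrictVersion(minimum)

    assert at_least('liberty-8.0', '8.0')      # new schema stays comparable
    assert not at_least('2014.2-6.1', '8.0')   # old schema keeps working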

--
Best regards,
Oleg Gelbukh

On Tue, Dec 15, 2015 at 8:58 PM Dmitry Klenov <dkle...@mirantis.com> wrote:

> Hi folks,
>
> I would propose to keep current versioning schema until fuel release
> schedule is fully aligned with OpenStack releases. AFAIK it is expected to
> happen since 9.0. After it we can switch to OpenStack version names.
>
> BR,
> Dmitry.
>
> On Tue, Dec 15, 2015 at 8:41 PM, Igor Kalnitsky <ikalnit...@mirantis.com>
> wrote:
>
>> Folks,
>>
>> I want to bring this up again. There has been no progress since Oleg's
>> last mail, and we must decide. It's good that we still have
>> "2015.1.0-8.0" version while OpenStack uses "Liberty" name for
>> versions.
>>
>> Let's decide which name to use, file a bug and finally resolve it.
>>
>> - Igor
>>
>> On Thu, Oct 22, 2015 at 10:23 PM, Oleg Gelbukh <ogelb...@mirantis.com>
>> wrote:
>> > Igor, it is interesting that you mention backward compatibility in this
>> > context.
>> >
>> > I can see lots of code in Nailgun that checks for release version to
>> > enable/disable features that were added or removed more than 2 releases
>> > before [1] [2] [3] (there's a lot more).
>> >
>> > What should we do about that code? I believe we could 'safely' delete
>> > it. It will make our code base much more compact and supportable without
>> > even decoupling serializers, etc. Is my assumption correct, or am I just
>> > missing something?
>> >
>> > This will also help to switch to another scheme of versioning of
>> releases,
>> > since there will be much less places where those version scheme is
>> > hardcoded.
>> >
>> > [1]
>> >
>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/objects/release.py#L142-L145
>> > [2]
>> >
>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_serializers.py#L554-L555
>> > [3]
>> >
>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/objects/serializers/node.py#L124-L126
>> >
>> > --
>> > Best regards,
>> > Oleg Gelbukh
>> >
>> > On Mon, Oct 19, 2015 at 6:34 PM, Igor Kalnitsky <
>> ikalnit...@mirantis.com>
>> > wrote:
>> >>
>> >> Oleg,
>> >>
>> >> I think we can remove this function for new releases and keep them
>> >> only for backward compatibility with previous ones. Why not? If
>> >> there's a way to do things better let's do them better. :)
>> >>
>> >> On Sat, Oct 17, 2015 at 11:50 PM, Oleg Gelbukh <ogelb...@mirantis.com>
>> >> wrote:
>> >> > In short, because of this:
>> >> >
>> >> >
>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/db/sqlalchemy/models/release.py#L74-L99
>> >> >
>> >> > Unless we use dashed 2-component version where OpenStack version
>> comes
>> >> > first, followed by version of Fuel, this will break creation of a
>> >> > cluster
>> >> > with given release.
>> >> >
>> >> > -Oleg
>> >> >
>> >> > On Sat, Oct 17, 2015 at 10:24 PM, Sergii Golovatiuk
>> >> > <sgolovat...@mirantis.com> wrote:
>> >> >>
>> >> >> Why can't we use 'liberty' without 8.0?
>> >> >>
>> >> >> On Sat, 17 Oct 2015 at 19:33, Oleg Gelbukh <ogelb...@mirantis.com>
>> >> >> wrote:
>> >> >>>
>> >> >>> After a closer look, the only viable option in the near term seems
>> >> >>> to be the 'liberty-8.0' version. It does not break comparisons that
>> >> >>> exist in the code and allows for a smooth transition.
>> >> >

Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-14 Thread Oleg Gelbukh
It's important to note that, given the change in the upgrade method, there
will be no actual downgrade of the package, since the Fuel 8.0 Admin Node
will be installed on a clean system. So, from the upgrade standpoint, I see
no obstacles to having 9.2 in Fuel 8.0. I also welcome any chance to reduce
the number of packages maintained in-house.

Depending on native packages is also important in light of the initiative to
separate the deployment of Fuel from the installation of the operating
system [1].

[1]
https://blueprints.launchpad.net/fuel/+spec/separate-fuel-node-provisioning

--
Best regards,
Oleg Gelbukh

On Mon, Dec 14, 2015 at 10:50 PM, Sergii Golovatiuk <
sgolovat...@mirantis.com> wrote:

> Hi,
>
> If we can stick with upstream PostgreSQL, that would be really nice.
> Otherwise security updates and regular package updates will be a burden on
> the package maintainers. Ideally we should have as few forked packages as
> possible.
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Mon, Dec 14, 2015 at 5:47 AM, Aleksandr Didenko <adide...@mirantis.com>
> wrote:
>
>> Hi,
>>
>> > Downgrading for no reason could bring us big trouble and a bad user
>> > experience.
>>
>> +1 to this. Let's keep PostgreSQL 9.3.
>>
>> Regards,
>> Alex
>>
>> On Mon, Dec 14, 2015 at 2:04 PM, Artem Silenkov <asilen...@mirantis.com>
>> wrote:
>>
>>> Hello!
>>>
>>> Vote for update.
>>>
>>> 1. We have already shipped 9.3 in fuel-7.0. Downgrading such a complicated
>>> package without any reason is not a good thing at all. User experience
>>> could suffer a lot.
>>> 2. The next reason is tests. We have tested only 9.3; 9.2 was not tested
>>> at all. I'm sure we could introduce serious regressions by downgrading.
>>> 3. Postgres-9.3 is not custom. It was taken from KOJI packages and
>>> backported without any modification. It means that this package is
>>> officially tested and supported by Fedora, which is good.
>>> 4. One more shipped package is not a huge burden for us. It was
>>> officially backported from official sources, tested, and suits our needs
>>> perfectly. Why do we need to play such dangerous games, downgrading for no
>>> reason?
>>>
>>> Let me notice that all packages are maintained by mos-packaging team now
>>> And we are perfectly ok with postgres-9.3.
>>>
>>> Downgrading for no reason could bring us big trouble and a bad user
>>> experience.
>>>
>>> Regards,
>>> Artem Silenkov
>>> ---
>>> MOs-Packaging
>>>
>>> On Mon, Dec 14, 2015 at 3:41 PM, Bartłomiej Piotrowski <
>>> bpiotrow...@mirantis.com> wrote:
>>>
>>>> On 2015-12-14 13:12, Igor Kalnitsky wrote:
>>>> > My opinion here is that I don't like that we're going to build and
>>>> > maintain one more custom package (just take a look at this patch [4]
>>>> > if you don't believe me), but I'd like to hear more opinion here.
>>>> >
>>>> > Thanks,
>>>> > Igor
>>>> >
>>>> > [1] https://bugs.launchpad.net/fuel/+bug/1523544
>>>> > [2] https://review.openstack.org/#/c/249656/
>>>> > [3] http://goo.gl/forms/Hk1xolKVP0
>>>> > [4] https://review.fuel-infra.org/#/c/14623/
>>>> >
>>>> >
>>>> __
>>>> > OpenStack Development Mailing List (not for usage questions)
>>>> > Unsubscribe:
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> >
>>>>
>>>> I also think we should stay with what CentOS provides. Increasing
>>>> maintenance burden for something that can be implemented without bells
>>>> and whistles sounds like a no-go.
>>>>
>>>> Bartłomiej
>>>>
>>>>
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for u

Re: [openstack-dev] [Fuel] Configuration management for Fuel 7.0

2015-12-11 Thread Oleg Gelbukh
Roman,

Changing arbitrary parameters supported by the respective Puppet manifests
for OpenStack services is implemented in this blueprint [1]. It is landing in
release 8.0.

[1] https://blueprints.launchpad.net/fuel/+spec/openstack-config-change
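
The idea, roughly, is that operator-supplied overrides get merged over the
generated defaults before the configuration is handed to the Puppet tasks.
A toy illustration of the merge semantics (names are made up, this is not
the actual implementation):

    def merge_config(defaults, overrides):
        # Merge per section: override individual options, keep the rest.
        result = dict((s, dict(opts)) for s, opts in defaults.items())
        for section, options in overrides.items():
            result.setdefault(section, {}).update(options)
        return result

    defaults = {'DEFAULT': {'amqp_durable_queues': False, 'debug': False}}
    overrides = {'DEFAULT': {'amqp_durable_queues': True}}
    assert merge_config(defaults, overrides) == {
        'DEFAULT': {'amqp_durable_queues': True, 'debug': False}}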

--
Best regards,
Oleg Gelbukh

On Thu, Dec 3, 2015 at 5:28 PM, Roman Sokolkov <rsokol...@mirantis.com>
wrote:

> Folks,
>
> A little bit more research was done regarding #2 usability.
>
> I've selected 13 real-world tasks from a customer (e.g. update flag X in
> nova.conf):
> - 6/13 require fuel-library patching (i.e. #2 is unusable for them)
> - 3/13 are OK and can be done with #2
> - 4/13 can be done with some limitations.
>
> If needed, I'll provide details.
>
> The rough statistic is that *only ~20-25% of use cases can be done with #2*.
>
> Let me give a very popular use case that will fail with #2. Assume we're
> executing the whole task graph every two hours.
> We want to change nova.conf "DEFAULT/amqp_durable_queues" from False to
> True.
>
> There is no parameter in hiera for "amqp_durable_queues". We have two
> solutions here (both are bad):
> 1) Redefine "DEFAULT/amqp_durable_queues" = True in a plugin task. What will
> happen on the node: amqp_durable_queues will keep flipping between True and
> False on every execution. We shouldn't do it this way.
> 2) Patch fuel-library so the value for amqp_durable_queues is taken from
> hiera. This is also a one-way ticket.
>
> Thanks
>
>
>
>
>
> On Thu, Dec 3, 2015 at 11:28 AM, Oleg Gelbukh <ogelb...@mirantis.com>
> wrote:
>
>> Roman,
>>
>> Thank you. This is great research.
>>
>> Could we have a conversation to discuss this? I'm especially interested
>> in idempotency problems of the fuel-library modules and the common way to
>> provide serialised data to the deployment.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>> Mirantis Inc
>>
>>
>> On Tue, Dec 1, 2015 at 6:38 PM, Roman Sokolkov <rsokol...@mirantis.com>
>> wrote:
>>
>>> Hello, folks.
>>>
>>> We need some kind of CM for Fuel 7.0. Otherwise a new project with 800+
>>> nodes will be nearly impossible to support. The customer always wants to
>>> change something.
>>>
>>> In our opinion, there are two major approaches for CM:
>>>
>>> #1 Independent CM (Puppet master, Chef, Ansible, whatever)
>>> #2 Fuel-based CM
>>>
>>> Solution for #2
>>> --
>>>
>>> Fuel has all info about configuration. So we've tried to
>>> unlock "Settings" [0] and push "deploy" button.
>>>
>>> Major findings:
>>>
>>> * Task idempotency. Looks like most of the tasks are idempotent.
>>> We've skipped 3 tasks on the controller and were able to get NO downtime
>>> for Horizon and "nova list". BTW, deeper QA is required.
>>>
>>> * Standard changes. The operator can change parameters via WebUI, CLI or
>>> API. For example, I was able to deploy Sahara. Unfortunately it is not
>>> foolproof; some changes can lead to a broken cloud...
>>>
>>> * Non-standard changes. Any other changes can be done with plugins.
>>> We can modify plugin tasks and scripts (all except UI flags). And then
>>> just
>>> do "--update" + "--sync". BTW, we can change UI for particular env via
>>> API
>>> by modifying "clusters/X/attributes".
>>>
>>> Conclusion
>>> --
>>>
>>> - This works (We have service under cron that runs tasks) [1]
>>> - NOT ready for production (in current state)
>>> - This requires much deeper testing
>>>
>>>
>>> I want to hear thoughts about approach above?
>>> What is the current status/plans for CM? I saw this discussion [2]
>>>
>>> References
>>> --
>>>
>>> [0]
>>> https://github.com/rsokolkov/fuel-web/commit/366daaa2eb874c8e54c2d39be475223937cd317d
>>> [1]
>>> https://docs.google.com/presentation/d/12kkh1hu4ZrY9S6XXsY_HWaesFwESfxbl5czUwde8isM/edit#slide=id.p
>>> [2] https://etherpad.openstack.org/p/lcm-use-cases
>>>
>>> --
>>> Roman Sokolkov,
>>> Deployment Engineer,
>>> Mirantis, Inc.
>>> Skype rsokolkov,
>>> rsokol...@mirantis.com
>>>
>>>
>>> __
>>> OpenStac

Re: [openstack-dev] [Fuel] Separate master node provisioning and deployment

2015-12-11 Thread Oleg Gelbukh
For the package-based deployment, we need to get rid of the 'deployment
script' altogether. All configuration stuff should be done in package specs,
or by the user later on (maybe via some fuelmenu-like lightweight UI, or via
the WebUI).

Thus, the fuel package must install everything that is required for running
base Fuel as its dependencies (or dependencies of its dependencies, as it
could be more complicated with cross-deps between our components).

--
Best regards,
Oleg Gelbukh

On Fri, Dec 11, 2015 at 10:45 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> At the moment part of the Fuel master deployment logic is located in ISO
> kickstart file, which is bad. We'd better carefully split provisioning and
> deployment stages so as to install base operating system during
> provisioning stage and then everything else on the deployment stage. That
> would make it possible to deploy Fuel on pre-installed vanilla Centos 7.
> Besides, if we have deb packages for all Fuel components it will be easy to
> support Fuel deployment on pre-installed Ubuntu and Debian.
>
> We (Fuel build team) are going to do this ASAP [0]. Right now we are on
> the stage of writing design spec for the change [1].
>
> Open questions are:
> 1) Should fuel package have all other fuel packages like nailgun, astute,
> etc. as its dependencies? Or maybe it should install only puppet modules
> and deployment script that then could be used to deploy everything else?
>
> 2) bootstrap_admin_node.sh runs fuelmenu and then puppet to deploy Fuel
> components. Should we run this script as post-install script or maybe we
> should leave this up to a user to run this script later when fuel package
> is already installed?
>
> Anyway, the final goal is to make ISO just one of possible delivery
> schemes. Primary delivery approach should be rpm/deb repo, not ISO.
>
> [0]
> https://blueprints.launchpad.net/fuel/+spec/separate-fuel-node-provisioning
> [1] https://review.openstack.org/#/c/254270/
>
> Vladimir Kozhukalov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][plugins]Security problem in Fuel 7.0

2015-12-07 Thread Oleg Gelbukh
+1 to Eugene here. Eventually we will need to define the Plugins framework
and SDK more strictly and limit possible actions to the set of supported
ones. This is required not only for security and/or stability reasons, but
for upgrade purposes as well.

On the other hand, we need to retain certain flexibility of deployment.
That could be achieved by turning the 'core' components into pluggable
options, and reducing the 'core' to a set of replaceable plugins shipped
with the reference architecture. This will eliminate the need for many of
the hacks used nowadays in plugins to override default behaviours.

--
Best regards,
Oleg Gelbukh

On Mon, Dec 7, 2015 at 9:29 PM, Eugene Korekin <ekore...@mirantis.com>
wrote:

> Stas,
>
> I fear that often even the developer of the code cannot verify his own code
> completely, let alone some third-party validation teams. Does the ability
> to strictly limit plugin actions to the list of intended environments look
> nonviable to you?
>
>
>
> On 07.12.2015 20:38, Stanislaw Bogatkin wrote:
>
> +1 to Andrew. Plugins are created to run some code, and plugin verification
> is the source of trust there.
>
> On Mon, Dec 7, 2015 at 8:19 PM, Andrew Woodward <xar...@gmail.com> wrote:
>
>> I'd have to say that this is expected behavior. I'm not sure what you
>> would hope to prohibit when these kinds of things are necessary for the
>> deployment. We also can't prohibit this from being done in a plugin, this
>> is what the plugin verification is supposed to help combat. If you just go
>> download a random puppet manifest // script // etc... from the internet,
>> how do you ensure that it didn't install a root-kit.
>>
>> On Mon, Dec 7, 2015 at 9:14 AM Eugene Korekin <ekore...@mirantis.com>
>> wrote:
>>
>>> As far as I know this feature is planned for the next releases.
>>>
>>> But I think the main problem is: it's not obvious that just by
>>> installing a plugin, even without enabling the plugin in Fuel, a user
>>> could break or somehow alter already existing environments. It could be
>>> done by a malicious attacker who could compromise the plugin, or just
>>> unintentionally with some bug in the plugin code.
>>>
>>> Unfortunately, by installing some plugin a user jeopardizes his existing
>>> environments. And I think we should at least document these risks.
>>>
>>>
>>> On 07.12.2015 19:52, Javeria Khan wrote:
>>>
>>> My two cents. It would be useful to have a role that could execute on
>>> the Fuel Master host itself rather than a container.
>>>
>>> --
>>> Javeria
>>> On Dec 7, 2015 9:49 PM, "Roman Prykhodchenko" <m...@romcheg.me> wrote:
>>>
>>>> Alexey,
>>>>
>>>> thank you for bringing this up. IMO discussing security problems is
>>>> better to be done in a special kind of Launchpad bugs.
>>>>
>>>> - romcheg
>>>>
>>>>
>>>> > 7 груд. 2015 р. о 17:36 Alexey Elagin <aela...@mirantis.com>
>>>> написав(ла):
>>>> >
>>>> > Hello all,
>>>> >
>>>> > We have a security problem in Fuel 7.0. It's related to plugin
>>>> > development and allows executing code in the mcollective docker
>>>> > container on the Fuel master node. Any fuel plugin may contain a yaml
>>>> > file with
>>>> > deployment tasks (tasks.yaml, deployment_tasks.yaml etc) and there is
>>>> > an ability to run some code on node with role "master". It's also
>>>> > possible to connect to any target node via ssh without a password from
>>>> > within the container.
>>>> >
>>>> > As I understood, it was made to simplify some deployment cases. I see
>>>> > some steps for resolving this situation:
>>>> > 1. Fuel team should disallow
>>>> > execution of any puppet manifests or bash code on nodes with master
>>>> > role.
>>>> > 2. Append the Fuel documentation. Notify users about this
>>>> > security issue.
>>>> >
>>>> > What do you think about it? What deployment cases which require
>>>> > execution of code on role "master" do you know?
>>>> >
>>>> > --
>>>> > Best regards,
>>>> > Alexey
>>>> > Deployment Engineer
>>>> > Mirantis, Inc
>>>> > Cell: +7 (968) 880 2288
>>>> > Skype: 

Re: [openstack-dev] [Fuel] Configuration management for Fuel 7.0

2015-12-03 Thread Oleg Gelbukh
Roman,

Thank you. This is great research.

Could we have a conversation to discuss this? I'm especially interested in
idempotency problems of the fuel-library modules and the common way to
provide serialised data to the deployment.

--
Best regards,
Oleg Gelbukh
Mirantis Inc


On Tue, Dec 1, 2015 at 6:38 PM, Roman Sokolkov <rsokol...@mirantis.com>
wrote:

> Hello, folks.
>
> We need some kind of CM for Fuel 7.0. Otherwise a new project with 800+
> nodes will be nearly impossible to support. The customer always wants to
> change something.
>
> In our opinion, there are two major approaches for CM:
>
> #1 Independent CM (Puppet master, Chef, Ansible, whatever)
> #2 Fuel-based CM
>
> Solution for #2
> --
>
> Fuel has all info about configuration. So we've tried to
> unlock "Settings" [0] and push "deploy" button.
>
> Major findings:
>
> * Task idempotency. Looks like most of the tasks are idempotent.
> We've skipped 3 tasks on the controller and were able to get NO downtime
> for Horizon and "nova list". BTW, deeper QA is required.
>
> * Standard changes. The operator can change parameters via WebUI, CLI or
> API. For example, I was able to deploy Sahara. Unfortunately it is not
> foolproof; some changes can lead to a broken cloud...
>
> * Non-standard changes. Any other changes can be done with plugins.
> We can modify plugin tasks and scripts (all except UI flags). And then just
> do "--update" + "--sync". BTW, we can change UI for particular env via API
> by modifying "clusters/X/attributes".
>
> Conclusion
> --
>
> - This works (We have service under cron that runs tasks) [1]
> - NOT ready for production (in current state)
> - This requires much deeper testing
>
>
> I want to hear thoughts about approach above?
> What is the current status/plans for CM? I saw this discussion [2]
>
> References
> --
>
> [0]
> https://github.com/rsokolkov/fuel-web/commit/366daaa2eb874c8e54c2d39be475223937cd317d
> [1]
> https://docs.google.com/presentation/d/12kkh1hu4ZrY9S6XXsY_HWaesFwESfxbl5czUwde8isM/edit#slide=id.p
> [2] https://etherpad.openstack.org/p/lcm-use-cases
>
> --
> Roman Sokolkov,
> Deployment Engineer,
> Mirantis, Inc.
> Skype rsokolkov,
> rsokol...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CentOS-7 Transition Plan

2015-11-24 Thread Oleg Gelbukh
That's good to know, thank you, Vladimir, Dmitry.

--
Best regards,
Oleg Gelbukh

On Tue, Nov 24, 2015 at 3:10 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> In fact, we (I and Dmitry) are on the same page of how to merge these two
> features (Centos7 and Docker removal). We agreed that Dmitry's feature is
> much more complicated and of higher priority. So, Centos 7 should be merged
> first and then I'll rebase my patches (mostly supervisor -> systemd).
>
>
>
>
>
> Vladimir Kozhukalov
>
> On Tue, Nov 24, 2015 at 1:57 AM, Igor Kalnitsky <ikalnit...@mirantis.com>
> wrote:
>
>> Hey Dmitry,
>>
>> Thank you for your effort. I believe it's a huge step forward that
>> opens number of possibilities.
>>
>> > Every container runs systemd as PID 1 process instead of
>> > supervisord or application / daemon.
>>
>> Taking into account that we're going to drop Docker containers, I
>> think it was unnecessary complication of your work.
>>
>> Please sync-up with Vladimir Kozhukalov, he's working on getting rid
>> of containers.
>>
>> > Every service inside a container is a systemd unit. Container build
>> > procedure was modified, scripts setup.sh and start.sh were introduced
>> > to be running during building and configuring phases respectively.
>>
>> Ditto. :)
>>
>> Thanks,
>> Igor
>>
>> P.S: I wrote the mail and forgot to press "send" button. It looks like
>> Oleg is already pointed out that I wanted to.
>>
>> On Mon, Nov 23, 2015 at 2:37 PM, Oleg Gelbukh <ogelb...@mirantis.com>
>> wrote:
>> > Please, take into account the plan to drop the containerization of Fuel
>> > services:
>> >
>> > https://review.openstack.org/#/c/248814/
>> >
>> > --
>> > Best regards,
>> > Oleg Gelbukh
>> >
>> > On Tue, Nov 24, 2015 at 12:25 AM, Dmitry Teselkin <
>> dtesel...@mirantis.com>
>> > wrote:
>> >>
>> >> Hello,
>> >>
>> >> We've been working for some time on bringing CentOS-7 to master node,
>> >> and now is the time to share and discuss the transition plan.
>> >>
>> >> First of all, what has been changed:
>> >> * Master node itself runs on CentOS-7. Since all the containers share
>> >>   the same repo as master node they all have been migrated to CentOS-7
>> >>   too. Every container runs systemd as PID 1 process instead of
>> >>   supervisord or application / daemon.
>> >> * Every service inside a container is a systemd unit. Container build
>> >>   procedure was modified, scripts setup.sh and start.sh were introduced
>> >>   to be running during building and configuring phases respectively.
>> >>   The main reason for this was the fact that many puppet manifests use
>> >>   service management commands that require systemd daemon running. This
>> >>   also allowed to simplify Dockerfiles by removing all actions to
>> >>   setup.sh file.
>> >> * We managed to find some bugs in various parts that were fixed too.
>> >> * Bootstrap image is also CentOS-7 based. It was updated to better
>> >>   support it - some services converted to systemd units and fixes to
>> >>   support new network naming schema were made.
>> >> * ISO build procedure was updated to reflect changes in CentOS-7
>> >>   distribution and to support changes in docker build procedure.
>> >> * Many applications were updated (puppet, docker, openstack
>> >>   components).
>> >> * Docker containers moved to LVM volume to improve performance and get
>> >>   rid of annoying warning messages during master node deployment.
>> >>   bootstrap_admin_node.sh script was updated to fix some deployment
>> >>   issues (e.g. dracut behavior when there are multiple network
>> >>   interfaces available) and simplified by removing outdated
>> >>   functionality. It was also converted to a "run once" logon script
>> >>   instead of being run as a service, primarily because of a way it's
>> >>   used.
>> >>
>> >> As you can see, a lot of changes were made. Some of them might
>> >> be merged into current master if surrounded by conditionals to be
>> >> compatible with the current master node, but some of them simply can't.
>> >>
>> >> To simplify the code review process we've split the CRs that we were
>> >> using dur

Re: [openstack-dev] [Fuel] CentOS-7 Transition Plan

2015-11-23 Thread Oleg Gelbukh
Please take into account the plan to drop the containerization of Fuel
services:

https://review.openstack.org/#/c/248814/

--
Best regards,
Oleg Gelbukh

On Tue, Nov 24, 2015 at 12:25 AM, Dmitry Teselkin <dtesel...@mirantis.com>
wrote:

> Hello,
>
> We've been working for some time on bringing CentOS-7 to master node,
> and now is the time to share and discuss the transition plan.
>
> First of all, what has been changed:
> * Master node itself runs on CentOS-7. Since all the containers share
>   the same repo as master node they all have been migrated to CentOS-7
>   too. Every container runs systemd as PID 1 process instead of
>   supervisord or application / daemon.
> * Every service inside a container is a systemd unit. Container build
>   procedure was modified, scripts setup.sh and start.sh were introduced
>   to be running during building and configuring phases respectively.
>   The main reason for this was the fact that many puppet manifests use
>   service management commands that require systemd daemon running. This
>   also allowed to simplify Dockerfiles by removing all actions to
>   setup.sh file.
> * We managed to find some bugs in various parts that were fixed too.
> * Bootstrap image is also CentOS-7 based. It was updated to better
>   support it - some services converted to systemd units and fixes to
>   support new network naming schema were made.
> * ISO build procedure was updated to reflect changes in CentOS-7
>   distribution and to support changes in docker build procedure.
> * Many applications were updated (puppet, docker, openstack
>   components).
> * Docker containers moved to LVM volume to improve performance and get
>   rid of annoying warning messages during master node deployment.
>   bootstrap_admin_node.sh script was updated to fix some deployment
>   issues (e.g. dracut behavior when there are multiple network
>   interfaces available) and simplified by removing outdated
>   functionality. It was also converted to a "run once" logon script
>   instead of being run as a service, primarily because of a way it's
>   used.
>
> As you can see, a lot of changes were made. Some of them might
> be merged into current master if surrounded by conditionals to be
> compatible with the current master node, but some of them simply can't.
>
> To simplify the code review process we've split the CRs that we were
> using during active development into a set of smaller CRs and assigned
> the same topic centos7-master-node to all of them [0].
>
> So, here is the plan:
> * We will put a mark 'Breaks' in every commit message indicating if the
>   CR is compatible with current master node. E.g. 'Breaks: centos-6'
>   means it can't be merged without breaking things, but 'Breaks:
>   nothing' means it OK to merge.
> * All the CRs should be reviewed, regardless of their 'breaks' label,
>   and voted. We will not merge breaking CRs accidentally, only those
>   that are safe will be merged.
> * While code review is in progress we will work on passing our custom
>   ISO BVT and scale lab tests. When these tests pass - we will run
>   swarm on top of this custom ISO.
> * In the meantime our QA infrastructure will be updated to support
>   CentOS-7 master node - it should be compatible in most cases,
>   however, there are some places that are not. We plan to make changes
>   compatible with current ISO.
> * As soon as ISO becomes good enough we should take a deep breath and
>   turn the switch by merging all the changes that will bring CentOS-7
>   to master branch (and break CentOS-6 version). This step requires
>   all repositories involved to be frozen for small period of time, and
>   that's why a merge freeze might be called. Immediately after all the
>   changes are merged we will build new ISO and run reduced set of swarm
>   tests. If the results are acceptable we will go on with CentOS-7. If
>   not - we will revert breaking changes.
>
>
> [0]
> https://review.openstack.org/#/q/status:open+topic:centos7-master-node,n,z
>
>
> --
> Thanks,
> Dmitry Teselkin
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Oleg Gelbukh
With CentOS 7 we will have python2.7 on the Fuel Admin node as the default
version, I believe.

--
Best regards,
Oleg Gelbukh,
Principal Engineer
Mirantis

On Fri, Nov 20, 2015 at 6:27 AM, Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Hi Andrey,
>
> As far as I remember from the last usage of fuel master node, there was
>> Centos + py26 installation. Python 2.6 is old enough and sometimes it is
>> hard to launch some application on fuel node without docker (image with
>> py27/py3). Are you planning to provide py27 at least, or is my note
>> outdated and can I already use py27 out of the box?
>
> We can install docker on the master node anyway to run Rally / Tempest or
> other test suites and scripts from the master node with Python 2.7 or
> something else.
>
> On Fri, Nov 20, 2015 at 5:20 PM, Andrey Kurilin <akuri...@mirantis.com>
> wrote:
>
>> Hi!
>> I'm not a fuel developer, so the opinion below is based on a user's view.
>> As far as I remember from the last usage of the fuel master node, there
>> was a Centos + py26 installation. Python 2.6 is old enough, and sometimes
>> it is hard to launch some application on the fuel node without docker (an
>> image with py27/py3). Are you planning to provide py27 at least, or is my
>> note outdated and can I already use py27 out of the box?
>>
>> On Thu, Nov 19, 2015 at 4:59 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Dear colleagues,
>>>
>>> As you might remember, we introduced Docker containers on the master node
>>> a while ago when we implemented the first version of the Fuel upgrade
>>> feature. The motivation behind it was to make it possible to roll back
>>> the upgrade process if something goes wrong.
>>>
>>> Now we are at the point where we cannot use our tarball-based upgrade
>>> approach any more, and the patches that deprecate the upgrade tarball
>>> have already been merged. Although it is a matter of a separate
>>> discussion, it seems that the upgrade process should rather be based on a
>>> kind of backup and restore procedure. We can back up Fuel data on
>>> external media, then install a new version of Fuel from scratch, and then
>>> it is assumed the backed-up Fuel data can be applied over this new Fuel
>>> instance. The procedure itself is under active development, but it is
>>> clear that rollback in this case would be nothing more than just
>>> restoring from the previously backed-up data.
>>>
>>> As for Docker containers, still there are potential advantages of using
>>> them on the Fuel master node, but our current implementation of the feature
>>> seems not mature enough to make us benefit from the containerization.
>>>
>>> At the same time there are some disadvantages like
>>>
>>>- it is tricky to get logs and other information (for example, rpm
>>>-qa) for a service like shotgun which is run inside one of the containers.
>>>- it is specific UX when you first need to run dockerctl shell
>>>{container_name} and then you are able to debug something.
>>>- when building IBP image we mount directory from the host file
>>>system into mcollective container to make image build faster.
>>>- there are config files and some other files which should be shared
>>>among containers which introduces unnecessary complexity to the whole
>>>system.
>>>- our current delivery approach assumes we wrap into rpm/deb
>>>packages every single piece of the Fuel system. Docker images are not an
>>>exception. And as far as they depend on other rpm packages, we are forced
>>>to build the docker-images rpm package using a kind of specific build
>>>flow. Besides, this package is quite big (300M).
>>>- I'd like it to be possible to install Fuel not from ISO but from
>>>an RPM repo on any rpm-based distribution. But it is double work to
>>>support both the Docker-based and the package-based approaches.
>>>
>>> Probably some of you can give other examples. Anyway, the idea is to get
>>> rid of Docker containers on the master node and switch to the plain
>>> package-based approach that we used before.
>>>
>>> As far as there is nothing new here, we just need to use our old site.pp
>>> (with minimal modifications), it looks like it is possible to implement
>>> this during 8.0 release cycle. If there are no principal objections, please
>>> give me a chance to do this ASAP (during 8.0), I know it is a huge risk for
>>> the release, but still I think I can do thi

Re: [openstack-dev] [Fuel] Approved but not implemented specs

2015-11-20 Thread Oleg Gelbukh
It's a good point.

I think it could even be done automatically: once the spec freeze is in
place, run an infra script that updates all CRs still in review whose specs
target the current (or a previous) release by moving the spec files to the
next release's directory.
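
Something along these lines, purely as a sketch (the script and all names
here are hypothetical; the list of unimplemented specs would come from the
release managers):

    import os
    import subprocess

    def move_specs(repo, specs, old_release='8.0', new_release='9.0'):
        # 'specs' are filenames under specs/<old_release>/ identified as
        # not implemented in that release.
        new_dir = os.path.join(repo, 'specs', new_release)
        if not os.path.isdir(new_dir):
            os.makedirs(new_dir)
        for name in specs:
            subprocess.check_call(
                ['git', '-C', repo, 'mv',
                 os.path.join('specs', old_release, name),
                 os.path.join('specs', new_release, name)])
        subprocess.check_call(
            ['git', '-C', repo, 'commit', '-m',
             'Move unimplemented %s specs to %s' % (old_release, new_release)])

    # e.g.: move_specs('/srv/fuel-specs', ['feature-x.rst', 'feature-y.rst'])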

-Oleg

On Fri, Nov 20, 2015 at 3:35 PM, Igor Kalnitsky 
wrote:

> Hey Fuelers,
>
> Today I noticed that some of Fuel specs have been merged for 7.0 while
> the features themselves weren't landed. It's kind confusing since it
> seems like the feature was implemented in 7.0 while it's not.
>
> What do you think guys about moving such specs into 8.0 folder? I
> believe it's a way to better understand what we're doing now, and what
> was done previously.
>
> Thanks,
> Igor
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-18 Thread Oleg Gelbukh
We are going to address the problem using the following approach:

1. Back up settings from the Fuel 7.0 Master node, including configuration
files for the bootstrap script, data from the state database, security keys
and certs, etc.
2. Restore the settings on top of a freshly installed Fuel 8.0 Master.
3. Upload the database dump into the DB of the Fuel 8.0 Master.
4. Perform the required actions to apply migrations to the Fuel state DB.
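
For steps 3 and 4, I imagine something as simple as the following sketch
(paths, DB names and the migration entry point are assumptions; the real
tooling is still being developed):

    import subprocess

    def restore_db(dump_path, db='nailgun', user='nailgun'):
        # Load the plain-text dump taken on the 7.0 Master node...
        with open(dump_path) as dump:
            subprocess.check_call(['psql', '-U', user, db], stdin=dump)
        # ...then bring the schema up to the 8.0 head revision.
        subprocess.check_call(
            ['alembic', '--config', '/etc/nailgun/alembic.ini',
             'upgrade', 'head'])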

In this case, the rollback scenario is either a revert to the Fuel 7.0
Master (if it wasn't reinstalled), or applying the same procedure to a fresh
Fuel 7.0 Master installation.

This scenario introduces a different upgrade workflow from the one the
upgrade tarball used. We will update the user documentation with the new
workflow. Operators will have to adjust their processes in accordance with
the new workflow.

I will update this list once we have some progress on this task. You can
also track it in the following blueprint:

https://blueprints.launchpad.net/fuel/+spec/upgrade-master-node-centos7

--
Best regards,
Oleg Gelbukh

On Tue, Nov 10, 2015 at 8:52 AM, Vladimir Kuklin <vkuk...@mirantis.com>
wrote:

> Evgeniy
>
> I am not sure you addressed me, but, anyway, - yes, we will have a
> situation with old containers on new host node. This will be identical to
> old host node from database migration point of view.
>
> On Tue, Nov 10, 2015 at 7:38 PM, Evgeniy L <e...@mirantis.com> wrote:
>
>> Hi Vladimir,
>>
>> Just to make sure that we are on the same page. We'll have to use upgrade
>> script anyway, since you will need to run database migration and register
>> new releases.
>>
>> Thanks,
>>
>> On Monday, 9 November 2015, Vladimir Kozhukalov <vkozhuka...@mirantis.com>
>> wrote:
>>
>>> Looks like most people think that the backup/re-install approach is
>>> more viable. So, we certainly need to invent a completely new upgrade
>>> flow, and thus my suggestion is to disable building/testing the upgrade
>>> tarball right now, because it makes no sense anyway.
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Fri, Nov 6, 2015 at 8:21 PM, Vladimir Kuklin <vkuk...@mirantis.com>
>>> wrote:
>>>
>>>> Just my 2 cents here - let's do docker backup and roll it up onto brand
>>>> new Fuel 8 node.
>>>>
>>>> On Fri, Nov 6, 2015 at 7:54 PM, Oleg Gelbukh <ogelb...@mirantis.com>
>>>> wrote:
>>>>
>>>>> Matt,
>>>>>
>>>>> You are talking about this part of Operations guide [1], or you mean
>>>>> something else?
>>>>>
>>>>> If yes, then we still need to extract data from backup containers. I'd
>>>>> prefer backup of DB in simple plain text file, since our DBs are not that
>>>>> big.
>>>>>
>>>>> [1]
>>>>> https://docs.mirantis.com/openstack/fuel/fuel-7.0/operations.html#howto-backup-and-restore-fuel-master
>>>>>
>>>>> --
>>>>> Best regards,
>>>>> Oleg Gelbukh
>>>>>
>>>>> On Fri, Nov 6, 2015 at 6:03 PM, Matthew Mosesohn <
>>>>> mmoses...@mirantis.com> wrote:
>>>>>
>>>>>> Oleg,
>>>>>>
>>>>>> All the volatile information, including a DB dump, are contained in
>>>>>> the small Fuel Master backup. There should be no information lost unless
>>>>>> there was manual customization done inside the containers (such as puppet
>>>>>> manifest changes). There shouldn't be a need to back up the entire
>>>>>> containers.
>>>>>>
>>>>>> The information we would lose would include the IP configuration of
>>>>>> interfaces besides the one used for the Fuel PXE network and any custom
>>>>>> configuration done on the Fuel Master.
>>>>>>
>>>>>> I want #1 to work smoothly, but #2 should also be a safe route.
>>>>>>
>>>>>> On Fri, Nov 6, 2015 at 5:39 PM, Oleg Gelbukh <ogelb...@mirantis.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Evgeniy,
>>>>>>>
>>>>>>> On Fri, Nov 6, 2015 at 3:27 PM, Evgeniy L <e...@mirantis.com> wrote:
>>>>>>>
>>>>>>>> Also we should decide when to run containers
>>>>>>>> upgrade + host upgrade? Before or after new CentOS is installed?
>>>>>>>> Probably
>>>>>>>> it should be done before we run backup, in or

Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Oleg Gelbukh
Hi

> We should think about separating packages for the master node and
> openstack. I guess we should use 2 repositories:
> 1. MOS - repository for OpenStack related nodes
> 2. MasterNode - repository for packages that are used for master node only.
>
>
At the moment, this is pretty simple, as we only support Ubuntu as the
target node system as of 7.0 and 8.0, and our Master node runs on CentOS.
Thus, our CentOS repo is for the Fuel node, and the Ubuntu repo is for
OpenStack.


> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and it
>> is not enough to just do things which we usually did during upgrades. Now
>> there are two ways to upgrade:
>> 1) to use the official Centos upgrade script for upgrading from 6 to 7
>> 2) to backup the master node, then reinstall it from scratch and then
>> apply backup
>>
>
> +1 for 2. We cannot guarantee that #1 will work smoothly. Also, there is
> some technical debt we cannot solve with #1 (i.e. - Docker device mapper).
> Also, the customer might have environments running on CentOS 6, so
> supporting all scenarios is quite hard. If we do this we can redesign the
> docker-related part, so we'll have a huge benefit later on.
>
>
In the Upgrade team, we researched these 2 options. Option #1 allows us to
keep the procedure close to what we had in previous versions, but it won't
be automatic, as there are too many changes in our flavor of CentOS 6.6.
Option #2, on the other hand, will require developing essentially a new
workflow:
1. backup the DB and settings,
2. prepare custom config for bootstrap_master_node script (to retain IP
addressing),
3. reinstall Fuel node with 8.0,
4. upload and upgrade DB,
5. restore keystone/db credentials

This sequence of steps is high level, of course, and might change during
development. An additional benefit is that the backup/restore parts of it
could be used separately to create backups of the Fuel node.
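
To make step 1 concrete, here is a minimal sketch of what the backup part
might look like (the file locations are assumptions, not the final list):

    import subprocess
    import tarfile

    def backup_master(out='/var/backup/fuel-7.0-backup.tar'):
        # A plain-text DB dump is small enough for our databases and easy
        # to load into the 8.0 schema before running migrations.
        subprocess.check_call(
            'pg_dump -U nailgun nailgun > /var/backup/nailgun.sql',
            shell=True)
        with tarfile.open(out, 'w') as tar:
            tar.add('/var/backup/nailgun.sql')
            tar.add('/etc/fuel/astute.yaml')  # IP addressing, credentials
            tar.add('/var/lib/fuel/keys')     # security keys and certs
        return out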

Our current plan is to pursue option #2 in the following 3 weeks. I will
keep this list updated on our progress as soon as we have any.

--
Best regards,
Oleg Gelbukh


> As a company we will help the clients who might want to upgrade from
> 5.1-7.0 to 8.0, but that will include analysing the environment/plugins and
> making a personal scenario for the upgrade. It might be 'fuel-octane' to
> migrate the workload to a new cloud or some script/documentation to perform
> the upgrade.
>
>
>>
>> The Upgrade team is trying to understand which way is more appropriate.
>> Regarding my tarball-related activities, I'd say that this package-based
>> upgrade approach can be aligned with (1) (fuel-upgrade would use official
>> Centos upgrade script as a first step for upgrade), but it definitely can
>> not be aligned with (2), because it assumes reinstalling the master node
>> from scratch.
>>
>> Right now, I'm finishing the work around deprecating version.yaml and my
>> further steps would be to modify fuel-upgrade script so it does not copy
>> RPM/DEB repos, but those steps make little sense taking into account Centos
>> 7 feature.
>>
>
> +1.
>
>
>> Colleagues, let's make a decision about how we are going to upgrade the
>> master node ASAP. Probably my tarball related work should be reduced to
>> just throwing tarball away.
>>
>
> +2. That will allow us to:
> 1. Reduce ISO size
> 2. Increase ISO compilation by including -j8
> 3. Speed up CI
>
>
>>
>> Vladimir Kozhukalov
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Oleg Gelbukh
Evgeniy,

On Fri, Nov 6, 2015 at 3:27 PM, Evgeniy L <e...@mirantis.com> wrote:

> Also we should decide when to run containers
> upgrade + host upgrade? Before or after new CentOS is installed? Probably
> it should be done before we run backup, in order to get the latest scripts
> for
> backup/restore actions.
>

We're working to determine if we need to back up/upgrade containers at all.
My expectation is that we should be OK with just a backup of the DB, the IP
address settings from astute.yaml for the master node, and credentials from
the services' configuration files.

--
Best regards,
Oleg Gelbukh


>
> Thanks,
>
> On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> At the moment I'm working on deprecating Fuel upgrade tarball. Currently,
>> it includes the following:
>>
>> * RPM repository (upstream + mos)
>> * DEB repository (mos)
>> * openstack.yaml
>> * version.yaml
>> * upgrade script itself (+ virtualenv)
>>
>> Apart from upgrading docker containers this upgrade script makes copies
>> of the RPM/DEB repositories and puts them on the master node naming these
>> repository directories depending on what is written in openstack.yaml and
>> version.yaml. My plan was something like:
>>
>> 1) deprecate version.yaml (move all fields from there to various places)
>> 2) deliver openstack.yaml with fuel-openstack-metadata package
>> 3) do not put new repos on the master node (instead we should use online
>> repos or use fuel-createmirror to make local mirrors)
>> 4) deliver fuel-upgrade package (throw away upgrade virtualenv)
>>
>> Then UX was supposed to be roughly like:
>>
>> 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
>> 2) yum install fuel-upgrade
>> 3) /usr/bin/fuel-upgrade (the script was going to become lighter, because
>> there should no longer be parts copying RPM/DEB repos)
>>
>> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and
>> it is not enough to just do things which we usually did during upgrades.
>> Now there are two ways to upgrade:
>> 1) to use the official Centos upgrade script for upgrading from 6 to 7
>> 2) to backup the master node, then reinstall it from scratch and then
>> apply backup
>>
>> The Upgrade team is trying to understand which way is more appropriate.
>> Regarding my tarball-related activities, I'd say that this package-based
>> upgrade approach can be aligned with (1) (fuel-upgrade would use official
>> Centos upgrade script as a first step for upgrade), but it definitely can
>> not be aligned with (2), because it assumes reinstalling the master node
>> from scratch.
>>
>> Right now, I'm finishing the work around deprecating version.yaml and my
>> further steps would be to modify fuel-upgrade script so it does not copy
>> RPM/DEB repos, but those steps make little sense taking into account Centos
>> 7 feature.
>>
>> Colleagues, let's make a decision about how we are going to upgrade the
>> master node ASAP. Probably my tarball related work should be reduced to
>> just throwing tarball away.
>>
>>
>> Vladimir Kozhukalov
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Oleg Gelbukh
On Fri, Nov 6, 2015 at 3:32 PM, Alexander Kostrikov <akostri...@mirantis.com
> wrote:

> Hi, Vladimir!
> I think that option (2) 'to backup the master node, then reinstall it
> from scratch and then apply backup' is a better way to upgrade.
> In that way we are concentrating on two problems in one feature:
> backups and upgrades.
>
That will ease development, testing and also reduce feature creep.
>

Alexander, +1 on this.

--
Best regards,
Oleg Gelbukh

>
> P.S.
> It is hard to refer to (2) because you have three (2)-s.
>
> On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> At the moment I'm working on deprecating Fuel upgrade tarball. Currently,
>> it includes the following:
>>
>> * RPM repository (upstream + mos)
>> * DEB repository (mos)
>> * openstack.yaml
>> * version.yaml
>> * upgrade script itself (+ virtualenv)
>>
>> Apart from upgrading docker containers this upgrade script makes copies
>> of the RPM/DEB repositories and puts them on the master node naming these
>> repository directories depending on what is written in openstack.yaml and
>> version.yaml. My plan was something like:
>>
>> 1) deprecate version.yaml (move all fields from there to various places)
>> 2) deliver openstack.yaml with fuel-openstack-metadata package
>> 3) do not put new repos on the master node (instead we should use online
>> repos or use fuel-createmirror to make local mirrors)
>> 4) deliver fuel-upgrade package (throw away upgrade virtualenv)
>>
>> Then UX was supposed to be roughly like:
>>
>> 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
>> 2) yum install fuel-upgrade
>> 3) /usr/bin/fuel-upgrade (the script was going to become lighter, because
>> there should no longer be parts copying RPM/DEB repos)
>>
>> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and
>> it is not enough to just do things which we usually did during upgrades.
>> Now there are two ways to upgrade:
>> 1) to use the official Centos upgrade script for upgrading from 6 to 7
>> 2) to backup the master node, then reinstall it from scratch and then
>> apply backup
>>
>> The Upgrade team is trying to understand which way is more appropriate.
>> Regarding my tarball-related activities, I'd say that this package-based
>> upgrade approach can be aligned with (1) (fuel-upgrade would use official
>> Centos upgrade script as a first step for upgrade), but it definitely can
>> not be aligned with (2), because it assumes reinstalling the master node
>> from scratch.
>>
>> Right now, I'm finishing the work around deprecating version.yaml and my
>> further steps would be to modify fuel-upgrade script so it does not copy
>> RPM/DEB repos, but those steps make little sense taking into account Centos
>> 7 feature.
>>
>> Colleagues, let's make a decision about how we are going to upgrade the
>> master node ASAP. Probably my tarball related work should be reduced to
>> just throwing tarball away.
>>
>>
>> Vladimir Kozhukalov
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Kind Regards,
>
> Alexandr Kostrikov,
>
> Mirantis, Inc.
>
> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>
>
> Tel.: +7 (495) 640-49-04
> Tel.: +7 (925) 716-64-52
>
> Skype: akostrikov_mirantis
>
> E-mail: akostri...@mirantis.com
>
> *www.mirantis.com <http://www.mirantis.ru/>*
> *www.mirantis.ru <http://www.mirantis.ru/>*
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Oleg Gelbukh
Matt,

Are you talking about this part of the Operations guide [1], or do you mean
something else?

If yes, then we still need to extract data from the backed-up containers.
I'd prefer a backup of the DB as a simple plain-text file, since our DBs are
not that big.

[1]
https://docs.mirantis.com/openstack/fuel/fuel-7.0/operations.html#howto-backup-and-restore-fuel-master

--
Best regards,
Oleg Gelbukh

On Fri, Nov 6, 2015 at 6:03 PM, Matthew Mosesohn <mmoses...@mirantis.com>
wrote:

> Oleg,
>
> All the volatile information, including a DB dump, are contained in the
> small Fuel Master backup. There should be no information lost unless there
> was manual customization done inside the containers (such as puppet
> manifest changes). There shouldn't be a need to back up the entire
> containers.
>
> The information we would lose would include the IP configuration of
> interfaces besides the one used for the Fuel PXE network and any custom
> configuration done on the Fuel Master.
>
> I want #1 to work smoothly, but #2 should also be a safe route.
>
> On Fri, Nov 6, 2015 at 5:39 PM, Oleg Gelbukh <ogelb...@mirantis.com>
> wrote:
>
>> Evgeniy,
>>
>> On Fri, Nov 6, 2015 at 3:27 PM, Evgeniy L <e...@mirantis.com> wrote:
>>
>>> Also we should decide when to run containers
>>> upgrade + host upgrade? Before or after new CentOS is installed? Probably
>>> it should be done before we run backup, in order to get the latest
>>> scripts for
>>> backup/restore actions.
>>>
>>
>> We're working to determine if we need to backup/upgrade containers at
>> all. My expectation is that we should be OK with just backup of DB, IP
>> addresses settings from astute.yaml for the master node, and credentials
>> from configuration files for the services.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>>
>>>
>>> Thanks,
>>>
>>> On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
>>>> Dear colleagues,
>>>>
>>>> At the moment I'm working on deprecating Fuel upgrade tarball.
>>>> Currently, it includes the following:
>>>>
>>>> * RPM repository (upstream + mos)
>>>> * DEB repository (mos)
>>>> * openstack.yaml
>>>> * version.yaml
>>>> * upgrade script itself (+ virtualenv)
>>>>
>>>> Apart from upgrading docker containers this upgrade script makes copies
>>>> of the RPM/DEB repositories and puts them on the master node naming these
>>>> repository directories depending on what is written in openstack.yaml and
>>>> version.yaml. My plan was something like:
>>>>
>>>> 1) deprecate version.yaml (move all fields from there to various places)
>>>> 2) deliver openstack.yaml with fuel-openstack-metadata package
>>>> 3) do not put new repos on the master node (instead we should use
>>>> online repos or use fuel-createmirror to make local mirrors)
>>>> 4) deliver fuel-upgrade package (throw away upgrade virtualenv)
>>>>
>>>> Then UX was supposed to be roughly like:
>>>>
>>>> 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
>>>> 2) yum install fuel-upgrade
>>>> 3) /usr/bin/fuel-upgrade (script was going to become lighter, because
>>>> there should no longer be parts copying RPM/DEB repos)
>>>>
>>>> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and
>>>> it is not enough to just do things which we usually did during upgrades.
>>>> Now there are two ways to upgrade:
>>>> 1) to use the official Centos upgrade script for upgrading from 6 to 7
>>>> 2) to backup the master node, then reinstall it from scratch and then
>>>> apply backup
>>>>
>>>> Upgrade team is trying to understand which way is more appropriate.
>>>> Regarding to my tarball related activities, I'd say that this package based
>>>> upgrade approach can be aligned with (1) (fuel-upgrade would use official
>>>> Centos upgrade script as a first step for upgrade), but it definitely can
>>>> not be aligned with (2), because it assumes reinstalling the master node
>>>> from scratch.
>>>>
>>>> Right now, I'm finishing the work around deprecating version.yaml and
>>>> my further steps would be to modify fuel-upgrade script so it does not copy
>>>> RPM/DEB repos, but those steps make little sense taking into account Centos
>>>> 7 fe

Re: [openstack-dev] [Fuel][Fuel-Modularization] Proposal on Decoupling Serializers from Nailgun

2015-10-28 Thread Oleg Gelbukh
You're mixing the responsibilities of the orchestrator and the settings
store. The orchestrator tells a component when to start, and the component
could poll the settings store to determine if a certain task has to be
started. Alternatively, the component itself might be notified by the
settings store about changes and decide for itself whether it should execute
something on such a change or not.

The proposed settings store shall provide an API that supports the following
(a rough sketch follows the list):

1. CRUD of configuration schemas (view templates), which can be created and
updated dynamically
2. CRUD of views based on the registered templates for the component (ideally
there should be a 1-to-1 template/component relation)
3. automatic resolution and update of links between certain parameters
defined in templates
4. notification of components about changes in their respective views
triggered by external links
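
As mentioned above, a rough sketch of what this interface could look like in
Python. All names here are invented for illustration and do not claim to be
the eventual Nailgun API:

    class SettingsStore(object):
        """Illustrative only; not a real implementation."""

        def __init__(self):
            self.templates = {}    # component name -> view template (schema)
            self.views = {}        # component name -> rendered view data
            self.subscribers = {}  # component name -> change callback

        def register_template(self, component, template):
            # (1) CRUD of configuration schemas (view templates)
            self.templates[component] = template
            self.views.setdefault(component, {})

        def update_view(self, component, data):
            # (2) CRUD of views; (3)/(4) propagate the change and notify
            # every other component that may link to these parameters.
            self.views[component].update(data)
            for name, notify in self.subscribers.items():
                if name != component:
                    notify(component, data)

        def subscribe(self, component, callback):
            self.subscribers[component] = callback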

--
Best regards,
Oleg Gelbukh


>
>
> On Thu, Oct 22, 2015 at 1:25 PM, Dmitriy Shulyak <dshul...@mirantis.com>
> wrote:
>
>>
>> Hi Oleg,
>>
>> I want to mention that we are using similar approach for deployment
>> engine, the difference is that we are working not with components, but with
>> deployment objects (it could be resources or tasks).
>> Right now all the data should be provided by user, but we are going to
>> add concept of managed resource, so that resource will be able to request
>> data from 3rd party service before execution, or by notification, if it is
>> supported.
>> I think this is similar to what Vladimir describes.
>>
>> As for the components - i see how it can be useful, for example
>> provisioning service will require data from networking service, but i think
>> nailgun can act as router for such cases.
>> This way we will keep components simple and purely functional, and
>> nailgun will perform a role of a client which knows how to build
>> interaction between components.
>>
>> So, as a summary i think this is 2 different problems.
>>
>>
>>
>>
>>
>>
>>
>>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com <http://www.mirantis.ru/>
> www.mirantis.ru
> vkuk...@mirantis.com
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] OpenStack versioning in Fuel

2015-10-22 Thread Oleg Gelbukh
Igor, it is interesting that you mention backward compatibility in this
context.

I can see lots of code in Nailgun that checks the release version to
enable/disable features that were added or removed more than two releases
ago [1] [2] [3] (and there is a lot more).

What should we do about that code? I believe we could 'safely' delete it.
It would make our code base much more compact and supportable without even
decoupling serializers, etc. Is my assumption correct, or am I just missing
something?

This will also help us switch to another release versioning scheme, since
there will be far fewer places where the version scheme is hardcoded.

[1]
https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/objects/release.py#L142-L145
[2]
https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_serializers.py#L554-L555
[3]
https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/objects/serializers/node.py#L124-L126
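
To make the pattern concrete, here is an illustrative rewrite of such a gate
(not a quote of the linked code; the function name is made up):

    from distutils.version import StrictVersion

    def feature_supported(env_version):
        # A release-gated branch: once every supported release satisfies
        # the check, the condition is effectively dead code.
        return StrictVersion(env_version) >= StrictVersion("6.0")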

--
Best regards,
Oleg Gelbukh

On Mon, Oct 19, 2015 at 6:34 PM, Igor Kalnitsky <ikalnit...@mirantis.com>
wrote:

> Oleg,
>
> I think we can remove this function for new releases and keep them
> only for backward compatibility with previous ones. Why not? If
> there's a way to do things better let's do them better. :)
>
> On Sat, Oct 17, 2015 at 11:50 PM, Oleg Gelbukh <ogelb...@mirantis.com>
> wrote:
> > In short, because of this:
> >
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/db/sqlalchemy/models/release.py#L74-L99
> >
> > Unless we use dashed 2-component version where OpenStack version comes
> > first, followed by version of Fuel, this will break creation of a cluster
> > with given release.
> >
> > -Oleg
> >
> > On Sat, Oct 17, 2015 at 10:24 PM, Sergii Golovatiuk
> > <sgolovat...@mirantis.com> wrote:
> >>
> >> Why can't we use 'liberty' without 8.0?
> >>
> >> On Sat, 17 Oct 2015 at 19:33, Oleg Gelbukh <ogelb...@mirantis.com>
> wrote:
> >>>
> >>> After closer look, the only viable option in closer term seems to be
> >>> 'liberty-8.0' version. It does not break comparisons that exist in
> the
> >>> code and allows for smooth transition.
> >>>
> >>> --
> >>> Best regards,
> >>> Oleg Gelbukh
> >>>
> >>> On Fri, Oct 16, 2015 at 5:35 PM, Igor Kalnitsky <
> ikalnit...@mirantis.com>
> >>> wrote:
> >>>>
> >>>> Oleg,
> >>>>
> >>>> Awesome! That's what I was looking for. :)
> >>>>
> >>>> - Igor
> >>>>
> >>>> On Fri, Oct 16, 2015 at 5:09 PM, Oleg Gelbukh <ogelb...@mirantis.com>
> >>>> wrote:
> >>>> > Igor,
> >>>> >
> >>>> > Got your question now. Coordinated point (maintenance) releases are
> >>>> > dropped.
> >>>> > [1] [2]
> >>>> >
> >>>> > [1]
> >>>> >
> http://lists.openstack.org/pipermail/openstack-dev/2015-May/065144.html
> >>>> > [2]
> >>>> >
> >>>> >
> https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fliberty_releases
> >>>> >
> >>>> > --
> >>>> > Best regards,
> >>>> > Oleg Gelbukh
> >>>> >
> >>>> > On Fri, Oct 16, 2015 at 3:30 PM, Igor Kalnitsky
> >>>> > <ikalnit...@mirantis.com>
> >>>> > wrote:
> >>>> >>
> >>>> >> Oleg,
> >>>> >>
> >>>> >> Yes, I know. Still you didn't answer my question - are they
> planning
> >>>> >> to release stable branches time-to-time? Like I said, Liberty is
> >>>> >> something similar 2015.2.0. How they will name release of something
> >>>> >> like 2015.2.1 (stable release, with bugfixes) ? Or they plan to
> drop
> >>>> >> it?
> >>>> >>
> >>>> >> Thanks,
> >>>> >> Igor
> >>>> >>
> >>>> >> On Fri, Oct 16, 2015 at 1:02 PM, Oleg Gelbukh <
> ogelb...@mirantis.com>
> >>>> >> wrote:
> >>>> >> > Igor,
> >>>> >> >
> >>>> >> > The point is that there's no 2015.2.0 version anywhere in
> >>>> >> > OpenStack. So
> >>>> >> > every component will be versioned separately, for example

Re: [openstack-dev] [Fuel][Fuel-Modularization] Proposal on Decoupling Serializers from Nailgun

2015-10-22 Thread Oleg Gelbukh
Hello,

We discussed this proposal in our team and came up with the following
vision of a configuration provisioning system:

- Installer is a system of multiple components: hardware inventory,
user interfaces, provisioning modules, deployment modules, checker modules,
volume manager, network manager, plugins, etc.
- Every component has its own data representation (we call them 'views'
as they provide an introspection in the configuration of the system), which
should include all the settings data the component needs access to in order
to perform its functions.
- Every component has 2 types of data in its view/representation:
authoritative data (which the component can modify) and external data
(which essentially is links to elements of another component's
view/representation).
- There is no 'universal' or 'general' representation of data which
serves as a source of truth for all other views: every component is a source
of truth for its authoritative data.
- Views are defined as templates in some declarative language (YAML,
JSON, XML, %whatever%); think of jsonschema here. Authoritative settings of
the component have only a type; external settings must also contain a link to
an external view (might be just a piece of code with properly referenced
elements of the external view as parameters). A rough example follows this
list.
- A view template shall be rendered into the data store during
'registration' of the component in the system, i.e. a data structure shall be
created to represent the format of the data with the necessary links.
- Views can be saved to the data store and modified by the component that
'owns' the view's template, or via the system's API. Changes to authoritative
settings in the view shall be propagated to all views that contain external
links to those settings.
- Both a view template and the views defined by it have versions. The
template version is defined by the version of its owner component. The view
version increases with every change made to it and can be used by the
orchestrator and the component to determine if an async update of the view
was made via external links.
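
As promised, a made-up example of a view template for a hypothetical
networking component, written as a jsonschema-flavoured Python dict (the
component name and the links are purely illustrative):

    NETWORK_VIEW_TEMPLATE = {
        "component": "network-manager",  # hypothetical component name
        "version": "1.0.0",              # template version = owner's version
        "authoritative": {
            # data this component owns and may modify
            "public_vip": {"type": "string"},
            "floating_ranges": {"type": "array"},
        },
        "external": {
            # links into another component's view
            "node_count": {"link": "inventory/nodes/count"},
        },
    }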

We will continue to flesh it out as a specification in the Fuel specs
repository. I will greatly appreciate any feedback on this vision,
including comments, objections, concerns and questions.

--
Best regards,
Oleg Gelbukh

On Tue, Oct 20, 2015 at 2:13 PM, Vladimir Kuklin <vkuk...@mirantis.com>
wrote:

> Folks
>
> Can we please stop using etherpad and move to some more usable thing as
> Google Docs? Etherpad seems too unusable for such discussion especially
> with this coloured formatting.
>
> Mike
>
> I currently see no need in following marketing trend for noSQL here - we
> need to store a set of structured data. This store should be the one that
> can be easily consumed directly or with some API wrapper. That is all. We
> will need to carefully evaluate each storage engine and decide which to
> pick. I personally insist on the engine that provides 100% consistency
> which is in fact opposite to what most of noSQL and distributed
> architectures provide. Nobody cares if you lose 1 billion of messages in a
> social network (even these messages authors) - this is almost all the time
> garbage with porn and cat pictures. Things will get worse if you destroy
> something in production serving accounting in your cloud due to the fact
> that nodes are
>
> I agree with option #2 - we actually should have task abstraction layer
> with drivers for execution, but I would go with baby steps for supporting
> other deployment tools - currently I do not see any benefit in using
> Ansible for tasks that Fuel is solving. The same is almost true for
> containers, but this is a different story.
>
> Eugene, Mike
>
> I agree with you that we need to think about where to execute these
> serializers. I think that we could do it the following way - serializer can
> be executed wherever it can actually work and it should possibly put data
> into centralized storage for the means of logging, control and accounting.
> I am not sure that this is the limitation case all the users will agree
> with, but we need to think of it.
>
> Regarding this 'last task throwing an exception issue' - we can handle
> this properly by simply rerunning the task that failed only due to
> serialization problem. Or even better - reorder its execution for later
> steps and try it again in a while if there are other tasks to be executed.
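
For illustration, the 'reorder and retry later' idea above could look like
this rough Python sketch; SerializationError is a placeholder exception, not
an existing Fuel class:

    from collections import deque

    class SerializationError(Exception):
        """Placeholder: a task's input data is not ready yet."""

    def run_tasks(tasks):
        queue = deque(tasks)
        retried = set()
        while queue:
            task = queue.popleft()
            try:
                task.run()
            except SerializationError:
                if id(task) in retried:
                    raise  # failed twice: the data never appeared
                retried.add(id(task))
                queue.append(task)  # retry after the remaining tasks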
>
> But Mike's approach of data preparation prior to deployment/workflow
> transaction execution seems more viable. I think, we should follow the
> following one: "If you do not know the data before the transaction run,
> this data should be calculated after this transaction ends and this data
> should be used for another workflow in a different transaction".
>
>
> On Tue, Oct 20, 2015 at 1:20 PM, Evgeniy L <e...@mirantis.com>

Re: [openstack-dev] [Fuel][Fuel-Modularization] Proposal on Decoupling Serializers from Nailgun

2015-10-22 Thread Oleg Gelbukh
Hi Vladimir,

Thanks for the prompt reply. Please see my comments inline.

On Thu, Oct 22, 2015 at 12:44 PM, Vladimir Kuklin <vkuk...@mirantis.com>
wrote:

> Oleg
>
> Thank you for your feedback. IMO, the schema you are providing is very
> complex and would surely benefit from some examples.
>

I'm going to submit a spec for review that will incorporate examples and
diagrams for sure. I expect to come up with it in a couple of days, most
likely by Monday.

>
> If I understand correctly your proposal, you are trying to do the things
> that we actually want to get rid of - tight coupling and schema control of
> data that is being used by components.
>

Your understanding is mostly correct. However, the important thing here is
that we propose an API that will allow adjusting the schema of a particular
view at any time, registering a new schema (for a new/added component), etc.,
(almost) without writing Python code.


> There should be no cross-reference between components that do actual
> deployment. Instead, there should be a clear separation between layers of
> our deployment system.
>

Such a separation will not be enforced by the system we propose. However,
if we indeed have some 'hierarchy' of components, it will be naturally
reflected in the way the links are specified in templates. For example, if
our primary source of configuration settings is UI/API, then it will be
authoritative for configurable parameters, like backend selection, IP
address ranges, etc. However, settings that are discovered from actual
nodes shall be provided by corresponding components, like 'nailgun-agent'.

Deployment modules most likely won't be authoritative for any settings, as
far as I can tell at the moment. They could, however, provide feedback-like
parameters, for instance, those that can be calculated only at runtime.


>
> All the data that is provided to deployment (or
> provisioning/power_management/etc.) tasks should be accessible through API
> of the top-level components such as
> Network/Partitioning/IPAddressAllocation/ Manager or any other type of
> external configuration database such as ENC of external puppet
> master/LDAP/.
>

This very proposal is about creating such an API (whether service-like or
library-like) that other components and even end users can leverage to
access and manage configuration parameters. We probably should start with
library API, and decide whether we need service of this kind later.


>
>  Each task can use some code to transform this output to the
> representation that is actually needed for this particular task. Whenever a
> task transforms this data it can access API and do version negotiation, for
> example. Each time this transformation is performed this task can return
> the data to some storage that will save this data for sake of control and
> troubleshooting, such as, for example, user can always see which changes
> are going to be applied and decide what to do next.
>
> Also, this means that the process of data calculation itself is very
> 'lazy' or 'delayed', i. e. the data itself is calculated right at the
> beginning of deployment transaction, so that it is not locked to some
> particular details of deployment engine data processing and not prone to
> issues like 'oh, I cannot get VIP because it has not been allocated yet by
> Nailgun/oh, I cannot set it because it has already been set by Nailgun and
> there is no way to alter it'.
>

To me, the two paragraphs above are contradictory. If the data calculations
are lazy, I don't really see how one can introspect the changes that will
be applied by a component on any given run. You just don't have this
information, and you need to calculate it anyway to see which settings
will be passed to a component. I might have got your point wrong here;
please correct me if that is the case.

Thanks again, looking forward to hear from you.

--
Best regards,
Oleg Gelbukh


>
> On Thu, Oct 22, 2015 at 12:16 PM, Oleg Gelbukh <ogelb...@mirantis.com>
> wrote:
>
>> Hello,
>>
>> We discussed this proposal in our team and came up with the following
>> vision of a configuration provisioning system:
>>
>> - Installer is a system of multiple components: hardware inventory,
>> user interfaces, provisioning modules, deployment modules, checker modules,
>> volume manager, network manager, plugins, etc.
>> - Every component has its own data representation (we call them
>> 'views' as they provide an introspection in the configuration of the
>> system), which should include all the settings data the component should
>> have access to to perform its functions.
>> - Every component has 2 types of data in its view/representation:
>> authoritative data (which the component can modify) and external data
>> (which essentially is links 

Re: [openstack-dev] [fuel] OpenStack versioning in Fuel

2015-10-17 Thread Oleg Gelbukh
After a closer look, the only viable option in the near term seems to be the
'liberty-8.0' version. It does not break comparisons that exist in the code
and allows for a smooth transition.

--
Best regards,
Oleg Gelbukh

On Fri, Oct 16, 2015 at 5:35 PM, Igor Kalnitsky <ikalnit...@mirantis.com>
wrote:

> Oleg,
>
> Awesome! That's what I was looking for. :)
>
> - Igor
>
> On Fri, Oct 16, 2015 at 5:09 PM, Oleg Gelbukh <ogelb...@mirantis.com>
> wrote:
> > Igor,
> >
> > Got your question now. Coordinated point (maintenance) releases are
> dropped.
> > [1] [2]
> >
> > [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-May/065144.html
> > [2]
> >
> https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fliberty_releases
> >
> > --
> > Best regards,
> > Oleg Gelbukh
> >
> > On Fri, Oct 16, 2015 at 3:30 PM, Igor Kalnitsky <ikalnit...@mirantis.com
> >
> > wrote:
> >>
> >> Oleg,
> >>
> >> Yes, I know. Still you didn't answer my question - are they planning
> >> to release stable branches time-to-time? Like I said, Liberty is
> >> something similar 2015.2.0. How they will name release of something
> >> like 2015.2.1 (stable release, with bugfixes) ? Or they plan to drop
> >> it?
> >>
> >> Thanks,
> >> Igor
> >>
> >> On Fri, Oct 16, 2015 at 1:02 PM, Oleg Gelbukh <ogelb...@mirantis.com>
> >> wrote:
> >> > Igor,
> >> >
> >> > The point is that there's no 2015.2.0 version anywhere in OpenStack.
> So
> >> > every component will be versioned separately, for example, in Liberty,
> >> > Nova
> >> > has version 12.0.0, and minor release of it is going to have version
> >> > 12.0.1,
> >> > while Keystone, for instance, will have version 11.0.0 and 11.0.1 for
> >> > minor
> >> > release.
> >> >
> >> > The problem in Fuel is that coordinated release version is used in
> >> > several
> >> > places, the most important being installation path of the
> fuel-library.
> >> > We
> >> > won't be able to use it the same way since Liberty. I'd like to
> >> > understand
> >> > how we are going to handle that.
> >> >
> >> > My suggestion actually is to move away from using OpenStack version
> as a
> >> > part of Fuel version. Then the path to install the fuel-library will
> be
> >> > '/etc/puppet/8.0.0/'.
> >> >
> >> > --
> >> > Best regards,
> >> > Oleg Gelbukh
> >> >
> >> > On Fri, Oct 16, 2015 at 12:45 PM, Igor Kalnitsky
> >> > <ikalnit...@mirantis.com>
> >> > wrote:
> >> >>
> >> >> Hey Oleg,
> >> >>
> >> >> I've read the post [1] and I didn't get how exactly minor releases of
> >> >> *stable* branch will be versioned?
> >> >>
> >> >> Let's say 2015.2.0 is Liberty. How 2015.2.1 will be versioned?
> >> >>
> >> >> [1] http://ttx.re/new-versioning.html
> >> >>
> >> >> Thanks,
> >> >> Igor
> >> >>
> >> >>
> >> >> On Thu, Oct 15, 2015 at 6:59 PM, Oleg Gelbukh <ogelb...@mirantis.com
> >
> >> >> wrote:
> >> >> > Hello,
> >> >> >
> >> >> > I would like to highlight a problem that we are now going to have
> in
> >> >> > Fuel
> >> >> > regarding versioning of OpenStack.
> >> >> >
> >> >> > As you know, with introduction of the Big Tent policy it was
> decided
> >> >> > that
> >> >> > since Liberty dev cycle versioning schema of the whole project
> >> >> > changes.
> >> >> > Year-based versions won't be assigned to individual projects, nor
> the
> >> >> > coordinated release is going to have unified number [1]. Individual
> >> >> > projects
> >> >> > will have semver version numbers, while numbering of the release
> >> >> > itself
> >> >> > seems to be dropped.
> >> >> >
> >> >> > However, in Fuel there is a lot of places where we use year-based
> >> >> > version of
> >> >> > OpenStac

Re: [openstack-dev] [fuel] OpenStack versioning in Fuel

2015-10-17 Thread Oleg Gelbukh
In short, because of this:
https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/db/sqlalchemy/models/release.py#L74-L99

Unless we use a dashed 2-component version where the OpenStack version comes
first, followed by the version of Fuel, this will break creation of a cluster
with the given release.
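
For what it's worth, the reason the dashed form survives can be shown with a
toy split rather than the model code itself (illustrative only):

    def openstack_part(release_version):
        # Both the old and the new style keep the OpenStack release before
        # the dash, so code that splits on '-' keeps working unchanged.
        return release_version.split("-")[0]

    assert openstack_part("2014.2-6.1") == "2014.2"
    assert openstack_part("liberty-8.0") == "liberty"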

-Oleg

On Sat, Oct 17, 2015 at 10:24 PM, Sergii Golovatiuk <
sgolovat...@mirantis.com> wrote:

> Why can't we use 'liberty' without 8.0?
>
> On Sat, 17 Oct 2015 at 19:33, Oleg Gelbukh <ogelb...@mirantis.com> wrote:
>
>> After closer look, the only viable option in closer term seems to be
>> 'liberty-8.0' version. It does not break comparisons that exist in the
>> code and allows for smooth transition.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Fri, Oct 16, 2015 at 5:35 PM, Igor Kalnitsky <ikalnit...@mirantis.com>
>> wrote:
>>
>>> Oleg,
>>>
>>> Awesome! That's what I was looking for. :)
>>>
>>> - Igor
>>>
>>> On Fri, Oct 16, 2015 at 5:09 PM, Oleg Gelbukh <ogelb...@mirantis.com>
>>> wrote:
>>> > Igor,
>>> >
>>> > Got your question now. Coordinated point (maintenance) releases are
>>> dropped.
>>> > [1] [2]
>>> >
>>> > [1]
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-May/065144.html
>>> > [2]
>>> >
>>> https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fliberty_releases
>>> >
>>> > --
>>> > Best regards,
>>> > Oleg Gelbukh
>>> >
>>> > On Fri, Oct 16, 2015 at 3:30 PM, Igor Kalnitsky <
>>> ikalnit...@mirantis.com>
>>> > wrote:
>>> >>
>>> >> Oleg,
>>> >>
>>> >> Yes, I know. Still you didn't answer my question - are they planning
>>> >> to release stable branches time-to-time? Like I said, Liberty is
>>> >> something similar 2015.2.0. How they will name release of something
>>> >> like 2015.2.1 (stable release, with bugfixes) ? Or they plan to drop
>>> >> it?
>>> >>
>>> >> Thanks,
>>> >> Igor
>>> >>
>>> >> On Fri, Oct 16, 2015 at 1:02 PM, Oleg Gelbukh <ogelb...@mirantis.com>
>>> >> wrote:
>>> >> > Igor,
>>> >> >
>>> >> > The point is that there's no 2015.2.0 version anywhere in
>>> OpenStack. So
>>> >> > every component will be versioned separately, for example, in
>>> Liberty,
>>> >> > Nova
>>> >> > has version 12.0.0, and minor release of it is going to have version
>>> >> > 12.0.1,
>>> >> > while Keystone, for instance, will have version 11.0.0 and 11.0.1
>>> for
>>> >> > minor
>>> >> > release.
>>> >> >
>>> >> > The problem in Fuel is that coordinated release version is used in
>>> >> > several
>>> >> > places, the most important being installation path of the
>>> fuel-library.
>>> >> > We
>>> >> > won't be able to use it the same way since Liberty. I'd like to
>>> >> > understand
>>> >> > how we are going to handle that.
>>> >> >
>>> >> > My suggestion actually is to move away from using OpenStack version
>>> as a
>>> >> > part of Fuel version. Then the path to install the fuel-library
>>> will be
>>> >> > '/etc/puppet/8.0.0/'.
>>> >> >
>>> >> > --
>>> >> > Best regards,
>>> >> > Oleg Gelbukh
>>> >> >
>>> >> > On Fri, Oct 16, 2015 at 12:45 PM, Igor Kalnitsky
>>> >> > <ikalnit...@mirantis.com>
>>> >> > wrote:
>>> >> >>
>>> >> >> Hey Oleg,
>>> >> >>
>>> >> >> I've read the post [1] and I didn't get how exactly minor releases
>>> of
>>> >> >> *stable* branch will be versioned?
>>> >> >>
>>> >> >> Let's say 2015.2.0 is Liberty. How 2015.2.1 will be versioned?
>>> >> >>
>>> >> >> [1] http://ttx.re/new-versioning.html
>>> >> >>
>>> >> >> Thanks,
>>> >> >> Igor
>>> >> >>
>>> >> >>
>>

Re: [openstack-dev] [fuel] OpenStack versioning in Fuel

2015-10-16 Thread Oleg Gelbukh
Igor,

The point is that there's no 2015.2.0 version anywhere in OpenStack. So
every component will be versioned separately; for example, in Liberty, Nova
has version 12.0.0, and a minor release of it is going to have version
12.0.1, while Keystone, for instance, will have version 11.0.0 and 11.0.1
for a minor release.

The problem in Fuel is that the coordinated release version is used in
several places, the most important being the installation path of the
fuel-library. We won't be able to use it the same way as of Liberty. I'd like
to understand how we are going to handle that.

My suggestion actually is to move away from using the OpenStack version as a
part of the Fuel version. Then the path to install the fuel-library will be
'/etc/puppet/8.0.0/'.

--
Best regards,
Oleg Gelbukh

On Fri, Oct 16, 2015 at 12:45 PM, Igor Kalnitsky <ikalnit...@mirantis.com>
wrote:

> Hey Oleg,
>
> I've read the post [1] and I didn't get how exactly minor releases of
> *stable* branch will be versioned?
>
> Let's say 2015.2.0 is Liberty. How 2015.2.1 will be versioned?
>
> [1] http://ttx.re/new-versioning.html
>
> Thanks,
> Igor
>
>
> On Thu, Oct 15, 2015 at 6:59 PM, Oleg Gelbukh <ogelb...@mirantis.com>
> wrote:
> > Hello,
> >
> > I would like to highlight a problem that we are now going to have in Fuel
> > regarding versioning of OpenStack.
> >
> > As you know, with introduction of the Big Tent policy it was decided that
> > since Liberty dev cycle versioning schema of the whole project changes.
> > Year-based versions won't be assigned to individual projects, nor the
> > coordinated release is going to have unified number [1]. Individual
> projects
> > will have semver version numbers, while numbering of the release itself
> > seems to be dropped.
> >
> > However, in Fuel there is a lot of places where we use year-based
> version of
> > OpenStack release. [2] How are we going to handle this? Shall we have
> > openstack_version: 2015.2 all over the place? Or we should come up with
> > something more sophisticated? Or just drop OpenStack version component
> from
> > our versioning schema for good?
> >
> > Please, share your opinions here or in corresponding reviews.
> >
> > [1] http://ttx.re/new-versioning.html
> > [2] https://review.openstack.org/#/c/234296/
> >
> > --
> > Best regards,
> > Oleg Gelbukh
> >
> >
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] OpenStack versioning in Fuel

2015-10-16 Thread Oleg Gelbukh
Igor,

Got your question now. Coordinated point (maintenance) releases are
dropped. [1] [2]

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-May/065144.html
[2]
https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fliberty_releases

--
Best regards,
Oleg Gelbukh

On Fri, Oct 16, 2015 at 3:30 PM, Igor Kalnitsky <ikalnit...@mirantis.com>
wrote:

> Oleg,
>
> Yes, I know. Still you didn't answer my question - are they planning
> to release stable branches time-to-time? Like I said, Liberty is
> something similar 2015.2.0. How they will name release of something
> like 2015.2.1 (stable release, with bugfixes) ? Or they plan to drop
> it?
>
> Thanks,
> Igor
>
> On Fri, Oct 16, 2015 at 1:02 PM, Oleg Gelbukh <ogelb...@mirantis.com>
> wrote:
> > Igor,
> >
> > The point is that there's no 2015.2.0 version anywhere in OpenStack. So
> > every component will be versioned separately, for example, in Liberty,
> Nova
> > has version 12.0.0, and minor release of it is going to have version
> 12.0.1,
> > while Keystone, for instance, will have version 11.0.0 and 11.0.1 for
> minor
> > release.
> >
> > The problem in Fuel is that coordinated release version is used in
> several
> > places, the most important being installation path of the fuel-library.
> We
> > won't be able to use it the same way since Liberty. I'd like to
> understand
> > how we are going to handle that.
> >
> > My suggestion actually is to move away from using OpenStack version as a
> > part of Fuel version. Then the path to install the fuel-library will be
> > '/etc/puppet/8.0.0/'.
> >
> > --
> > Best regards,
> > Oleg Gelbukh
> >
> > On Fri, Oct 16, 2015 at 12:45 PM, Igor Kalnitsky <
> ikalnit...@mirantis.com>
> > wrote:
> >>
> >> Hey Oleg,
> >>
> >> I've read the post [1] and I didn't get how exactly minor releases of
> >> *stable* branch will be versioned?
> >>
> >> Let's say 2015.2.0 is Liberty. How 2015.2.1 will be versioned?
> >>
> >> [1] http://ttx.re/new-versioning.html
> >>
> >> Thanks,
> >> Igor
> >>
> >>
> >> On Thu, Oct 15, 2015 at 6:59 PM, Oleg Gelbukh <ogelb...@mirantis.com>
> >> wrote:
> >> > Hello,
> >> >
> >> > I would like to highlight a problem that we are now going to have in
> >> > Fuel
> >> > regarding versioning of OpenStack.
> >> >
> >> > As you know, with introduction of the Big Tent policy it was decided
> >> > that
> >> > since Liberty dev cycle versioning schema of the whole project
> changes.
> >> > Year-based versions won't be assigned to individual projects, nor the
> >> > coordinated release is going to have unified number [1]. Individual
> >> > projects
> >> > will have semver version numbers, while numbering of the release
> itself
> >> > seems to be dropped.
> >> >
> >> > However, in Fuel there is a lot of places where we use year-based
> >> > version of
> >> > OpenStack release. [2] How are we going to handle this? Shall we have
> >> > openstack_version: 2015.2 all over the place? Or we should come up
> with
> >> > something more sophisticated? Or just drop OpenStack version component
> >> > from
> >> > our versioning schema for good?
> >> >
> >> > Please, share your opinions here or in corresponding reviews.
> >> >
> >> > [1] http://ttx.re/new-versioning.html
> >> > [2] https://review.openstack.org/#/c/234296/
> >> >
> >> > --
> >> > Best regards,
> >> > Oleg Gelbukh
> >> >
> >> >
> >> >
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] OpenStack versioning in Fuel

2015-10-15 Thread Oleg Gelbukh
Hello,

I would like to highlight a problem that we are now going to have in Fuel
regarding versioning of OpenStack.

As you know, with the introduction of the Big Tent policy it was decided
that, starting with the Liberty dev cycle, the versioning schema of the whole
project changes. Year-based versions won't be assigned to individual
projects, nor is the coordinated release going to have a unified number [1].
Individual projects will have semver version numbers, while numbering of the
release itself seems to be dropped.

However, in Fuel there are a lot of places where we use the year-based
version of the OpenStack release [2]. How are we going to handle this? Shall
we have openstack_version: 2015.2 all over the place? Should we come up with
something more sophisticated? Or just drop the OpenStack version component
from our versioning schema for good?

Please, share your opinions here or in corresponding reviews.

[1] http://ttx.re/new-versioning.html
[2] https://review.openstack.org/#/c/234296/

--
Best regards,
Oleg Gelbukh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Core Reviewers groups restructure

2015-09-21 Thread Oleg Gelbukh
FYI, we have a separate core group for stackforge/fuel-octane repository
[1].

I'm supporting the move to modularization of Fuel with cleaner separation
of authority and better defined interfaces. Thus, I'm +1 to such a change
as a part of that move.

[1] https://review.openstack.org/#/admin/groups/1020,members

--
Best regards,
Oleg Gelbukh

On Sun, Sep 20, 2015 at 11:56 PM, Mike Scherbakov <mscherba...@mirantis.com>
wrote:

> Hi all,
> as of my larger proposal on improvements to code review workflow [1], we
> need to have cores for repositories, not for the whole Fuel. It is the path
> we are taking for a while, and new core reviewers added to specific repos
> only. Now we need to complete this work.
>
> My proposal is:
>
>1. Get rid of one common fuel-core [2] group, members of which can
>merge code anywhere in Fuel. Some members of this group may cover a couple
>of repositories, but can't really be cores in all repos.
>2. Extend existing groups, such as fuel-library [3], with members from
>fuel-core who are keeping up with large number of reviews / merges. This
>data can be queried at Stackalytics.
>3. Establish a new group "fuel-infra", and ensure that it's included
>into any other core group. This is for maintenance purposes, it is expected
>to be used only in exceptional cases. Fuel Infra team will have to decide
>whom to include into this group.
>4. Ensure that fuel-plugin-* repos will not be affected by removal of
>fuel-core group.
>
> #2 needs specific details. Stackalytics can show active cores easily, we
> can look at people with *:
> http://stackalytics.com/report/contribution/fuel-web/180. This is for
> fuel-web, change the link for other repos accordingly. If people are added
> specifically to the particular group, leaving as is (some of them are no
> longer active. But let's clean them up separately from this group
> restructure process).
>
>- fuel-library-core [3] group will have following members: Bogdan D.,
>Sergii G., Alex Schultz, Vladimir Kuklin, Alex Didenko.
>- fuel-web-core [4]: Sebastian K., Igor Kalnitsky, Alexey Kasatkin,
>Vitaly Kramskikh, Julia Aranovich, Evgeny Li, Dima Shulyak
>- fuel-astute-core [5]: Vladimir Sharshov, Evgeny Li
>- fuel-dev-tools-core [6]: Przemek Kaminski, Sebastian K.
>- fuel-devops-core [7]: Tatyana Leontovich, Andrey Sledzinsky, Nastya
>Urlapova
>- fuel-docs-core [8]: Irina Povolotskaya, Denis Klepikov, Evgeny
>Konstantinov, Olga Gusarenko
>- fuel-main-core [9]: Vladimir Kozhukalov, Roman Vyalov, Dmitry
>Pyzhov, Sergii Golovatyuk, Vladimir Kuklin, Igor Kalnitsky
>- fuel-nailgun-agent-core [10]: Vladimir Sharshov, V.Kozhukalov
>- fuel-ostf-core [11]: Tatyana Leontovich, Nastya Urlapova, Andrey
>Sledzinsky, Dmitry Shulyak
>- fuel-plugins-core [12]: Igor Kalnitsky, Evgeny Li, Alexey Kasatkin
>- fuel-qa-core [13]: Andrey Sledzinsky, Tatyana Leontovich, Nastya
>Urlapova
>- fuel-stats-core [14]: Alex Kislitsky, Alexey Kasatkin, Vitaly
>Kramskikh
>- fuel-tasklib-core [15]: Igor Kalnitsky, Dima Shulyak, Alexey
>Kasatkin (this project seems to be dead, let's consider to rip it off)
>- fuel-specs-core: there is no such a group at the moment. I propose
>to create one with following members, based on stackalytics data [16]:
>Vitaly Kramskikh, Bogdan Dobrelia, Evgeny Li, Sergii Golovatyuk, Vladimir
>Kuklin, Igor Kalnitsky, Alexey Kasatkin, Roman Vyalov, Dmitry Borodaenko,
>Mike Scherbakov, Dmitry Pyzhov. We would need to reconsider who can merge
>after Fuel PTL/Component Leads elections
>- fuel-octane-core: needs to be created. Members: Yury Taraday, Oleg
>Gelbukh, Ilya Kharin
>- fuel-mirror-core: needs to be created. Sergey Kulanov, Vitaly
>Parakhin
>- fuel-upgrade-core: needs to be created. Sebastian Kalinowski, Alex
>Schultz, Evgeny Li, Igor Kalnitsky
>- fuel-provision: repo seems to be outdated, needs to be removed.
>
> I suggest to make changes in groups first, and then separately address
> specific issues like removing someone from cores (not doing enough reviews
> anymore or too many positive reviews, let's say > 95%).
>
> I hope I don't miss anyone / anything. Please check carefully.
> Comments / objections?
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
> [2] https://review.openstack.org/#/admin/groups/209,members
> [3] https://review.openstack.org/#/admin/groups/658,members
> [4] https://review.openstack.org/#/admin/groups/664,members
> [5] https://review.openstack.org/#/admin/groups/655,members
> [6] https://review.openstack.org/#/admin/groups/646,members
> [

Re: [openstack-dev] [Fuel] Remove MOS DEB repo from master node

2015-09-10 Thread Oleg Gelbukh
The reason people want the offline deployment feature is not poor
connectivity, but rather enterprise intranets where getting a subnet with
external access is sometimes a real pain in various body parts.

--
Best regards,
Oleg Gelbukh

On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky <ikalnit...@mirantis.com>
wrote:

> Hello,
>
> I agree with Vladimir - the idea of online repos is a right way to
> move. In 2015 I believe we can ignore this "poor Internet connection"
> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
> distributives - most of them fetch needed packages from the Internet
> during installation, not from CD/DVD. The netboot installers are
> popular, I can't even remember when was the last time I install my
> Debian from the DVD-1 - I use netboot installer for years.
>
> Thanks,
> Igor
>
>
> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <yt...@mirantis.com> wrote:
> >
> >
> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschu...@mirantis.com>
> wrote:
> >>
> >>
> >> Hey Vladimir,
> >>
> >>>
> >>>
> >>>>>
> >>>>> 1) There won't be such things in like [1] and [2], thus less
> >>>>> complicated flow, less errors, easier to maintain, easier to
> understand,
> >>>>> easier to troubleshoot
> >>>>> 2) If one wants to have local mirror, the flow is the same as in case
> >>>>> of upstream repos (fuel-createmirror), which is clear for a user to
> >>>>> understand.
> >>>>
> >>>>
> >>>> From the issues I've seen,  fuel-createmirror isn't very straight
> >>>> forward and has some issues making it a bad UX.
> >>>
> >>>
> >>> I'd say the whole approach of having such tool as fuel-createmirror is
> a
> >>> way too naive. Reliable internet connection is totally up to network
> >>> engineering rather than deployment. Even using proxy is much better
> than
> >>> creating local mirror. But this discussion is totally out of the scope
> of
> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
> >>> straightforward (installed as rpm, has just a couple of command line
> >>> options). The quality of this script is also out of the scope of this
> >>> thread. BTW we have plans to improve it.
> >>
> >>
> >>
> >> Fair enough, I just wanted to raise the UX issues around these types of
> >> things as they should go into the decision making process.
> >>
> >>
> >>>
> >>>>>
> >>>>>
> >>>>> Many people still associate ISO with MOS, but it is not true when
> using
> >>>>> package based delivery approach.
> >>>>>
> >>>>> It is easy to define necessary repos during deployment and thus it is
> >>>>> easy to control what exactly is going to be installed on slave nodes.
> >>>>>
> >>>>> What do you guys think of it?
> >>>>>
> >>>>>
> >>>>
> >>>> Reliance on internet connectivity has been an issue since 6.1. For
> many
> >>>> large users, complete access to the internet is not available or not
> >>>> desired.  If we want to continue down this path, we need to improve
> the
> >>>> tools to setup the local mirror and properly document what
> urls/ports/etc
> >>>> need to be available for the installation of openstack and any mirror
> >>>> creation process.  The ideal thing is to have an all-in-one CD
> similar to a
> >>>> live cd that allows a user to completely try out fuel wherever they
> want
> >>>> with out further requirements of internet access.  If we don't want to
> >>>> continue with that, we need to do a better job around providing the
> tools
> >>>> for a user to get up and running in a timely fashion.  Perhaps
> providing an
> >>>> net-only iso and an all-included iso would be a better solution so
> people
> >>>> will have their expectations properly set up front?
> >>>
> >>>
> >>> Let me explain why I think having local MOS mirror by default is bad:
> >>> 1) I don't see any reason why we should treat MOS  repo other way than
> >>> all other online repos. A user sees on the settings tab the list of
> repos
> >>> one of which is local by default whi

Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-10 Thread Oleg Gelbukh
Alex,

I absolutely understand the point you are making about the need for
deployment engineers to modify things 'on the fly' in a customer environment.
It makes things really flexible and lowers the entry barrier for sure.

However, I would like to note that in my opinion this kind of 'monkey
patching' is actually a bad practice for any environments other than dev
ones. It immediately leads to the emergence of unsupportable frankenclouds. I
would welcome any modification to the workflow that discourages people
from doing that.

--
Best regards,
Oleg Gelbukh

On Wed, Sep 9, 2015 at 5:56 PM, Alex Schultz <aschu...@mirantis.com> wrote:

> Hey Vladimir,
>
>
>
>> Regarding plugins: plugins are welcome to install specific additional
>> DEB/RPM repos on the master node, or just configure cluster to use
>> additional online repos, where all necessary packages (including plugin
>> specific puppet manifests) are to be available. Current granular deployment
>> approach makes it easy to append specific pre-deployment tasks
>> (master/slave does not matter). Correct me if I am wrong.
>>
>>
> Don't get me wrong, I think it would be good to move to a fuel-library
> distributed via package only.  I'm bringing these points up to indicate
> that there is many other things that live in the fuel library puppet path
> than just the fuel-library package.  The plugin example is just one place
> that we will need to invest in further design and work to move to the
> package only distribution.  What I don't want is some partially executed
> work that only works for one type of deployment and creates headaches for
> the people actually having to use fuel.  The deployment engineers and
> customers who actually perform these actions should be asked about
> packaging and their comfort level with this type of requirements.  I don't
> have a complete understanding of the all the things supported today by the
> fuel plugin system so it would be nice to get someone who is more familiar
> to weigh in on this idea. Currently plugins are only rpms (no debs) and I
> don't think we are building fuel-library debs at this time either.  So
> without some work on both sides, we cannot move to just packages.
>
>
>> Regarding flexibility: having several versioned directories with puppet
>> modules on the master node, having several fuel-libraryX.Y packages
>> installed on the master node makes things "exquisitely convoluted" rather
>> than flexible. Like I said, it is flexible enough to use mcollective, plain
>> rsync, etc. if you really need to do things manually. But we have
>> convenient service (Perestroika) which builds packages in minutes if you
>> need. Moreover, In the nearest future (by 8.0) Perestroika will be
>> available as an application independent from CI. So, what is wrong with
>> building fuel-library package? What if you want to troubleshoot nova (we
>> install it using packages)? Should we also use rsync for everything else
>> like nova, mysql, etc.?
>>
>>
> Yes, we do have a service like Perestroika to build packages for us.  That
> doesn't mean everyone else does or has access to do that today.  Setting up
> a build system is a major undertaking and making that a hard requirement to
> interact with our product may be a bit much for some customers.  In
> speaking with some support folks, there are times when files have to be
> munged to get around issues because there is no package or things are on
> fire so they can't wait for a package to become available for a fix.  We
> need to be careful not to impose limits without proper justification and
> due diligence.  We already build the fuel-library package, so there's no
> reason you couldn't try switching the rsync to install the package if it's
> available on a mirror.  I just think you're going to run into the issues I
> mentioned which need to be solved before we could just mark it done.
>
> -Alex
>
>
>
>> Vladimir Kozhukalov
>>
>> On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz <aschu...@mirantis.com>
>> wrote:
>>
>>> I agree that we shouldn't need to sync as we should be able to just
>>> update the fuel-library package. That being said, I think there might be a
>>> few issues with this method. The first issue is with plugins and how to
>>> properly handle the distribution of the plugins as they may also include
>>> puppet code that needs to be installed on the other nodes for a deployment.
>>> Currently I do not believe we install the plugin packages anywhere except
>>> the master and when they do get installed there may be some post-install
>>> actions that are only valid for the master.  Another issue is being
>>&

Re: [openstack-dev] [fuel] [upgrade] Cluster Upgrade - FFE status

2015-08-03 Thread Oleg Gelbukh
Eugene,

To increase focus and improve velocity, we decided to descope the following
patch from the current release cycle [1]. This will have a minor effect on
supported use cases, but will allow us to focus on the required parts [2]
[3].

[1] https://review.openstack.org/#/c/203537/
[2] https://review.openstack.org/#/c/202969/
[3] https://review.openstack.org/#/c/203536/

--
Best regards,
Oleg Gelbukh

On Mon, Aug 3, 2015 at 2:47 PM, Eugene Bogdanov ebogda...@mirantis.com
wrote:

 Oleg, thanks for the provided information. As discussed verbally, most
 core reviewers are now busy fixing critical bugs, so you might get
 delayed responses. Keeping that in mind, let's target Wednesday, Aug 5 as
 the final deadline. With that said, let's minimize review iterations so core
 reviewers avoid unnecessary focus switches.

 --
 EugeneB



 Oleg Gelbukh ogelb...@mirantis.com
 July 31, 2015, 23:00
 Team,

 I'd like to inform you about the status of the FF Exception for the Cluster
 Upgrade feature.

 The due date for the exception was Jul 30th. We had 4 patches to merge at
 the beginning of the work.

 During review it was decided that changes to the core of Nailgun should be
 split into a separate CR, making it 5 patches. In the course of development
 we also hit a glitch in the deep layers of Nailgun's networking modules [1].
 This slowed down the development and review process, and as a result we only
 merged 1 patch out of 5 by the due date of the FFE.

 I would like to ask for 4 more days of FFE, which effectively will move
 the due date to Aug 3rd.

 [1] https://bugs.launchpad.net/fuel/+bug/1480228

 --
 Best regards,
 Oleg Gelbukh


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] [upgrade] Cluster Upgrade - FFE status

2015-07-31 Thread Oleg Gelbukh
Team,

I'd like to inform you about the status of the FF Exception for the Cluster
Upgrade feature.

The due date for the exception was Jul 30th. We had 4 patches to merge at the
beginning of the work.

During review it was decided that changes to the core of Nailgun should be
split into a separate CR, making it 5 patches. In the course of development
we also hit a glitch in the deep layers of Nailgun's networking modules [1].
This slowed down the development and review process, and as a result we only
merged 1 patch out of 5 by the due date of the FFE.

I would like to ask for 4 more days of FFE, which effectively will move the
due date to Aug 3rd.

[1] https://bugs.launchpad.net/fuel/+bug/1480228

--
Best regards,
Oleg Gelbukh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][python-fuelclient] Implementing new commands

2015-07-24 Thread Oleg Gelbukh
FWIW, I'm for option B, combined with a clear timeline for porting features
of the old 'fuel' variant to fuel2. For example, we are developing
client-side functions for fuel-octane (cluster upgrade) extensions only for
fuel2, and don't plan to implement them for 'fuel'.

The main reason why we can't just drop 'fuel', or rather switch it to fuel2
syntax, IMO, is the possibility that someone somewhere uses its current
syntax for automation. However, if a function is completely new, the
automation of this function should be implemented with the new version of
the syntax.

--
Best regards,
Oleg Gelbukh

On Fri, Jul 24, 2015 at 12:09 PM, Fedor Zhadaev fzhad...@mirantis.com
wrote:

 Hi all,

 I think that in the current situation the best solution will be to add new
 features to both versions of the client. It may slow down the development of
 each feature a little, but we won't have to return to them in the future.

 2015-07-24 11:58 GMT+03:00 Igor Kalnitsky ikalnit...@mirantis.com:

 Hello,

 My 2 cents on it.

 Following plan C requires a huge effort from developer, and it may be
 unacceptable when FF is close and there're a lot of work to do. So it
 looks like the plan B is most convenient for us and eventually we will
 have all features in fuel2.

 Alternatively we can go with C.. but only if implementing support in
 either fuel or fuel2 may be postponed to SCF.

 Thanks,
 Igor

 On Fri, Jul 24, 2015 at 10:58 AM, Evgeniy L e...@mirantis.com wrote:
  Hi Sebastian, thanks for clarification, in this case I think we
  should follow plan C, new features should not slow us down
  in migration from old to new version of the client.
 
  On Thu, Jul 23, 2015 at 8:52 PM, Sebastian Kalinowski
  skalinow...@mirantis.com wrote:
 
  2015-07-23 18:28 GMT+02:00 Stanislaw Bogatkin sbogat...@mirantis.com
 :
 
  Hi,
 
  can we just add all needed functionality from old fuel client that
 fuel2
  needs, then say that old fuel-client is deprecated now and will not
 support
  some new features, then add new features to fuel2 only? It seems like
 best
  way for me, cause with this approach:
  1. Clients will use only one version of the client (the new one) w/o
  switching between 2 clients with different syntax
  2. We won't have to add new features to two clients.
 
 
  Stas, of course moving it all to new fuel2 would be the best way to do
 it,
  but this refactoring took place in previous release. There is no one
 that
  ported a single command (except new ones) since then and there is no
 plan
  for doing so since other activities have higher priority. And features
 are
  still coming so it would be nice to have a policy for the time all
 commands
  will move to new fuel2.
 
 
 
  On Thu, Jul 23, 2015 at 9:19 AM, Evgeniy L e...@mirantis.com wrote:
 
  Hi,
 
  The best option is to add new functionality to fuel2 only, but I
  don't think that we can do that if there is not enough functionality
  in fuel2, we should not ask user to switch between fuel and fuel2
  to get some specific functionality.
  Do we have some list of commands which is not covered in fuel2?
  I'm just wondering how much time will it take to implement all
  required commands in fuel2.
 
 
  So to compare: this is a help message for old fuel [1] and new
 fuel2
  [2]. There are only node, env and task actions covered and even
 they
  are not covered in 100%.
 
  [1] http://paste.openstack.org/show/404439/
  [2] http://paste.openstack.org/show/404440/
 
 
 
 
  Thanks,
 
  On Thu, Jul 23, 2015 at 1:51 PM, Sebastian Kalinowski
  skalinow...@mirantis.com wrote:
 
  Hi folks,
 
  For a some time in python-fuelclient we have two CLI apps: `fuel`
 and
  `fuel2`. It was done as an implementation of blueprint [1].
  Right now there is a situation where some new features are added
 just
  to old `fuel`, some to just `fuel2`, some to both. We cannot simply
 switch
  completely to new `fuel2` as it doesn't cover all old commands.
  As far as I remember there was no agreement how we should proceed
 with
  adding new things to python-fuelclient, so to keep all development
 for new
  commands I would like us to choose what will be our approach. There
 are 3
  ways to do it (with some pros and cons):
 
  A) Add new features only to old `fuel`.
  Pros:
   - Implement feature in one place
   - Almost all features are covered there
  Cons:
   - Someone will need to port this features to new `fuel2`
   - Issues that forced us to reimplement whole `fuel` as `fuel2`
 
  B) Add new features only to new `fuel2`
  Pros:
   - Implement feature in one place
   - No need to cope with issues in old `fuel` (like worse UX, etc.)
  Cons:
   - Not all features are covered by `fuel2` so user will need to
 switch
  between `fuel` and `fuel2`
 
  C) Add new features to both CLIs
  Pros:
   - User can choose which tool to use
   - No need to port feature later...
  Cons:
   - ...but it still doubles the work
   - We keep alive a tool that should be replaced (old `fuel`)
 
 
  Best,
  Sebastian
 
  [1]
 https

Re: [openstack-dev] [Fuel] Question about unique default hostname for node

2015-07-24 Thread Oleg Gelbukh
The problem is that hostnames of nodes appear in /etc/hosts files, and
entries in those files have to be unique to make any sense. Thus, we either
need to provide users with the ability to create their own generators of
node names (not sure that makes sense), require a user to provide a name for
every node and validate that every name is unique (which I guess the
blueprint in question implies), or provide the ability to generate node
hostnames from a user-defined template (prefix) suffixed with an ID-based
iterator. We should choose one of these options for every environment and
not mix them, as Fedor pointed out.
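
For illustration, the ID-safe validation variant discussed below (refuse
'node-N' unless N is the node's own database id) plus a uniqueness check
could be as simple as this sketch (not the actual Nailgun code):

    import re

    NODE_NAME_RE = re.compile(r'^node-(\d+)$')

    def validate_hostname(hostname, node_id, taken_hostnames):
        # Reject 'node-N' unless N is this node's own DB id, so a rename
        # can never collide with a future auto-generated default name.
        match = NODE_NAME_RE.match(hostname)
        if match and int(match.group(1)) != node_id:
            raise ValueError("'%s' is reserved for the node with id=%s"
                             % (hostname, match.group(1)))
        # Enforce uniqueness across the environment regardless of format.
        if hostname in taken_hostnames:
            raise ValueError("hostname '%s' is already in use" % hostname)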

-Oleg

On Fri, Jul 24, 2015 at 12:07 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Andrew,

 I don't agree with you: the user should be able to name the node any way he
 wants. Why should there be a constraint which is related to some internal id
 in the Nailgun database? For example, if he deleted node-5 and then wants
 to replace this node with another one, he can and should be able to provide
 the hostname node-5 for this replacement node, even if the node's id in the
 database is 6.

 Thanks,

 On Fri, Jul 24, 2015 at 2:36 AM, Andrew Woodward xar...@gmail.com wrote:



 On Wed, Jul 22, 2015 at 6:32 AM Fedor Zhadaev fzhad...@mirantis.com
 wrote:

 Thanks for your answers.

 Let me clarify some points:

 Sure, we have to validate hostnames during node renaming. And sure we do
 it. This issue appears when we already have a node with the name 'node-X'
 and a new node is created without a custom name provided. I'll give you an
 example:

 1. The user has a node with hostname 'node-4' (with ID = 4; and there are
 no nodes with ID > 4);
 2. He renames it to 'node-5' (this name is correct and unique. OK)
 3. He adds a new node without providing a custom hostname.
 The new node gets ID = 5 (it's a primary key and increments automatically)
 and the default hostname 'node-5'. (Here we have a problem with uniqueness.)

 It would be strange if we refused to create a node with the default name
 just because somebody has renamed another node to that name.

 About node hostnames: actually, we can't refuse custom hostnames in the
 format 'node-{#}' because it is one of the main use cases. So we need to
 find a solution which accepts such renaming.

 How is this a main use case? This is exactly what we should not support.
 If they want the node to have 'node-5' as its hostname, we need it to be
 node.id = 5 (i.e. the node id in the DB is 5). They would not need custom
 node naming in this case.


 2015-07-22 12:42 GMT+03:00 Igor Kalnitsky ikalnit...@mirantis.com:

 Hi guys,

 @Sergii, it looks like you misunderstood something. `node-uuid` is not
 a general use case. It's only about conflicting nodes, and I'm sure
 everyone's going to change such a hostname in order to avoid
 confusion.

 @Andrew,

 a) The database refuses hostnames that break the unique constraint, so
 it'll work out-of-the-box.

 b) I like this idea. I think refusing `node-id` where `id` is not
 actually a node id is a good idea. It solves our problem.

 Thanks,
 Igor

 On Wed, Jul 22, 2015 at 8:21 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  node-uuid is terrible from a UX point of view. Ask support people
  if they are comfortable sshing to such nodes or telling the name in a
  phone conversation with a customer. If we cannot validate the FQDN of a
  hostname, I would slip this feature to the next release where we can pay
  more attention to details.
 
  --
  Best regards,
  Sergii Golovatiuk,
  Skype #golserge
  IRC #holser
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kind Regards,
 Fedor Zhadaev
 Junior Software Engineer, Mirantis Inc.
 Skype: zhadaevfm
 E-mail: fzhad...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --

 --

 Andrew Woodward

 Mirantis

 Fuel Community Ambassador

 Ceph Community

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 

Re: [openstack-dev] [Fuel] Question about unique default hostname for node

2015-07-24 Thread Oleg Gelbukh
Evgeniy,

The replacement node use case seems significantly different from the node
renaming case to me. It's not only about the hostname of the node. I guess
that eventually we'll have to invent a way to retain other metadata of the
original node, not only the hostname. The described use case is more like
the 'node reinstallation' feature [1].

[1] https://blueprints.launchpad.net/fuel/+spec/mos-node-reinstallation

--
Best regards,
Oleg Gelbukh

On Fri, Jul 24, 2015 at 12:07 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Andrew,

 I don't agree with you: the user should be able to name the node any way he
 wants. Why should there be a constraint which is related to some internal id
 in the Nailgun database? For example, if he deleted node-5 and then wants
 to replace this node with another one, he can and should be able to provide
 the hostname node-5 for this replacement node, even if the node's id in the
 database is 6.

 Thanks,

 On Fri, Jul 24, 2015 at 2:36 AM, Andrew Woodward xar...@gmail.com wrote:



 On Wed, Jul 22, 2015 at 6:32 AM Fedor Zhadaev fzhad...@mirantis.com
 wrote:

 Thanks for your answers.

 Let me clarify some points:

 Sure, we have to validate hostnames during node renaming. And sure we do
 it. This issue appears when we already have a node with the name 'node-X'
 and a new node is created without a custom name provided. I'll give you an
 example:

 1. The user has a node with hostname 'node-4' (with ID = 4; and there are
 no nodes with ID > 4);
 2. He renames it to 'node-5' (this name is correct and unique. OK)
 3. He adds a new node without providing a custom hostname.
 The new node gets ID = 5 (it's a primary key and increments automatically)
 and the default hostname 'node-5'. (Here we have a problem with uniqueness.)

 It would be strange if we refused to create a node with the default name
 just because somebody has renamed another node to that name.

 About node hostnames: actually, we can't refuse custom hostnames in the
 format 'node-{#}' because it is one of the main use cases. So we need to
 find a solution which accepts such renaming.

 How is this a main use case? This is exactly what we should not support.
 If they want the node to have 'node-5' as its hostname, we need it to be
 node.id = 5 (i.e. the node id in the DB is 5). They would not need custom
 node naming in this case.


 2015-07-22 12:42 GMT+03:00 Igor Kalnitsky ikalnit...@mirantis.com:

 Hi guys,

 @Sergii, it looks like you misunderstood something. `node-uuid` is not
 a general use case. It's only about conflicting nodes, and I'm sure
 everyone's going to change such a hostname in order to avoid
 confusion.

 @Andrew,

 a) The database refuses hostnames that break the unique constraint, so
 it'll work out-of-the-box.

 b) I like this idea. I think refusing `node-id` where `id` is not
 actually a node id is a good idea. It solves our problem.

 Thanks,
 Igor

 On Wed, Jul 22, 2015 at 8:21 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  node-uuid is terrible from a UX point of view. Ask support people
  if they are comfortable sshing to such nodes or telling the name in a
  phone conversation with a customer. If we cannot validate the FQDN of a
  hostname, I would slip this feature to the next release where we can pay
  more attention to details.
 
  --
  Best regards,
  Sergii Golovatiuk,
  Skype #golserge
  IRC #holser
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kind Regards,
 Fedor Zhadaev
 Junior Software Engineer, Mirantis Inc.
 Skype: zhadaevfm
 E-mail: fzhad...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --

 --

 Andrew Woodward

 Mirantis

 Fuel Community Ambassador

 Ceph Community

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [fuel] [FFE] FF Exception request for Env Upgrade feature

2015-07-24 Thread Oleg Gelbukh
Alexey,

Thank you for the vote. We expect to spend 2 more days on patches [1] and
[2], and then another 2 days to finish patch [3]. With reviewers' help, [1]
and [2] will land by Wednesday, while [3] is due Thursday.

[1] https://review.openstack.org/#/c/202969/
[2] https://review.openstack.org/#/c/203537/
[3] https://review.openstack.org/#/c/203536/

--
Best regards,
Oleg Gelbukh

On Fri, Jul 24, 2015 at 3:26 PM, Aleksey Kasatkin akasat...@mirantis.com
wrote:

 +1 for an exception. Do we have a time estimate though?


 Aleksey Kasatkin


 On Fri, Jul 24, 2015 at 2:46 PM, Sebastian Kalinowski 
 skalinow...@mirantis.com wrote:

 +1 for this exception - as Evgeniy said, it is developed not in the core
 but in an extension, and the risk is low.

 2015-07-24 10:17 GMT+02:00 Evgeniy L e...@mirantis.com:

 Hi,

 If we have a rule that feature freeze exceptions should have essential
 priority, I'm not sure if it matters how risky it is; the risk is low, but
 it's not zero.

 Thanks,

 On Thu, Jul 23, 2015 at 9:09 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Oleg,
 considering that your feature is essential for the release, it sounds like
 there is no way we can't give an exception.

 I'm glad that it's perceived as low risk by a core reviewer from the
 Nailgun team (Evgeny). If there are no concerns from others, then we are
 giving the FF exception. However, I'd like to understand how much time it
 will take to finish this work and what additional resources are required.

 We need to switch to bugfix work, and the more we continue working on
 features / enhancements, the less confidence I have that we can meet the
 HCF deadline.

 Thanks,

 On Thu, Jul 23, 2015 at 11:00 AM Evgeniy L e...@mirantis.com wrote:

 Hi,

 The patch to Nailgun requires additional work, but as far as I can see
 it doesn't affect any other parts of the system. Also, it's implemented
 as an extension, which means that if the feature introduces bugs which we
 won't be able to fix in the required time, it can be easily disabled
 without removing it from master, by just removing one line from a file [1]
 (removing it from the extensions list).

 So I think it's ok to accept the environment upgrade feature as an
 exception to the feature freeze.

 Thanks,

 [1]
 https://review.openstack.org/#/c/202969/7/nailgun/nailgun/extensions/base.py

 On Wed, Jul 22, 2015 at 10:18 PM, Oleg Gelbukh ogelb...@mirantis.com
 wrote:

 Team,

 I would like to request an exception from the Feature Freeze for
 Environment Upgrade extensions added to the Nailgun API [1]. The Nailgun
 side of the feature is implemented in the following CRs:


 https://review.openstack.org/#/q/status:open+topic:bp/nailgun-api-env-upgrade-extensions,n,z

 These changes are implemented as an extension [2] to Nailgun. They barely
 touch the core code and don't change the existing functionality.

 Please respond if you have any questions or concerns related to this
 request.

 Thanks in advance.

 [1] https://review.openstack.org/#/c/192551/
 [2]
 https://review.openstack.org/#/q/topic:bp/volume-manager-refactoring,n,z

 --
 Best regards,
 Oleg Gelbukh


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 Mike Scherbakov
 #mihgen


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack

Re: [openstack-dev] [Fuel] Get rid of fuelmenu

2015-07-23 Thread Oleg Gelbukh
Unless I am mistaken, it is possible to set most of the parameters
supported by Fuel menu as kernel boot parameters. Isn't that a sufficient
replacement for fuelmenu for dev purposes?
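
As for the "some validation" part of Vladimir's proposal below, a thin
schema check over the settings file would likely be enough. A minimal
sketch, assuming a YAML file and the jsonschema library (the keys shown
are illustrative, not the full astute.yaml layout):

    import jsonschema
    import yaml

    SCHEMA = {
        'type': 'object',
        'required': ['HOSTNAME', 'ADMIN_NETWORK'],
        'properties': {
            'HOSTNAME': {'type': 'string'},
            'ADMIN_NETWORK': {
                'type': 'object',
                'required': ['ipaddress', 'netmask'],
                'properties': {
                    'ipaddress': {'type': 'string', 'format': 'ipv4'},
                    'netmask': {'type': 'string', 'format': 'ipv4'},
                },
            },
        },
    }

    def validate_settings(path):
        # Load the hand-edited file and fail fast with a readable error;
        # FormatChecker is needed for the 'ipv4' format to be enforced.
        with open(path) as f:
            settings = yaml.safe_load(f)
        jsonschema.validate(settings, SCHEMA,
                            format_checker=jsonschema.FormatChecker())
        return settings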

-Oleg

On Thu, Jul 23, 2015 at 4:05 PM, Matthew Mosesohn mmoses...@mirantis.com
wrote:

 How much effort are we spending? I'm not so sure it's a major development
 drain.

 Since Fuel 6.0 dev cycle (Sept 2014) until now there have been 34
 commits into Fuelmenu:
 * New features/functionality: 12
 * Bugfix: 15
 * Other: 7 (version bumps, and commits without bug ID)

 Across 3 releases, that's only ~11 commits per release. We've added
 features like generating random passwords for services, warnings about
 setting credentials apart from the default, adding a hook for CI for
 testing custom manifests on Fuel Master, and duplicate IP address
 checks.

 These improved the user experience. If you take it away and replace it
 with a config file with basic validation, we will see users fail to
 deploy due to things that Fuelmenu already checks easily. Imagine
 you're an existing user of Fuel and suddenly you install the newest
 version of Fuel and see a large configuration file which you have to
 set by hand. Here's a relic of what users used to have to configure by
 hand:

 https://github.com/stackforge/fuel-library/blob/b015ed975b58dddff3b8da0ce34d9a638c22d032/deployment/puppet/openstack/examples/site_simple.pp

 Am I alone in thinking it's not the best use of our development
 resources to throw it away and replace it with a text file that is
 edited by hand?

 On Thu, Jul 23, 2015 at 3:33 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:
  Hello,
 
  Here's my 2 cents on it.
 
  I think the effort we put into supporting fuelmenu isn't worth it. I used
  to deploy Fuel quite often in the previous release, and I never used the
  features of fuelmenu. Why? Because I prefer to apply changes on an
  already deployed node. Moreover, I don't like that users are prompted
  with fuelmenu by default. I want to deploy Fuel automatically, without
  any manual actions (though that's another topic).

  I agree with Vladimir: vim + config files are enough, since Fuel is
  not a product for housewives. It's a product for those who do not
  hesitate to use Vim for software configuration.
 
  Thanks,
  Igor
 
 
 
  On Thu, Jul 23, 2015 at 2:27 PM, Matthew Mosesohn
  mmoses...@mirantis.com wrote:
  We had that before and had very poor validation. Removing fuelmenu
  would make the experience quite manual and prone to errors.
 
  This topic comes up once a year only from Fuel Python developers
  because they rarely use it and no dev cycles have been invested in
  improving it.
 
  The actual Fuel deployers use it and appreciate its validation and
  wish to extend it.
 
  I'd like to hear more feedback.
 
  On Thu, Jul 23, 2015 at 2:23 PM, Vladimir Kozhukalov
  vkozhuka...@mirantis.com wrote:
  Dear colleagues,
 
  What do you think of getting rid of fuelmenu and substituting it with a
  thoroughly commented text file + some validation + vim? The major pro of
  this is that a text file is easier to extend and edit. Many people prefer
  the vim UX to wandering through the semi-graphical interface. And it is
  not so hard to implement syntax and logic checking for the text file.
 
  Please give your opinions about this.
 
  Vladimir Kozhukalov
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Restore OSD devices with Puppet Ceph module

2015-07-22 Thread Oleg Gelbukh
Greetings,

While working on the upgrade of OpenStack with the Fuel installer, I met a
requirement to re-add OSD devices with an existing data set to a Ceph
cluster using the Puppet module. The node is reinstalled during the upgrade,
thus the disks used for OSDs are not mounted at Puppet runtime.

The current version of the Ceph module in fuel-library only supports the
addition of new OSD devices. Mounted devices are skipped. Unmounted devices
with a Ceph UUID in the GPT label are passed to the 'ceph-deploy osd
prepare' command, which formats the device and recreates the file system,
so all existing data is lost.

I proposed a patch to allow support for OSD devices with existing data set:
https://review.openstack.org/#/c/203639/2

However, this fix is very straightforward and doesn't account for various
corner cases, as was pointed out by Mykola Golub in review. As this problem
seems rather significant to me, I'd like to bring this discussion to a
broader audience.

So, here's the comment with my replies inline:

I am not sure just reactivating disks that have a filesystem is a safe
approach:

1) If you are deploying a mix of new and restored disks you may end up with
conflicting OSDs joining the cluster with the same ID.
2) It makes sense to restore OSDs only if a monitor (cluster) is restored,
otherwise activation of old OSDs will fail.
3) It might happen that the partition contains a valid filesystem by
accident (e.g. the user reused disks/hosts from another cluster) -- it will
not join the cluster because of the wrong fsid and credentials, but the
deployment will unexpectedly fail.

1) As far as I can tell, OSD device IDs are assigned by the Ceph cluster
based on already existing devices. So, if some ID is stored on the device,
either a device with the given ID already exists in the cluster and no new
device will get the same ID, or the cluster doesn't know about a device with
the given ID, and that means we already lost the data placement before.
2) This can be fixed by adding a check that ensures that the fsid parameter
in ceph.conf on the node and the cluster fsid on the device are equal.
Otherwise the device is treated as a new device, i.e. passed to 'ceph-deploy
osd prepare'.
3) This situation would be covered by the previous check, in my
understanding.

Is it possible to pass information that the cluster is restored using
partition preservation? Because I think a much safer approach is:

1) Pass some flag from the user that we are restoring the cluster.
2) Restore the controller (monitor) and abort deployment if it fails.
3) When deploying an osd host, if the 'restore' flag is present, skip the
prepare step and try only activate for all disks if possible (we might want
to ignore activate errors and continue with other disks, so we restore as
many osds as possible).

The case I want to support by this change is not restoration of the whole
cluster, but rather support for reinstallation of an OSD node's operating
system. For this case, the approach you propose seems actually more correct
than my implementation. For a node being reinstalled we do not expect new
devices, only ones with an existing data set, so we don't need to
specifically check for them, but rather just skip prepare for all devices.

We still need to check that the value of fsid on the disk is consistent
with the cluster's fsid.
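
A minimal sketch of that check, assuming the OSD data partition carries the
standard 'ceph_fsid' marker file that prepare puts there (paths are the
stock ones; error handling omitted):

    import subprocess
    import tempfile

    def get_cluster_fsid(conf='/etc/ceph/ceph.conf'):
        # 'fsid' lives in the [global] section of ceph.conf.
        with open(conf) as f:
            for line in f:
                key, _, value = line.partition('=')
                if key.strip() == 'fsid':
                    return value.strip()
        return None

    def get_device_fsid(partition):
        # An OSD data partition carries a 'ceph_fsid' file at its root.
        mountpoint = tempfile.mkdtemp()
        subprocess.check_call(['mount', partition, mountpoint])
        try:
            with open(mountpoint + '/ceph_fsid') as f:
                return f.read().strip()
        finally:
            subprocess.check_call(['umount', mountpoint])

    def belongs_to_cluster(partition):
        # Treat the device as 'ours' only when the fsids match exactly.
        return get_device_fsid(partition) == get_cluster_fsid()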

Which issues should we anticipate with this kind of approach?

Another question that is still unclear to me is whether someone really needs
support for a hybrid use case where new and existing unmounted OSD
devices are mixed in one OSD node.

--
Best regards,
Oleg Gelbukh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] [FFE] FF Exception request for Env Upgrade feature

2015-07-22 Thread Oleg Gelbukh
Team,

I would like to request an exception from the Feature Freeze for
Environment Upgrade extensions added to the Nailgun API [1]. The Nailgun
side of the feature is implemented in the following CRs:

https://review.openstack.org/#/q/status:open+topic:bp/nailgun-api-env-upgrade-extensions,n,z

These changes are implemented as an extension [2] to Nailgun. They barely
touch the core code and don't change the existing functionality.

Please respond if you have any questions or concerns related to this
request.

Thanks in advance.

[1] https://review.openstack.org/#/c/192551/
[2] https://review.openstack.org/#/q/topic:bp/volume-manager-refactoring,n,z

--
Best regards,
Oleg Gelbukh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Abandon changesets which hang for a while without updates

2015-07-17 Thread Oleg Gelbukh
Nicely put, Doug, you gave me laughs :)

I can't see how a CR could hang for a month without anyone paying attention
if it is worth merging. If this really happens (which I'm not aware of),
auto-abandon definitely won't make things any worse.

--
Best regards,
Oleg Gelbukh

On Fri, Jul 17, 2015 at 6:10 AM, Doug Wiegley doug...@parksidesoftware.com
wrote:

 Just adding an experience from another project, Neutron.

 We had similar debates, and prepping for the long apocalyptic winter of
 changeset death, Kyle decimated the world and ran the abandon script. The
 debates were far more intense than the reality, and my large stockpile of
 Rad-X and Nuka Cola went to waste.

 Every few weeks, I get a few emails of things being abandoned. And if I
 care about something, mine or not, I click through and tap ‘Restore’. If
 one person in the entire community can’t be bothered to click one button,
 I’m not sure how it’d ever be kept up-to-date, much less merge.

 Thanks,
 doug


 On Jul 16, 2015, at 8:36 PM, Dmitry Borodaenko dborodae...@mirantis.com
 wrote:

 I'm with Stanislaw on this one: abandoning reviews just to make numbers
 *look* better will accomplish nothing.

 The only benefit I can see is cleaning up reviews that we *know* don't
 need to be considered, so that it's easier for reviewers to find the
reviews that still need attention. I don't see this as that much of a
problem; finding stuff to review in the Fuel Review Inbox [0] is not hard at
all.

 [0] https://wiki.openstack.org/wiki/Fuel#Development_related_links

 And the state of our review backlog is such that it's not safe to
 auto-abandon reviews without looking at them, and if a contributor has
 spent time looking at a review, abandoning it manually is one click away.

 If we do go with setting up an auto-abandon rule, it should be extremely
 conservative, for example: CR has a negative vote from a core reviewer AND
 there were no comments or positive votes from anyone after that AND it has
 not been touched in any way for 2 months.
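
For the record, a rough approximation of that rule can be pulled straight
from Gerrit before anyone abandons anything. A sketch using the standard
Gerrit SSH query interface (it cannot tell core votes from other negative
votes, so treat the output as a candidate list only):

    import json
    import subprocess

    # Open, carrying a negative code review, untouched for ~2 months.
    QUERY = ('status:open age:8w label:Code-Review<=-1 '
             'project:^stackforge/fuel-.*')

    def stale_candidates():
        out = subprocess.check_output(
            ['ssh', '-p', '29418', 'review.openstack.org',
             'gerrit', 'query', '--format=JSON', QUERY])
        for line in out.splitlines():
            change = json.loads(line)
            if 'id' in change:  # the last line is a stats record, skip it
                yield change['url'], change['subject']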

 On Wed, Jul 15, 2015 at 5:10 PM Mike Scherbakov mscherba...@mirantis.com
 wrote:

 Folks,
 let's execute here. Numbers are still large. Did we have a chance to look
 over the whole queue?

 Can we go ahead and abandon changes having -1 or -2 from reviewers for
 over a month or so?
 I'm all for just following standard OpenStack process [1], and then
 change it only if there is good reason for it.

 [1] https://wiki.openstack.org/wiki/Puppet#Patch_abandonment_policy


 On Thu, Jul 9, 2015 at 6:27 PM Stanislaw Bogatkin sbogat...@mirantis.com
 wrote:

 2 weeks seems too short to me. We can easily be in a situation where a fix
 for a medium bug is done, but SCF starts. And the gap between SCF and
 release can easily be more than a month. So, 2 months seems okay to me if
 we are speaking about forcibly applying auto-abandon by majority vote. And
 I'm personally against such an innovation at all.

 On Thu, Jul 9, 2015 at 5:37 PM, Davanum Srinivas dava...@gmail.com
 wrote:

 That's a very good plan (Initial feedback/triage) Mike.

 thanks,
 dims

 On Thu, Jul 9, 2015 at 3:23 PM, Mike Scherbakov
 mscherba...@mirantis.com wrote:
  +1 for just reusing the existing script and adjusting it on the way. No
  need to immediately switch from infinite time to a couple of weeks; we
  can always adjust it later. But 1-2 months should be a good start already.
 
  Our current stats [1] look just terrible. Before we enable auto-abandon,
  we need to go through every single patch first and review it / provide
  comments to the authors. The idea is not to abandon good patches, and not
  to offend contributors...
 
  Let's think about how we can approach it. Should we have core reviewers
  check their corresponding components?
 
  [1] http://stackalytics.com/report/reviews/fuel-group/open
 
  On Wed, Jul 8, 2015 at 1:13 PM Sean M. Collins s...@coreitpro.com
 wrote:
 
  Let's keep it at 4 weeks without comment and with Jenkins failed -
  similar to the script that Kyle Mestery uses for Neutron. In fact, we
  could actually just use his script ;)
 
 
 
 https://github.com/openstack/neutron/blob/master/tools/abandon_old_reviews.sh
  --
  Sean M. Collins
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  --
  Mike Scherbakov
  #mihgen
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Davanum Srinivas :: https://twitter.com/dims

Re: [openstack-dev] [Fuel] Getting rid of upgrade tarball

2015-07-16 Thread Oleg Gelbukh
Vladimir,

I fully support moving fuel-upgrade-system into a repository of its own.
However, I'm not 100% sure how the docker containers are going to appear on
the upgraded master node. Do we have a public repository of Docker images
already? Or are we going to build them from scratch during the upgrade?

--
Best regards,
Oleg Gelbukh

On Thu, Jul 16, 2015 at 11:46 AM, Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 By the way, the first step for this to happen is to move
 stackforge/fuel-web/fuel_upgrade_system into a separate repository.
 Fortunately, this directory is not a place where the code is continuously
 changing (changes are rather seldom), and moving this project is going to
 barely affect the whole development flow. So, the action flow is as follows:

 0) patch to openstack-infra for creating new repository (workflow -1)
 1) patch to Fuel CI to create verify jobs
 2) freeze stackforge/fuel-web/fuel_upgrade_system directory
 3) create upstream repository which is to be sucked in by openstack infra
 4) patch to openstack-infra for creating new repository (workflow +1)
 5) patch with rpm spec for fuel-upgrade package and other infrastructure
 files like run_tests.sh
 6) patch to perestroika to build fuel-upgrade package from new repo
 7) patch to fuel-main to remove upgrade tarball
 8) patch to Fuel CI to remove upgrade tarball
 9) patch to fuel-web to remove fuel_upgrade_system directory



 Vladimir Kozhukalov

 On Thu, Jul 16, 2015 at 11:13 AM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Dear colleagues,

 I'd like to suggest getting rid of the Fuel upgrade tarball and converting
 this thing into a fuel-upgrade rpm package. Since we've switched to online
 rpm/deb based upgrades, it seems we can stop packaging rpm/deb repositories
 and docker containers into a tarball and instead package the upgrade python
 script into an rpm. It's going to decrease the complexity of the build
 process as well as make it a little bit faster.

 What do you think of this?


 Vladimir Kozhukalov



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of upgrade tarball

2015-07-16 Thread Oleg Gelbukh
Vladimir,

Thank you, now it sounds convincing.

My understanding is that at the moment all Docker images used by Fuel are
packaged in a single RPM? Do you plan to split individual images into
separate RPMs?

Did you think about publishing those images to Dockerhub?

--
Best regards,
Oleg Gelbukh

On Thu, Jul 16, 2015 at 1:50 PM, Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 Oleg,

 All docker containers are currently distributed as rpm packages. A little
 bit surprising, isn't it? But it works, and we can easily deliver updates
 using this plain old rpm-based mechanism. The package in 6.1GA is called
 fuel-docker-images-6.1.0-1.x86_64.rpm. So, the upgrade flow would be like
 this:
 0) add new (say 7.0) repository into /etc/yum.repos.d/some.repo
 1) install fuel-upgrade package (yum install fuel-upgrade-7.0)
 2) fuel-upgrade package has all other packages (docker, bootstrap image,
 target images, puppet modules) as its dependencies
 3) run fuel-upgrade script (say /usr/bin/fuel-upgrade) and it performs all
 necessary actions like moving files, run new containers, upload fixtures
 into nailgun via REST API.

 It is necessary to note that we are talking here about Fuel master node
 upgrades, not about Openstack cluster upgrades (which is the feature you
 are working on).

 Vladimir Kozhukalov

 On Thu, Jul 16, 2015 at 1:22 PM, Oleg Gelbukh ogelb...@mirantis.com
 wrote:

 Vladimir,

 I fully support moving fuel-upgrade-system into a repository of its own.
 However, I'm not 100% sure how the docker containers are going to appear on
 the upgraded master node. Do we have a public repository of Docker images
 already? Or are we going to build them from scratch during the upgrade?

 --
 Best regards,
 Oleg Gelbukh

 On Thu, Jul 16, 2015 at 11:46 AM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 By the way, the first step for this to happen is to move
 stackforge/fuel-web/fuel_upgrade_system into a separate repository.
 Fortunately, this directory is not a place where the code is continuously
 changing (changes are rather seldom), and moving this project is going to
 barely affect the whole development flow. So, the action flow is as follows:

 0) patch to openstack-infra for creating new repository (workflow -1)
 1) patch to Fuel CI to create verify jobs
 2) freeze stackforge/fuel-web/fuel_upgrade_system directory
 3) create upstream repository which is to be sucked in by openstack infra
 4) patch to openstack-infra for creating new repository (workflow +1)
 5) patch with rpm spec for fuel-upgrade package and other infrastructure
 files like run_tests.sh
 6) patch to perestroika to build fuel-upgrade package from new repo
 7) patch to fuel-main to remove upgrade tarball
 8) patch to Fuel CI to remove upgrade tarball
 9) patch to fuel-web to remove fuel_upgrade_system directory



 Vladimir Kozhukalov

 On Thu, Jul 16, 2015 at 11:13 AM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Dear colleagues,

 I'd like to suggest getting rid of the Fuel upgrade tarball and converting
 this thing into a fuel-upgrade rpm package. Since we've switched to online
 rpm/deb based upgrades, it seems we can stop packaging rpm/deb repositories
 and docker containers into a tarball and instead package the upgrade python
 script into an rpm. It's going to decrease the complexity of the build
 process as well as make it a little bit faster.

 What do you think of this?


 Vladimir Kozhukalov




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of upgrade tarball

2015-07-16 Thread Oleg Gelbukh
Vladimir,

Good, thank you for the extended answer.

--
Best regards,
Oleg Gelbukh

On Thu, Jul 16, 2015 at 3:30 PM, Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 Oleg,

 Yes, you are right. At the moment all docker containers are packaged into
 a single rpm package. Yes, it would be great to split them into several
 individual rpms, but it is not my current priority. I'll definitely think
 of this when I move the so-called late packages (which depend on
 other packages) into perestroika. Yet another thing is that eventually
 all those packages and containers will be artifacts and will be treated
 differently according to their nature. That will be the time when we'll be
 thinking of a docker registry and other stuff like this.






 Vladimir Kozhukalov

 On Thu, Jul 16, 2015 at 2:58 PM, Oleg Gelbukh ogelb...@mirantis.com
 wrote:

 Vladimir,

 Thank you, now it sounds convincing.

 My understanding is that at the moment all Docker images used by Fuel are
 packaged in a single RPM? Do you plan to split individual images into
 separate RPMs?

 Did you think about publishing those images to Dockerhub?

 --
 Best regards,
 Oleg Gelbukh

 On Thu, Jul 16, 2015 at 1:50 PM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Oleg,

 All docker containers are currently distributed as rpm packages. A
 little bit surprising, isn't it? But it works, and we can easily deliver
 updates using this plain old rpm-based mechanism. The package in 6.1GA is
 called fuel-docker-images-6.1.0-1.x86_64.rpm. So, the upgrade flow would be
 like this:
 0) add new (say 7.0) repository into /etc/yum.repos.d/some.repo
 1) install fuel-upgrade package (yum install fuel-upgrade-7.0)
 2) fuel-upgrade package has all other packages (docker, bootstrap image,
 target images, puppet modules) as its dependencies
 3) run fuel-upgrade script (say /usr/bin/fuel-upgrade) and it performs
 all necessary actions like moving files, run new containers, upload
 fixtures into nailgun via REST API.

 It is necessary to note that we are talking here about Fuel master node
 upgrades, not about Openstack cluster upgrades (which is the feature you
 are working on).

 Vladimir Kozhukalov

 On Thu, Jul 16, 2015 at 1:22 PM, Oleg Gelbukh ogelb...@mirantis.com
 wrote:

 Vladimir,

 I fully support moving fuel-upgrade-system into a repository of its
 own. However, I'm not 100% sure how the docker containers are going to
 appear on the upgraded master node. Do we have a public repository of
 Docker images already? Or are we going to build them from scratch during
 the upgrade?

 --
 Best regards,
 Oleg Gelbukh

 On Thu, Jul 16, 2015 at 11:46 AM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 By the way, the first step for this to happen is to move
 stackforge/fuel-web/fuel_upgrade_system into a separate repository.
 Fortunately, this directory is not a place where the code is continuously
 changing (changes are rather seldom), and moving this project is going to
 barely affect the whole development flow. So, the action flow is as follows:

 0) patch to openstack-infra for creating new repository (workflow -1)
 1) patch to Fuel CI to create verify jobs
 2) freeze stackforge/fuel-web/fuel_upgrade_system directory
 3) create upstream repository which is to be sucked in by openstack
 infra
 4) patch to openstack-infra for creating new repository (workflow +1)
 5) patch with rpm spec for fuel-upgrade package and other
 infrastructure files like run_tests.sh
 6) patch to perestroika to build fuel-upgrade package from new repo
 7) patch to fuel-main to remove upgrade tarball
 8) patch to Fuel CI to remove upgrade tarball
 9) patch to fuel-web to remove fuel_upgrade_system directory



 Vladimir Kozhukalov

 On Thu, Jul 16, 2015 at 11:13 AM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Dear colleagues,

 I'd like to suggest getting rid of the Fuel upgrade tarball and converting
 this thing into a fuel-upgrade rpm package. Since we've switched to online
 rpm/deb based upgrades, it seems we can stop packaging rpm/deb repositories
 and docker containers into a tarball and instead package the upgrade python
 script into an rpm. It's going to decrease the complexity of the build
 process as well as make it a little bit faster.

 What do you think of this?


 Vladimir Kozhukalov




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe

Re: [openstack-dev] [Fuel] Separate repo for Fuel Agent

2015-07-15 Thread Oleg Gelbukh
Nice work, Vladimir. Thank you for pushing this; it's a really important
step towards decoupling things from the consolidated repository.

--
Best regards,
Oleg Gelbukh

On Wed, Jul 15, 2015 at 6:47 PM, Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 I'm glad to announce that everything about this task is done. The ISO build
 job uses the new repository [1]. BVT is green. The Fuel Agent rpm spec has
 been moved to the new repo, and perestroika has also been switched to build
 the fuel-agent package from the new repo. The only difference that could
 potentially affect deployment is that the fuel-agent package built from the
 new repo will have a lower version, because the number of commits in the
 new repo is around 130 vs 7275 in fuel-web
 (fuel-agent-7.0.0-1.mos7275.noarch.rpm). But I believe it is going to be
 fine as long as there is no more than one fuel-agent package in the rpm
 repository.

 The next step is to remove the stackforge/fuel-web/fuel_agent directory.


 [1] https://github.com/stackforge/fuel-agent.git

 Vladimir Kozhukalov

 On Wed, Jul 15, 2015 at 2:19 AM, Mike Scherbakov mscherba...@mirantis.com
  wrote:

 Thanks Vladimir. Let's make sure to get it done sooner rather than later
 (this might require testing in a custom ISO..) - we are approaching FF, and
 I expect growing queues of patches to land...

 On Tue, Jul 14, 2015 at 6:03 AM Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Dear colleagues,

 The new repository [1] has been created. So, please port all your Fuel
 Agent-related review requests against stackforge/fuel-web to this new
 repository. Currently, I am testing these two patches
 https://review.openstack.org/#/c/200595
 https://review.openstack.org/#/c/200025. If they work, we need to merge
 them and that is it. Review is welcome.



 [1] https://github.com/stackforge/fuel-agent.git

 Vladimir Kozhukalov

 On Fri, Jul 10, 2015 at 8:14 PM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Ok, guys.

 Looks like there are no objections. At the moment I need to create an
 actual version of the upstream repository which is going to be sucked in by
 OpenStack Infra. Please be informed that all patches changing
 fuel-web/fuel_agent that are merged after this moment will need to be
 ported into the new fuel-agent repository.


 Vladimir Kozhukalov

 On Fri, Jul 10, 2015 at 6:38 PM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Guys, we are next to moving fuel_agent directory into a separate
 repository. Action flow is going to be as follows:

 1) Create verify jobs on our CI https://review.fuel-infra.org/#/c/9186
 (DONE)
 2) Freeze fuel_agent directory in
 https://github.com/stackforge/fuel-web (will announce in a separate
 mail thread). That means we stop merging patches into master which change
 fuel_agent directory. Unfortunately, all review requests need to be
 re-sent, but it is not going to be very difficult.
 3) Create temporary upstream repository with fuel_agent/* as a
 content. I'm not planning to move 5.x and 6.x branches. Only master. So,
 all fixes for 5.x and 6.x will be living in fuel-web.
 4) This upstream repository is going to be sucked in by
 openstack-infra. Patch is here
 https://review.openstack.org/#/c/199178/ (review is welcome) I don't
 know how long it is going to take. Will try to poke infra people to do 
 this
 today.
 5) Then we need to accept two patches into new fuel-agent repository:
  - rpm spec (extraction from fuel-web/specs/nailgun.spec) (ready, but
 there is no review request)
  - run_tests.sh (to run tests) (ready, but there is no review request)

 !!! Up to this moment there won't be any impact on the ISO build process !!!

 6) Then we need to change two things at the same time (review is
 welcome)
   - fuel-web/specs/nailgun.spec in order to prevent fuel-agent package
 building  https://review.openstack.org/#/c/200595
   - fuel-main so as to introduce new fuel-agent repository into the
 build process https://review.openstack.org/#/c/200025

 And good luck to me -)


 Vladimir Kozhukalov

 On Wed, Jul 8, 2015 at 12:53 PM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 There were some questions from Alexandra Fedorova about the independent
 release cycle.

 according to the configuration [1] the Infra team won't be able to do
 branching or any kind of release management for the new repository.

 Could you please clarify: do we plan to version the new repository the
 same way as we do for the main fuel repositories, or are there going to be
 separate releases as in python-fuelclient [2]? Who should drive the
 release process for this repo, and how will this change affect the Fuel
 ISO release?

 [1]
 https://review.openstack.org/#/c/199178/1/gerrit/acls/stackforge/fuel-agent.config,cm
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2015-July/068837.html

 IMO all Fuel components should be as independent as possible, with
 well-defined APIs used for their interaction, with their own teams, and
 with their own independent release cycles. But we cannot switch to this
 model immediately

Re: [openstack-dev] [Fuel][Fuel-Packaging][Fuel-CI][Fuel-Infra] How to Deal with CI Failures Induced by Libraries Requirements Drift

2015-07-13 Thread Oleg Gelbukh
Vladimir,

The failures you are referring to are purely test-related failures. They
don't affect the code in production in any way, as far as I can see. All
the same, production code won't be affected by pinning the versions of
test-requirements in the stable/* branches of the product and test suites.
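
Concretely, pinning on a stable branch is just upper bounds in
test-requirements.txt; pip's range syntax does the rest. The version
numbers below are made up for illustration only:

    # test-requirements.txt on a stable/* branch: cap the fast movers
    mock>=1.0,<1.1
    nose>=1.3,<2.0
    flake8>=2.3,<2.5

Code dependencies in requirements.txt stay as they are; only the test
tooling gets capped.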


-Oleg

On Mon, Jul 13, 2015 at 12:34 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Oleg

 The problem here is that you have this code released and it is running in
 production - how are you going to fix this? Pin requirements and deal with
 dependency hell?
 Seriously, it is much easier to deal with an explicitly frozen mirror which
 is created by one 'pip install ' run than to play with implicit transitive
 dependencies.

 On Mon, Jul 13, 2015 at 11:58 AM, Oleg Gelbukh ogelb...@mirantis.com
 wrote:

 Some comments inline.

 On Mon, Jul 13, 2015 at 9:24 AM, Bartlomiej Piotrowski 
 bpiotrow...@mirantis.com wrote:

 Freezing every moving part is complete overkill and puts a heavy burden
 on the devops team as well as infra itself. The fix couldn't be simpler:
 just put upper bounds in requirements.

  1) if there is a new conflicting version, you need to set this
 upper-bound, thus you need to modify bits which get released
 It should be done as part of hard code freeze.


 As I understand, in these cases it is not code dependencies that cause the
 malfunction, but dependencies of tests. This can be fixed by pinning
 test-requirements. We can do this any time, as it does not affect users.



  2) you are actually testing your code by linking it with libraries
 which are different from those that users are really using when running
 your code
 Package dependencies should reflect those set in requirements.

  3) even if you specify an upper bound (or even fix the version) for
 this particular library, you may still fetch its newer dependency
 implicitly (by traversing indirect dependencies) with which you will be
 linking your libraries and which will actually be different from the code
 that you (and your users) use
 This can actually be said about anything, including the base system Fuel is
 running on. We simply do not support such setups.


 That's why we should run CI and nightly builds on stable branches. It
 catches exactly this type of issue.




  4) you may also break production installation if you fix some library
 version as it may not exist in the code bundle which gets delivered to your
 users as a set of package
 See 2.


 Again, if something in code deps breaks our stable branch, we must learn
 it as soon as possible and fix it there. However, in this case it is the
 test requirements that fail, and they should be pinned in
 'test-requirements.txt' or in the requirements of our test suites.

 --
 Best regards,
 Oleg Gelbukh



 BP


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 35bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-Packaging][Fuel-CI][Fuel-Infra] How to Deal with CI Failures Induced by Libraries Requirements Drift

2015-07-13 Thread Oleg Gelbukh
Some comments inline.

On Mon, Jul 13, 2015 at 9:24 AM, Bartlomiej Piotrowski 
bpiotrow...@mirantis.com wrote:

 Freezing every moving part is complete overkill and puts a heavy burden on
 the devops team as well as infra itself. The fix couldn't be simpler: just
 put upper bounds in requirements.

  1) if there is a new conflicting version, you need to set this
 upper-bound, thus you need to modify bits which get released
 It should be done as part of hard code freeze.


As I understand, in these cases it is not code dependencies that cause the
malfunction, but dependencies of tests. This can be fixed by pinning
test-requirements. We can do this any time, as it does not affect users.



  2) you are actually testing your code by linking it with libraries which
 are different from those that users are really using when running your code
 Package dependencies should reflect those set in requirements.

  3) even if you specify an upper bound (or even fix the version) for this
 particular library, you may still fetch its newer dependency implicitly (by
 traversing indirect dependencies) with which you will be linking your
 libraries and which will actually be different from the code that you (and
 your users) use
 This can actually be said about anything, including the base system Fuel is
 running on. We simply do not support such setups.


That's why we should run CI and nightly builds on stable branches. It
catches exactly this type of issue.




  4) you may also break production installation if you fix some library
 version as it may not exist in the code bundle which gets delivered to your
 users as a set of packages
 See 2.


Again, if something in code deps breaks our stable branch, we must learn it
as soon as possible and fix it there. However, in this case it is the test
requirements that fail, and they should be pinned in 'test-requirements.txt'
or in the requirements of our test suites.

--
Best regards,
Oleg Gelbukh



 BP

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Introducing bash unit testing

2015-07-09 Thread Oleg Gelbukh
And I realized all of a sudden that even more interesting than the unittest
framework itself would be some analog of Python's mock for shell scripts.
Though I doubt that anyone has ever really gone that far.
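
For what it's worth, the usual poor man's substitute is stubbing external
commands via PATH rather than a real mock library. A sketch of the idea,
driving a shell script under test from Python (script and stub names are
made up):

    import os
    import stat
    import subprocess
    import tempfile

    def run_with_stub(script, command, stub_body):
        """Run a shell script with `command` replaced by a stub on PATH."""
        stub_dir = tempfile.mkdtemp()
        stub_path = os.path.join(stub_dir, command)
        with open(stub_path, 'w') as f:
            f.write('#!/bin/sh\n' + stub_body + '\n')
        os.chmod(stub_path, stat.S_IRWXU)
        env = dict(os.environ)
        env['PATH'] = stub_dir + ':' + env.get('PATH', '')
        return subprocess.call(['/bin/sh', script], env=env)

    # Example: make `reboot` a no-op that just records that it was called.
    # run_with_stub('deploy.sh', 'reboot', 'echo reboot >> /tmp/calls')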

--
Best regards,
Oleg Gelbukh

On Thu, Jul 9, 2015 at 5:12 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-07-09 11:24:14 +0300 (+0300), Matthew Mosesohn wrote:
  What about bashate? It is already in use in several OpenStack projects?
 [...]

 It's a style checker (its name is an allusion to the pep8 tool for
 checking Python coding style conventions). And shellcheck (mentioned
 later in this thread) is a static code analysis tool for shell
 scripts (analogous to pylint/pyflakes in the Python world). Both
 potentially useful tools for helping you maintain code quality on
 shell script based projects, but neither of these is a unit testing
 framework.
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Fuel API settings reference

2015-06-24 Thread Oleg Gelbukh
Please note the blueprint I created for the documentation feature [1].
There is also a linked blueprint to create a configuration reference
document [2]. I targeted the blueprint to 'future', as we most likely won't
be able to fit it into the 7.0 cycle.

I will start working on a spec for the reference doc and will try to
describe how attributes should be documented. I'll highly appreciate and
welcome any guidance from the Fuel documentation team.

[1] https://blueprints.launchpad.net/fuel/+spec/api-parameters-reference
[2] https://blueprints.launchpad.net/fuel/+spec/config-file-reference
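
Until such a reference exists, the attributes an object accepts can at
least be inspected on a live master node. A rough sketch with the requests
library, assuming a stock install (default admin node address, default
admin/admin credentials, Keystone on port 5000 and Nailgun on port 8000;
adjust to your setup):

    import requests

    MASTER = 'http://10.20.0.2'  # default Fuel admin node address

    def get_token(user='admin', password='admin'):
        creds = {'auth': {'passwordCredentials':
                          {'username': user, 'password': password},
                          'tenantName': 'admin'}}
        r = requests.post(MASTER + ':5000/v2.0/tokens', json=creds)
        return r.json()['access']['token']['id']

    def cluster_attributes(cluster_id):
        # Dump the settings payload of an environment; note that hidden
        # attributes live behind a separate endpoint, as Andrew notes below.
        r = requests.get(
            '%s:8000/api/clusters/%s/attributes' % (MASTER, cluster_id),
            headers={'X-Auth-Token': get_token()})
        return r.json()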
--
Best regards,
Oleg Gelbukh

On Wed, Jun 17, 2015 at 1:03 PM, Oleg Gelbukh ogelb...@mirantis.com wrote:

 As this topic is getting some traction, I will register a corresponding
 blueprint in Fuel and try to decompose the work based on what Andrew
 proposed.

 --
 Best regards,
 Oleg Gelbukh

 On Tue, Jun 16, 2015 at 3:54 PM, Oleg Gelbukh ogelb...@mirantis.com
 wrote:

 Andrew,

 I've also noticed that incompatible changes are being introduced in JSON
 schemas for different objects in almost every release. I hope that explicit
 reference that lists and explains all parameters will discourage such
 modifications, or at least will increase their visibility and allow to
 understand justifications for them.

 --
 Best regards,
 Oleg Gelbukh

 On Mon, Jun 15, 2015 at 4:21 PM, Andrew Woodward awoodw...@mirantis.com
 wrote:

 I think there is some desire to see more documentation around here as
 there are some odd interactions with parts of the data payload, and perhaps
 documenting these may improve some of them.

 I think the gaps in order of most used are:
 * node object create / update
  * environment networks (the fact that metadata can't be updated kills me)
 * environment settings (the separate api for hidden and non kills me)
 * release update
 * role add/update

 After these are updated I think we can move on to common but less used
 * node interface assignment
 * node disk assignment



 On Mon, Jun 15, 2015 at 8:09 AM Oleg Gelbukh ogelb...@mirantis.com
 wrote:

 Good day, fellow fuelers

 The Fuel API is a powerful tool that allows for very fine tuning of
 deployment settings and parameters, and we all know that the UI exposes
 only a fraction of the full range of attributes a client can pass to the
 Fuel installer.

 However, there is very little documentation that explains what settings
 are accepted by Fuel objects, what their meanings are and what their
 syntax is. There is a main reference document for the API [1], but it
 gives almost no insight into the payload of parameters that each entity
 accepts. What they are and what they are for seems to be mostly scattered
 tribal knowledge.

 I would like to understand if there is a need for such a document among
 developers and deployers who consume the Fuel API. Or maybe there is
 already such a document, or an effort to create it going on?

 --
 Best regards,
 Oleg Gelbukh

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 --
 Andrew Woodward
 Mirantis
 Fuel Community Ambassador
 Ceph Community


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Fuel API settings reference

2015-06-17 Thread Oleg Gelbukh
As this topic is getting some traction, I will register a corresponding
blueprint in Fuel and try to decompose the work based on what Andrew
proposed.

--
Best regards,
Oleg Gelbukh

On Tue, Jun 16, 2015 at 3:54 PM, Oleg Gelbukh ogelb...@mirantis.com wrote:

 Andrew,

 I've also noticed that incompatible changes are being introduced in the
 JSON schemas for different objects in almost every release. I hope that an
 explicit reference that lists and explains all parameters will discourage
 such modifications, or at least increase their visibility and make it
 possible to understand the justifications for them.

 --
 Best regards,
 Oleg Gelbukh

 On Mon, Jun 15, 2015 at 4:21 PM, Andrew Woodward awoodw...@mirantis.com
 wrote:

 I think there is some desire to see more documentation around here as
 there are some odd interactions with parts of the data payload, and perhaps
 documenting these may improve some of them.

 I think the gaps in order of most used are:
 * node object create / update
 * environment networks (the fact that metadata can't be updated kills me)
 * environment settings (the separate API for hidden and non-hidden settings kills me)
 * release update
 * role add/update

 After these are updated I think we can move on to common but less used
 * node interface assignment
 * node disk assignment



 On Mon, Jun 15, 2015 at 8:09 AM Oleg Gelbukh ogelb...@mirantis.com
 wrote:

 Good day, fellow fuelers

 Fuel API is a powerful tool that allows for very fine tuning of deployment
 settings and parameters, and we all know that the UI exposes only a fraction
 of the full range of attributes a client can pass to the Fuel installer.

 However, there is very little documentation that explains what settings are
 accepted by Fuel objects, what they mean, and what their syntax is. There is
 a main reference document for the API [1], but it gives almost no insight
 into the payload of parameters that every entity accepts. Which settings
 exist and what they are for seems to be mostly scattered tribal knowledge.

 I would like to understand if there is a need for such a document among
 developers and deployers who consume the Fuel API. Or maybe there is already
 such a document, or an effort to create one going on?

 --
 Best regards,
 Oleg Gelbukh

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 --
 Andrew Woodward
 Mirantis
 Fuel Community Ambassador
 Ceph Community

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Fuel API settings reference

2015-06-16 Thread Oleg Gelbukh
Andrew,

I've also noticed that incompatible changes are being introduced in JSON
schemas for different objects in almost every release. I hope that an
explicit reference that lists and explains all parameters will discourage
such modifications, or at least increase their visibility and allow us to
understand the justifications for them.

--
Best regards,
Oleg Gelbukh

On Mon, Jun 15, 2015 at 4:21 PM, Andrew Woodward awoodw...@mirantis.com
wrote:

 I think there is some desire to see more documentation around here as
 there are some odd interactions with parts of the data payload, and perhaps
 documenting these may improve some of them.

 I think the gaps in order of most used are:
 * node object create / update
 * environment networks (the fact that metadata can't be updated kills me)
 * environment settings (the separate API for hidden and non-hidden settings kills me)
 * release update
 * role add/update

 After these are updated I think we can move on to common but less used
 * node interface assignment
 * node disk assignment



 On Mon, Jun 15, 2015 at 8:09 AM Oleg Gelbukh ogelb...@mirantis.com
 wrote:

 Good day, fellow fuelers

 Fuel API is a powerful tool that allows for very fine tuning of deployment
 settings and parameters, and we all know that the UI exposes only a fraction
 of the full range of attributes a client can pass to the Fuel installer.

 However, there is very little documentation that explains what settings are
 accepted by Fuel objects, what they mean, and what their syntax is. There is
 a main reference document for the API [1], but it gives almost no insight
 into the payload of parameters that every entity accepts. Which settings
 exist and what they are for seems to be mostly scattered tribal knowledge.

 I would like to understand if there is a need for such a document among
 developers and deployers who consume the Fuel API. Or maybe there is already
 such a document, or an effort to create one going on?

 --
 Best regards,
 Oleg Gelbukh
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 --
 Andrew Woodward
 Mirantis
 Fuel Community Ambassador
 Ceph Community

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Fuel API settings reference

2015-06-15 Thread Oleg Gelbukh
Good day, fellow fuelers

Fuel API is a powerful tool that allows for very fine tuning of deployment
settings and parameters, and we all know that the UI exposes only a fraction
of the full range of attributes a client can pass to the Fuel installer.

However, there is very little documentation that explains what settings are
accepted by Fuel objects, what they mean, and what their syntax is. There is
a main reference document for the API [1], but it gives almost no insight
into the payload of parameters that every entity accepts. Which settings
exist and what they are for seems to be mostly scattered tribal knowledge.

I would like to understand if there is a need for such a document among
developers and deployers who consume the Fuel API. Or maybe there is already
such a document, or an effort to create one going on?

--
Best regards,
Oleg Gelbukh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Setting cluster status when provisioning a node

2015-05-27 Thread Oleg Gelbukh
Excellent, nice to know that we're on the same page about this.

Thank you!

--
Best regards,
Oleg Gelbukh

On Wed, May 27, 2015 at 12:22 PM, Roman Prykhodchenko m...@romcheg.me wrote:

 Oleg,

 Thanks for the feedback. I have the following as a response:

 1. This spec is just an excerpt for scoping the proposed improvement in
 the 7.0 release plan. If it gets scoped, the full specification will go
 through a standard review process, so it will be possible to discuss names
 along with the rest of the details then.

 2. It's already noted in the spec that the status is generated using an
 aggregate query like you described, so I don't propose to store it. Storing
 that data would require sophisticated algorithms to work with it and would
 also lead to more locks or race conditions in the database. So yes, it's
 going to be a method.


 - romcheg


 27 трав. 2015 о 08:19 Oleg Gelbukh ogelb...@mirantis.com написав(ла):

 Roman,

 This looks like a great solution to me, and I like your proposal very much.
 The status of the cluster, derived directly from the statuses of its nodes,
 is exactly what I was thinking about.

 I have two notes on the proposal, and I can copy them to the etherpad if you
 think they deserve it:

 1) The status name 'operational' seems a bit unclear to me, as it sounds more
 like something Monitoring should report: it implies that the actual
 OpenStack environment is operational, which might or might not be the case,
 and Fuel has no way to tell. I would really prefer if that status name were
 'Deployed' or something along those lines.

 2) I'm not sure if we need to keep the complex status of the cluster
 explicitly in the 'cluster' table in the format you suggest. This information
 can be taken directly from the 'nodes' table in the Nailgun DB. For example,
 getting it in the second form you propose is as simple as:

 nailgun=> SELECT status, count(status) FROM nodes GROUP BY status;
 discover|1
 ready|5

 What do you think about making it a method rather than an element of the
 data model? Or is that exactly the complexity you want to get rid of?

 --
 Best regards,
 Oleg Gelbukh


 On Tue, May 26, 2015 at 4:16 PM, Roman Prykhodchenko m...@romcheg.me
 wrote:

 Oleg,

 Aleksander also proposed a nice solution [1], which is to have a complex
 status for the cluster. That, however, looks like a BP, so I've created an
 excerpt [2] for it and we will try to discuss and scope it for 7.0, if
 there is a consensus.


 References:

 1.
 http://lists.openstack.org/pipermail/openstack-dev/2015-May/064670.html
 2. https://etherpad.openstack.org/p/fuel-cluster-complex-status


 - romcheg

 22 трав. 2015 о 22:32 Oleg Gelbukh ogelb...@mirantis.com написав(ла):

 Roman,

 I'm totally for fixing Nailgun. However, the status of an environment is
 not simply a function of the statuses of the nodes in it. Ideally, it should
 depend on whether an appropriate number of nodes of certain roles are in the
 'ready' status. For the time being, it would be enough if the environment
 was set to 'operational' when all nodes in it become 'ready', no matter how
 they were deployed (i.e. via the Web UI or CLI).

 --
 Best regards,
 Oleg Gelbukh

 On Fri, May 22, 2015 at 5:33 PM, Roman Prykhodchenko m...@romcheg.me
 wrote:

 Hi folks!

 Recently I encountered an issue [1]: the Deploy Changes button in the web
 UI is still active when provisioning of a single node is started using the
 command-line client.
 The background for that issue is that the provisioning task does not seem
 to update the cluster status correctly, and Nailgun's API returns it as NEW
 even while some of the nodes are being provisioned.

 The reason for raising this thread in the mailing list is that
 provisioning a node is a feature for developers and basically end-users
 should not do that. What is the best solution for that: fix Nailgun to set
 the correct status, or make this provisioning feature available only for
 developers?

 1. https://bugs.launchpad.net/fuel/7.0.x/+bug/1449086


 - romcheg



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org
 ?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Fuel] Setting cluster status when provisioning a node

2015-05-27 Thread Oleg Gelbukh
Roman,

This looks like a great solution to me, and I like your proposal very much.
The status of the cluster, derived directly from the statuses of its nodes,
is exactly what I was thinking about.

I have two notes on the proposal, and I can copy them to the etherpad if you
think they deserve it:

1) The status name 'operational' seems a bit unclear to me, as it sounds more
like something Monitoring should report: it implies that the actual
OpenStack environment is operational, which might or might not be the case,
and Fuel has no way to tell. I would really prefer if that status name were
'Deployed' or something along those lines.

2) I'm not sure if we need to keep the complex status of the cluster
explicitly in the 'cluster' table in the format you suggest. This information
can be taken directly from the 'nodes' table in the Nailgun DB. For example,
getting it in the second form you propose is as simple as:

nailgun=> SELECT status, count(status) FROM nodes GROUP BY status;
discover|1
ready|5

What do you think about making it a method rather than an element of the
data model? Or is that exactly the complexity you want to get rid of?
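
For illustration, a minimal sketch of such a derived-status method (the class
and status names here are assumptions based on this thread, not actual
Nailgun code):

from collections import Counter


class Cluster(object):
    def __init__(self, nodes):
        self.nodes = nodes  # objects with a .status attribute

    @property
    def status(self):
        # Derive the cluster status from node statuses instead of storing it.
        counts = Counter(node.status for node in self.nodes)
        if counts and set(counts) == {'ready'}:
            return 'deployed'  # every node is 'ready'
        if 'error' in counts:
            return 'error'
        if 'provisioning' in counts or 'deploying' in counts:
            return 'in_progress'
        return 'new'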

--
Best regards,
Oleg Gelbukh


On Tue, May 26, 2015 at 4:16 PM, Roman Prykhodchenko m...@romcheg.me wrote:

 Oleg,

 Aleksander also proposed a nice solution [1], which is to have a complex
 status for the cluster. That, however, looks like a BP, so I've created an
 excerpt [2] for it and we will try to discuss and scope it for 7.0, if
 there is a consensus.


 References:

 1. http://lists.openstack.org/pipermail/openstack-dev/2015-May/064670.html
 2. https://etherpad.openstack.org/p/fuel-cluster-complex-status


 - romcheg

 22 трав. 2015 о 22:32 Oleg Gelbukh ogelb...@mirantis.com написав(ла):

 Roman,

 I'm totally for fixing Nailgun. However, the status of an environment is
 not simply a function of the statuses of the nodes in it. Ideally, it should
 depend on whether an appropriate number of nodes of certain roles are in the
 'ready' status. For the time being, it would be enough if the environment
 was set to 'operational' when all nodes in it become 'ready', no matter how
 they were deployed (i.e. via the Web UI or CLI).

 --
 Best regards,
 Oleg Gelbukh

 On Fri, May 22, 2015 at 5:33 PM, Roman Prykhodchenko m...@romcheg.me
 wrote:

 Hi folks!

 Recently I encountered an issue [1]: the Deploy Changes button in the web
 UI is still active when provisioning of a single node is started using the
 command-line client.
 The background for that issue is that the provisioning task does not seem
 to update the cluster status correctly, and Nailgun's API returns it as NEW
 even while some of the nodes are being provisioned.

 The reason for raising this thread in the mailing list is that
 provisioning a node is a feature for developers and basically end-users
 should not do that. What is the best solution for that: fix Nailgun to set
 the correct status, or make this provisioning feature available only for
 developers?

 1. https://bugs.launchpad.net/fuel/7.0.x/+bug/1449086


 - romcheg


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Setting cluster status when provisioning a node

2015-05-22 Thread Oleg Gelbukh
Roman,

I'm totally for fixing Nailgun. However, the status of an environment is
not simply a function of the statuses of the nodes in it. Ideally, it should
depend on whether an appropriate number of nodes of certain roles are in the
'ready' status. For the time being, it would be enough if the environment
was set to 'operational' when all nodes in it become 'ready', no matter how
they were deployed (i.e. via the Web UI or CLI).

--
Best regards,
Oleg Gelbukh

On Fri, May 22, 2015 at 5:33 PM, Roman Prykhodchenko m...@romcheg.me wrote:

 Hi folks!

 Recently I encountered an issue [1]: the Deploy Changes button in the web
 UI is still active when provisioning of a single node is started using the
 command-line client.
 The background for that issue is that the provisioning task does not seem
 to update the cluster status correctly, and Nailgun's API returns it as NEW
 even while some of the nodes are being provisioned.

 The reason for raising this thread in the mailing list is that
 provisioning a node is a feature for developers and basically end-users
 should not do that. What is the best solution for that: fix Nailgun to set
 the correct status, or make this provisioning feature available only for
 developers?

 1. https://bugs.launchpad.net/fuel/7.0.x/+bug/1449086


 - romcheg


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] python-fuelclient 6.1.0 is released

2015-05-19 Thread Oleg Gelbukh
Roman,

This is awesome news! Thank you for this huge improvement for developers
who consume the Fuel API.

Could you please elaborate on backwards compatibility between the new
client and older versions of the Fuel API? For example, is it possible to use
the new client to work with Fuel 4.x? 5.x?

--
Best regards,
Oleg Gelbukh

On Fri, May 15, 2015 at 5:39 PM, Roman Prykhodchenko m...@romcheg.me wrote:

 Hi folks!

 I’m glad to announce that the first independent release of Fuel Client was
 published to PyPi: https://pypi.python.org/pypi/python-fuelclient
 You can either download it from the web page or install with pip install
 python-fuelclient.

 What’s new:

  - Fuel client is now able to run most of its features remotely from
 Fuel's master node.
  - The old CLI was deprecated; the new fuel2 utility is a preview of the new
 Fuel client which will be available in the next major release
  - The versioning scheme of the Fuel Client is not tightly bound to Fuel's
 version scheme anymore.
  - Other improvements and bug-fixes


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [UI] Sorting and filtering of node list

2015-02-20 Thread Oleg Gelbukh
Aleksey,

Thank you for clarification. Personally, I'm more interested in IP-based
display/grouping/filtering of deployed nodes.

And yes, it would be super-useful to have filtering in the back-end and API,
not only in the UI.

--
Best regards,
Oleg

On Fri, Feb 20, 2015 at 12:30 PM, Aleksey Kasatkin akasat...@mirantis.com
wrote:

 Oleg, The problem with IP addresses (for all networks but admin-pxe) is
 that they are not available until deployment is started or
 /clusters/(?P<cluster_id>\d+)/orchestrator/deployment/defaults/ is called.
 Nailgun just doesn't allocate them in advance. It was discussed some time
 before (
 https://blueprints.launchpad.net/fuel/+spec/assign-ips-on-nodes-addition
 ) but not planned yet. There is no problem with admin-pxe addresses though.

 I agree that filtering is better done in the backend, but it seems that it
 will not be done soon. AFAIC, it will not make 6.1.
 We didn't even decide what to do with API versioning yet.


 Aleksey Kasatkin


 On Thu, Feb 19, 2015 at 12:05 PM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:

 I think all these operations for nodes (grouping, sorting, filtering) can
 be done on the backend, but we can do it completely on the UI side and
 shouldn't wait for backend implementation. We can switch to it after it
 becomes available.

 2015-02-17 19:44 GMT+07:00 Sergey Vasilenko svasile...@mirantis.com:

 +1, sorting should be there...

 Pagination may be, too, but not activated by default.


 /sv




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Vitaly Kramskikh,
 Fuel UI Tech Lead,
 Mirantis, Inc.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [UI] Sorting and filtering of node list

2015-02-16 Thread Oleg Gelbukh
Julia,

It would be nice to add grouping by Status to the existing 'Grouping'
dropdown. It would save some time finding faulty/offline nodes in the list
and performing bulk actions (like Delete) on them.

Another useful feature for large deployments would be an ability to see IP
addresses of nodes (including Management and Public addresses) in the UI
and group/sort by those addresses.

--
Best regards,
Oleg Gelbukh
Mirantis Labs

On Sat, Feb 14, 2015 at 11:27 AM, Julia Aranovich jkirnos...@mirantis.com
wrote:

 Hi All,

 Currently we [Fuel UI team] are planning the features of *sorting and
 filtering of the node list* to introduce in the 6.1 release.

 Now a user can filter nodes just by name or MAC address, and no sorters
 are available. That is rather poor UI for managing a 200+ node environment.
 So, the current suggestion is to filter and sort nodes by the following
 parameters:

1. name
2. manufacturer
3. IP address
4. MAC address
5. CPU
6. memory
7. disks total size (we need to think about less than/more than
representation)
8. interfaces speed
9. status (Ready, Pending Addition, Error, etc.)
10. roles


 It will be a form-based filter. Items [1-4] should go into a single text
 input, and the others get separate controls.
 There is also an idea to translate a user's filter selection into a query
 and add it to the location string, like it's done for the logs search:
 *#cluster/x/logs/type:local;source:api;level:info*.
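
 For illustration only, a minimal sketch of how such a location string could
 be parsed (the format follows the logs-search example above; the function
 name is made up):

 def parse_filter(query):
     # Parse 'status:error;roles:controller,compute' into a dict of lists.
     filters = {}
     for pair in query.split(';'):
         if not pair:
             continue
         name, _, values = pair.partition(':')
         filters[name] = values.split(',')
     return filters

 # parse_filter('status:error;roles:controller,compute')
 # -> {'status': ['error'], 'roles': ['controller', 'compute']}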

 Please also note that the changes we are thinking about should not affect
 backend code.


 I will be very grateful if you share your ideas about this or describe some
 of the cases that would be useful to you in work with real deployments.
 We would like to introduce really useful tools based on your feedback.


 Best regards,
 Julia

 --
 Kind Regards,
 Julia Aranovich,
 Software Engineer,
 Mirantis, Inc
 +7 (905) 388-82-61 (cell)
 Skype: juliakirnosova
 www.mirantis.ru
 jaranov...@mirantis.com jkirnos...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of kickstart/preseed for all NEW releases

2015-01-27 Thread Oleg Gelbukh
Gentlemen,

I have one small question about IBP, and I'm not sure if this is the right
place to ask, but still: how do you plan to build the images for
image-based provisioning? Will you leverage diskimage-builder
(https://github.com/openstack/diskimage-builder) or some other tool?

Thanks,


--
Best regards,
Oleg Gelbukh
Mirantis Labs

On Tue, Jan 27, 2015 at 10:55 PM, Andrew Woodward xar...@gmail.com wrote:

 I don't see this as crazy, it's not a feature of the cloud, it's a
 mechanism to get us there. It's not even something that most of the
 time anyone sees. Continuing to waste time supporting something we are
 ready to replace, and have been testing for a release already, is
 crazy. It adds to the technical debt around provisioning that is
 broken a lot of the time. We spend around 11% of all fuel-library
 commits on cobbler, templates, pmanager, etc.

 It's also not about removing it; it will continue to be present to support
 prior releases, so it's even still available if we can't make IBP work
 the way we need to.

 On Tue, Jan 27, 2015 at 2:23 AM, Vladimir Kozhukalov
 vkozhuka...@mirantis.com wrote:
  Guys,
 
  First, we are not talking about deliberately disabling the preseed-based
  approach just because we are so crazy. The question is: what is the best
  way to achieve our 6.1 goals? We definitely need to be able to install two
  versions of Ubuntu, 12.04 and 14.04. Those versions have different sets of
  packages (for example, ntp-related ones) and we install some of those
  packages during provisioning (we point out which packages we need, with
  their versions). To make this work with the preseed-based approach we need
  either to insert some IF release==6.1 conditional lines into the preseed
  (not very beautiful, is it?) or to create different Distros and Profiles
  for different releases. The second is not a problem for Cobbler, but it is
  for nailgun/astute, because we do not deal with that stuff and it looks
  like we cannot implement this easily.

  IMO, the only options we have are to insert IFs into the preseed (I would
  say it is no more reliable than IBP) or to refuse the preseed approach for
  ONLY NEW UPCOMING releases. You can call it crazy, but for me having a set
  of IFs together with pmanager.py, which are absolutely difficult to
  maintain, is crazy.
 
 
 
  Vladimir Kozhukalov
 
  On Tue, Jan 27, 2015 at 3:03 AM, Andrew Woodward xar...@gmail.com
 wrote:
 
  On Mon, Jan 26, 2015 at 10:47 AM, Sergii Golovatiuk
  sgolovat...@mirantis.com wrote:
   Until we are sure IBP solves operation phase where we need to deliver
   updated packages so client will be able to provision new machines with
   these
   fixed packages, I would leave backward compatibility with normal
   provision.
   ... Just in case.
 
  doesn't running 'apt-get upgrade' or 'yum update' after laying out the
  FS image resolve the gap until we can rebuild the images on the fly?
  
  
  
   --
   Best regards,
   Sergii Golovatiuk,
   Skype #golserge
   IRC #holser
  
   On Mon, Jan 26, 2015 at 4:56 PM, Vladimir Kozhukalov
   vkozhuka...@mirantis.com wrote:
  
    My suggestion is to make IBP the only option available for all upcoming
    OpenStack releases which are defined in openstack.yaml. It will still be
    possible to install the OS using kickstart for all currently available
    OpenStack releases.
  
   Vladimir Kozhukalov
  
   On Mon, Jan 26, 2015 at 6:22 PM, Igor Kalnitsky
   ikalnit...@mirantis.com
   wrote:
  
    Just want to be sure I understand you correctly: do you propose to
    FORBID the kickstart/preseed installation way in the upcoming release
    at all?
  
   On Mon, Jan 26, 2015 at 3:59 PM, Vladimir Kozhukalov
   vkozhuka...@mirantis.com wrote:
Subject is changed.
   
Vladimir Kozhukalov
   
On Mon, Jan 26, 2015 at 4:55 PM, Vladimir Kozhukalov
vkozhuka...@mirantis.com wrote:
   
Dear Fuelers,
   
As you might know, we need it to be possible to install several versions of
a particular OS (Ubuntu and CentOS) by 6.1. Since having different OS
versions also means having different sets of packages, and some of the
packages are installed and configured during the provisioning stage, we
need to have a kind of kickstart/preseed versioning mechanism.
   
Cobbler is exactly such a mechanism. It allows us to have several Distros
(installer images) and Profiles (kickstart/preseed files). But
unfortunately, for some reason we have not been using those Cobbler
capabilities since the beginning of Fuel, and it doesn't seem easy to
introduce the whole Cobbler life cycle into Nailgun.
   
Anyway, we are moving towards IBP (image-based provisioning), and we
already have different images connected to different OpenStack releases
(openstack.yaml), and everything else which is necessary for initial node
configuration is serialized inside the provision data (including the
profile name, like 'ubuntu_1204' or 'ubuntu_1404

Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-05 Thread Oleg Gelbukh
Hello,

As far as I can tell, disk space monitoring is pretty useless unless Fuel
provides the user with some means to automatically clean up stored data (e.g.
remove obsolete diagnostic snapshots). Otherwise, it will only be useful for
experienced Fuel developers who know how to properly clean up the Master
node.

--
Best regards,
Oleg Gelbukh

On Tue, Nov 4, 2014 at 3:26 PM, Przemyslaw Kaminski pkamin...@mirantis.com
wrote:

 Hello,

 In extension to my comment in this bug [1] I'd like to discuss the
 possibility of adding Fuel master node monitoring. As I wrote in the
 comment, when the disk is full it might already be too late to perform any
 action since, for example, Nailgun could be down because the DB shut itself
 down. So we should somehow warn the user that disk space is running low (in
 the UI and the fuel CLI on stderr, for example) before it actually happens.

 For now the only meaningful value to monitor would be disk usage -- do you
 have other suggestions? If not, then probably a simple API endpoint with
 statvfs calls would suffice. If you see other uses for this, then maybe it
 would be better to have some daemon collecting the stats we want.
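
 For illustration, a minimal sketch of such a statvfs-based check (the
 function names and the threshold are made up, not an agreed design):

 import os

 def disk_usage_percent(path='/'):
     # Used space of the filesystem holding 'path', in percent.
     st = os.statvfs(path)
     total = st.f_blocks * st.f_frsize
     free = st.f_bavail * st.f_frsize
     return 100.0 * (total - free) / total

 def check_free_space(path='/', warn_at=90.0):
     # What a simple monitoring endpoint could return.
     used = disk_usage_percent(path)
     return {'path': path, 'used_percent': used, 'warning': used >= warn_at}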

 If we opted for a daemon, then I'm aware that the user can optionally
 install a Zabbix server, although looking at the blueprints in [2] I don't
 see anything about monitoring the Fuel master itself -- is it possible to
 do? The installation of Zabbix, though, is not mandatory, so it still
 doesn't completely solve the problem.

 [1] https://bugs.launchpad.net/fuel/+bug/1371757
 [2] https://blueprints.launchpad.net/fuel/+spec/monitoring-system

 Przemek

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] REST API access to configuration options

2014-07-16 Thread Oleg Gelbukh
On Tue, Jul 15, 2014 at 1:08 PM, Mark McLoughlin mar...@redhat.com wrote:

 Also, this is going to tell you how the API service you connected to was
 configured. Where there are multiple API servers, what about the others?
 How do operators verify all of the API servers behind a load balancer
 with this?

 And in the case of something like Nova, what about the many other nodes
 behind the API server?


A query for configuration could be a part of the /hypervisors API extension.
It doesn't solve the multiple API servers issue, though.

--
Best regards,
Oleg Gelbukh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rubick] Proposal to make py33 job voting for stackforge/rubick

2014-06-30 Thread Oleg Gelbukh
Hi,

To proceed with this, I have sent a (presumably) appropriate change to
review: https://review.openstack.org/#/c/103516/

--
Best regards,
Oleg Gelbukh



On Fri, Jun 27, 2014 at 6:40 PM, Jay Pipes jaypi...@gmail.com wrote:

 Sure, why not? :)


 On 06/27/2014 06:25 AM, Oleg Gelbukh wrote:

 Hello,

 As our commits have consistently passed py33 tests for the last month
 (although not so many changes were made), I propose to enable py33 job
 voting on the stackforge/rubick repository.

 What do you think?

 --
 Best regards,
 Oleg Gelbukh


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Using __init__.py files

2014-06-30 Thread Oleg Gelbukh
Renat,

As far as I can tell, it is a de-facto standard not to place anything at all
in __init__.py across the majority of OpenStack projects.
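
For illustration, a minimal sketch of that convention (the module paths are
hypothetical, loosely modeled on this thread):

# mistral/engine/engine.py -- the actual implementation lives in a module:
class Engine(object):
    pass  # one of the most important parts, kept out of __init__.py

# mistral/engine/__init__.py -- left empty, or at most re-exporting:
# from mistral.engine.engine import Engine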

--
Best regards,
Oleg Gelbukh


On Mon, Jun 30, 2014 at 3:50 PM, Renat Akhmerov rakhme...@mirantis.com
wrote:

 Hi,

 What would be your opinion on the question “Should we place any important
 functionality into __init__.py files or just use it for package level
 initialization and exporting variables from module level to a package
 level?”.

 I personally would prefer not to keep anything like the Engine class there
 (which is one of the most important parts of Mistral now). It's somewhat
 confusing to me, especially when I navigate through the project structure.
 It's not a critical or urgent thing, of course, but it would be nice if you
 shared your opinion.

 What do you guys think?

 Renat Akhmerov
 @ Mirantis Inc.




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rubick] Proposal to make py33 job voting for stackforge/rubick

2014-06-27 Thread Oleg Gelbukh
Hello,

As our commits have consistently passed py33 tests for the last month
(although not so many changes were made), I propose to enable py33 job
voting on the stackforge/rubick repository.

What do you think?

--
Best regards,
Oleg Gelbukh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-10 Thread Oleg Gelbukh
Hello, Jay,

As a fork of nova-scheduler, Gantt will most likely handle initial
placement. However, even nova-scheduler now supports some runtime
operations (for example, scheduling of evacuated/migrated instances).

Given that runtime scheduling arises on this list regularly, I guess such
features will make their way into the Scheduler service eventually.

--
Best regards,
Oleg Gelbukh


On Wed, Apr 9, 2014 at 7:47 PM, Jay Lau jay.lau@gmail.com wrote:

 @Oleg, up to now I'm not sure about the target of Gantt: is it for initial
 placement policy, runtime policy, or both? Can you help clarify?

 @Henrique, not sure if you know IBM PRS (Platform Resource Scheduler) [1];
 we have finished the dynamic scheduler in our Icehouse version (PRS 2.2),
 and it has exactly the same feature as you described; we are planning a live
 demo of this feature at the Atlanta Summit. I'm also writing a document on
 runtime policy which will cover more runtime policies for OpenStack, but it
 is not finished yet (shame on me for the slow progress). The related
 blueprint is [2], and you can also find some discussion at [3].

 [1]
 http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=ANsubtype=CAhtmlfid=897/ENUS213-590appname=USN
 [2]
 https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
 [3] http://markmail.org/~jaylau/OpenStack-DRS

 Thanks.


 2014-04-09 23:21 GMT+08:00 Oleg Gelbukh ogelb...@mirantis.com:

 Henrique,

 You should check out Gantt project [1], it could be exactly the place to
 implement such features. It is a generic cross-project Scheduler as a
 Service forked from Nova recently.

 [1] https://github.com/openstack/gantt

 --
 Best regards,
 Oleg Gelbukh
 Mirantis Labs


 On Wed, Apr 9, 2014 at 6:41 PM, Henrique Truta 
 henriquecostatr...@gmail.com wrote:

 Hello, everyone!

 I am currently a graduate student and member of a group of contributors to
 OpenStack. We believe that a dynamic scheduler could improve the efficiency
 of an OpenStack cloud, either by rebalancing nodes to maximize performance
 or to minimize the number of active hosts, in order to minimize energy
 costs. Therefore, we would like to propose a dynamic scheduling mechanism
 for Nova. The main idea is using the Ceilometer information (e.g. RAM, CPU,
 disk usage) through the ceilometer-client and dynamically deciding whether
 an instance should be live-migrated.

 This might be done as a Nova periodic task, which will be executed once
 every given period, or as a new independent project. In both cases, the
 current Nova scheduler will not be affected, since this new scheduler will
 be pluggable. We have done a search and found no such initiative in the
 OpenStack BPs. Outside the community, we found only a recent IBM
 announcement of a similar feature in one of its cloud products.

 A possible flow is: In the new scheduler, we periodically make a call to
 Nova, get the instance list from a specific host and, for each instance, we
 make a call to the ceilometer-client (e.g. $ ceilometer statistics -m
 cpu_util -q resource=$INSTANCE_ID) and then, according to some specific
 parameters configured by the user, analyze the meters and do the proper
 migrations.

 Do you have any comments or suggestions?

 --
 Ítalo Henrique Costa Truta



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-10 Thread Oleg Gelbukh
Andrew,

Thank you for clarification!


On Thu, Apr 10, 2014 at 3:47 PM, Andrew Laski andrew.la...@rackspace.com wrote:


 The scheduler as it currently exists is a placement engine.  There is
 sufficient complexity in the scheduler with just that responsibility so I
 would prefer to see anything that's making runtime decisions separated out.
  Perhaps it could just be another service within the scheduler project once
 it's broken out, but I think it will be beneficial to have a clear
 distinction between placement decisions and runtime monitoring.


Do you think that auto-scaling could be considered another facet of this
'runtime monitoring' functionality? Currently it is a combination of Heat and
Ceilometer. Is it worth moving to a hypothetical runtime mobility service
as well?

--
Best regards,
Oleg Gelbukh





 --
 Best regards,
 Oleg Gelbukh


 On Wed, Apr 9, 2014 at 7:47 PM, Jay Lau jay.lau@gmail.com wrote:

 @Oleg, up to now I'm not sure about the target of Gantt: is it for initial
 placement policy, runtime policy, or both? Can you help clarify?

 @Henrique, not sure if you know IBM PRS (Platform Resource Scheduler) [1];
 we have finished the dynamic scheduler in our Icehouse version (PRS 2.2),
 and it has exactly the same feature as you described; we are planning a live
 demo of this feature at the Atlanta Summit. I'm also writing a document on
 runtime policy which will cover more runtime policies for OpenStack, but it
 is not finished yet (shame on me for the slow progress). The related
 blueprint is [2], and you can also find some discussion at [3].

 [1]
 http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=
 ANsubtype=CAhtmlfid=897/ENUS213-590appname=USN
 [2]
 https://blueprints.launchpad.net/nova/+spec/resource-
 optimization-service
 [3] http://markmail.org/~jaylau/OpenStack-DRS

 Thanks.


 2014-04-09 23:21 GMT+08:00 Oleg Gelbukh ogelb...@mirantis.com:

 Henrique,


 You should check out Gantt project [1], it could be exactly the place to
 implement such features. It is a generic cross-project Scheduler as a
 Service forked from Nova recently.

 [1] https://github.com/openstack/gantt

 --
 Best regards,
 Oleg Gelbukh
 Mirantis Labs


 On Wed, Apr 9, 2014 at 6:41 PM, Henrique Truta 
 henriquecostatr...@gmail.com wrote:

  Hello, everyone!

 I am currently a graduate student and member of a group of contributors to
 OpenStack. We believe that a dynamic scheduler could improve the efficiency
 of an OpenStack cloud, either by rebalancing nodes to maximize performance
 or to minimize the number of active hosts, in order to minimize energy
 costs. Therefore, we would like to propose a dynamic scheduling mechanism
 for Nova. The main idea is using the Ceilometer information (e.g. RAM, CPU,
 disk usage) through the ceilometer-client and dynamically deciding whether
 an instance should be live-migrated.

 This might be done as a Nova periodic task, which will be executed once
 every given period, or as a new independent project. In both cases, the
 current Nova scheduler will not be affected, since this new scheduler will
 be pluggable. We have done a search and found no such initiative in the
 OpenStack BPs. Outside the community, we found only a recent IBM
 announcement of a similar feature in one of its cloud products.

 A possible flow is: In the new scheduler, we periodically make a call
 to
 Nova, get the instance list from a specific host and, for each
 instance, we
 make a call to the ceilometer-client (e.g. $ ceilometer statistics -m
 cpu_util -q resource=$INSTANCE_ID) and then, according to some specific
 parameters configured by the user, analyze the meters and do the proper
 migrations.

 Do you have any comments or suggestions?

 --
 Ítalo Henrique Costa Truta



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-09 Thread Oleg Gelbukh
Henrique,

You should check out Gantt project [1], it could be exactly the place to
implement such features. It is a generic cross-project Scheduler as a
Service forked from Nova recently.

[1] https://github.com/openstack/gantt

--
Best regards,
Oleg Gelbukh
Mirantis Labs


On Wed, Apr 9, 2014 at 6:41 PM, Henrique Truta henriquecostatr...@gmail.com
 wrote:

 Hello, everyone!

 I am currently a graduate student and member of a group of contributors to
 OpenStack. We believe that a dynamic scheduler could improve the efficiency
 of an OpenStack cloud, either by rebalancing nodes to maximize performance
 or to minimize the number of active hosts, in order to minimize energy
 costs. Therefore, we would like to propose a dynamic scheduling mechanism
 for Nova. The main idea is using the Ceilometer information (e.g. RAM, CPU,
 disk usage) through the ceilometer-client and dynamically deciding whether
 an instance should be live-migrated.

 This might be done as a Nova periodic task, which will be executed once
 every given period, or as a new independent project. In both cases, the
 current Nova scheduler will not be affected, since this new scheduler will
 be pluggable. We have done a search and found no such initiative in the
 OpenStack BPs. Outside the community, we found only a recent IBM
 announcement of a similar feature in one of its cloud products.

 A possible flow is: In the new scheduler, we periodically make a call to
 Nova, get the instance list from a specific host and, for each instance, we
 make a call to the ceilometer-client (e.g. $ ceilometer statistics -m
 cpu_util -q resource=$INSTANCE_ID) and then, according to some specific
 parameters configured by the user, analyze the meters and do the proper
 migrations.
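
 For illustration, a rough sketch of such a periodic check (purely
 hypothetical: pre-built novaclient/ceilometerclient v2 clients are assumed,
 and the threshold logic is made up):

 def balance_host(nova, ceilometer, host, cpu_threshold=80.0):
     # Live-migrate instances away from a host whose instances run hot.
     servers = nova.servers.list(search_opts={'host': host, 'all_tenants': 1})
     for server in servers:
         query = [{'field': 'resource_id', 'op': 'eq', 'value': server.id}]
         stats = ceilometer.statistics.list(meter_name='cpu_util',
                                            q=query, period=600)
         if stats and stats[-1].avg > cpu_threshold:
             # Let the regular Nova scheduler pick the target host.
             server.live_migrate(host=None)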

 Do you have any comments or suggestions?

 --
 Ítalo Henrique Costa Truta



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-24 Thread Oleg Gelbukh
What does PL stand for, anyway?

--
Best regards,
Oleg Gelbukh


On Mon, Mar 24, 2014 at 11:39 AM, Serg Melikyan smelik...@mirantis.com wrote:

 because 'dsl'/'language' terms are too broad.
  Too broad in general, but we are choosing a name for a sub-package, and in
  Murano the term 'language' means MuranoPL.

 +1 for language


On Mon, Mar 24, 2014 at 11:26 AM, Timur Sufiev tsuf...@mirantis.com wrote:

 +1 for muranoapi.engine.murano_pl, because 'dsl'/'language' terms are too
 broad.

 On Mon, Mar 24, 2014 at 12:48 AM, Timur Nurlygayanov
 tnurlygaya...@mirantis.com wrote:
  Hi Serg,
 
  This idea sounds good, I suggest to use name 'murano.engine.murano_pl'
 (not
  just common name like 'language' or 'dsl', but name, which will be
 based on
  'MuranoPL')
 
  Do we plan to support the ability to define different languages for
 Murano
  Engine?
 
 
  Thank you!
 
 
  On Sun, Mar 23, 2014 at 1:05 PM, Serg Melikyan smelik...@mirantis.com
  wrote:
 
   There is an idea to separate the core of the MuranoPL implementation from
   engine-specific code, like it was done in the PoC. When these two things
   are separated into different packages, we will be able to keep our
   language core as clean as possible from engine-specific code. This will
   give us the ability to easily extract our language implementation into a
   library.
  
   The question is: under what name should we place the core of MuranoPL?
 
  1) muranoapi.engine.language;
  2) muranoapi.engine.dsl;
  3) suggestions?
 
  --
  Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
  http://mirantis.com | smelik...@mirantis.com
 
  +7 (495) 640-4904, 0261
  +7 (903) 156-0836
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
 
  Timur,
  QA Engineer
  OpenStack Projects
  Mirantis Inc
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Timur Sufiev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] [smart-scenario-args]

2014-02-28 Thread Oleg Gelbukh
Sergey,

What do you think about adoption of/integration with other types of
resource definition languages used in OpenStack, for example, Heat
Orchestration Templates?

--
Best regards,
Oleg Gelbukh


On Thu, Feb 27, 2014 at 6:31 PM, Sergey Skripnick
sskripn...@mirantis.com wrote:


 Hello,

  Problem: what about deployment specific parts
 Template string in config? %imageid% or similar?
  Image name regex, rather than image name? So it can work with multiple
  deployments, e.g. ^cirros$



 so we have a few solutions for today: function, vars, and special args.


 FUNCTION
 
 args: {image_id: {$func: img_by_reg, $args: [ubuntu.*]}}

 Flexible, but the configuration looks complex.

 VARS
 
 vars : {
 $image1 : {$func: img_by_reg, $args: [ubuntu.*]},
 $image2: {$func: img_by_reg, $args: [centos.*]}
 }
 args: {
image_id: $image1,
alt_image_id: $image2
 }

 This may be an addition to the first solution, but personally to me it
 looks like overkill.

 SPECIAL ARGS
 
 args: {image_re: ubuntu.*}

 Very simple configuration, but less flexible than the others. IMO all three
 may be implemented.

 I vote for special args, and IMO functions may be implemented too.
 Please feel free to propose other solutions.
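
For illustration, a sketch of how a special arg like image_re could be
resolved before a scenario runs (the helper and the image list format are
made up):

import re

def resolve_image_re(images, pattern):
    # Return the id of the first image whose name matches the regex.
    regex = re.compile(pattern)
    for image in images:
        if regex.search(image['name']):
            return image['id']
    raise LookupError('no image matches %r' % pattern)

# resolve_image_re([{'name': 'ubuntu-14.04-x86_64', 'id': 'abc'}], 'ubuntu.*')
# -> 'abc'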

 --
 Regards,
 Sergey Skripnick

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Proposing changes in Rally core team

2014-02-05 Thread Oleg Gelbukh
+1 for Hugh, he's doing an excellent job moving the project forward.

--
Best regards,
Oleg Gelbukh


On Wed, Feb 5, 2014 at 2:22 PM, Sergey Skripnick sskripn...@mirantis.com wrote:


 +1 for Hugh, but IMO no need to rush with Alexei's removal

 Hi stackers,

 I would like to:

 1) Nominate Hugh Saunders to Rally core, he is doing a lot of good reviews
 (and always testing patches=) ):
 http://stackalytics.com/report/reviews/rally/30

  2) Remove Alexei from the core team, because unfortunately he is not able
  to work on Rally at this moment. Thank you, Alexei, for all the work that
  you have done.


 Thoughts?


 Best regards,
 Boris Pavlovic


 --
 Regards,
 Sergey Skripnick

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-22 Thread Oleg Gelbukh
Hello, Jaromir

On Wed, Jan 22, 2014 at 4:09 PM, Jaromir Coufal jcou...@redhat.com wrote:


 I am leaning towards Role. We can be more specific by adding some extra
 word, e.g.:
 * Node Role


We use this term a lot internally for a very similar purpose, so it looks
reasonable to me.
Just my 2c.

--
Best regards,
Oleg Gelbukh


 * Deployment Role
 ... and if we are in the context of the undercloud, people can shorten it to
 just Roles. But 'Resource Category' doesn't seem to me to solve
 anything.


 -- Jarda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Naming of a deployment

2014-01-20 Thread Oleg Gelbukh
I've finished the v0.1 spec of the Rally API: http://docs.rallyapi.apiary.io/

The only thing the spec is missing at the moment is a resource for Workloads
(/deployments/workloads). I will add this resource shortly.

Please, send your comments and suggestions.

--
Best regards,
Oleg Gelbukh


On Sun, Jan 19, 2014 at 11:28 AM, Oleg Gelbukh ogelb...@mirantis.com wrote:

 Yuriy, the idea is to choose something more or less general. 'Overcloud'
 would be too specific to my taste. It could also create confusion for
 users who want to deploy test targets with other tools, like Fuel or
 Devstack.

 --
 Best regards,
 Oleg Gelbukh


On Sun, Jan 19, 2014 at 1:17 AM, Yuriy Taraday yorik@gmail.com wrote:

 Hi all.

 I might be a little out of context, but isn't that thing deployed on some
 kind of cloud?


 * cluster -- is too generic, but also has connotations in HPC and
 various other technologies (databases, MQs, etc).

 * installation -- reminds me of a piece of performance art ;)

 * instance -- too much cross-terminology with server instance in Nova
 and Ironic


 In which case I'd suggest borrowing another option from TripleO:
 overcloud.

 --

 Kind regards, Yuriy.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Naming of a deployment

2014-01-18 Thread Oleg Gelbukh
Yuriy, the idea is to choose something more or less general. 'Overcloud'
would be too specific to my taste. It could also create confusion for
users who want to deploy test targets with other tools, like Fuel or
Devstack.

--
Best regards,
Oleg Gelbukh


On Sun, Jan 19, 2014 at 1:17 AM, Yuriy Taraday yorik@gmail.com wrote:

 Hi all.

 I might be a little out of context, but isn't that thing deployed on some
 kind of cloud?


 * cluster -- is too generic, but also has connotations in HPC and
 various other technologies (databases, MQs, etc).

 * installation -- reminds me of a piece of performance art ;)

 * instance -- too much cross-terminology with server instance in Nova
 and Ironic


 In which case I'd suggest borrowing another option from TripleO:
 overcloud.

 --

 Kind regards, Yuriy.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Naming of a deployment

2014-01-16 Thread Oleg Gelbukh
On Thu, Jan 16, 2014 at 2:21 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Wed, 2014-01-15 at 21:46 +, Hugh Saunders wrote:
  On 15 January 2014 21:14, Ilya Kharin ikha...@mirantis.com wrote:
 
  Hi, guys,
 
  In Rally there is an entity that represents installed instance
  of OpenStack.
  What you think about a proper name for the entity? (a
  Deployment, a Cluster, an Installation, an Instance or
  something else)
 
  I vote for Deployment.


Doesn't it sound a bit weird to deploy a Deployment? Otherwise, it does not
really matter what it is called, as long as the naming is consistent.

I have another question. Should we think about separating the Deployment and
Endpoint entities in the API? A Deployment is an object managed by the
deployment engine, while an Endpoint can refer to an existing installation
which has nothing to do with the deployment engine. It means that different
sets of operations are applicable to those entities. What do you think?

--
Best regards,
Oleg Gelbukh



 ++

 Best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Disk Eraser

2014-01-16 Thread Oleg Gelbukh
On Wed, Jan 15, 2014 at 10:25 PM, Alan Kavanagh
alan.kavan...@ericsson.com wrote:

  Cheers Guys



 So what would you recommend, Oleg? Yes, it's for a Linux system.


Alan,

The approach proposed below (/dev/zero) is probably better, as it performs
at around 60MB/s. Another approach that I've seen flying around is to
generate a random string and use its hashes for dd. There are some
one-liners out there which do that with openssl; just one example:

openssl enc -aes-256-ctr -pass pass:$(dd if=/dev/urandom bs=128
count=1 2>/dev/null | base64) -nosalt < /dev/zero > randomfile.bin

Hope this helps.

--
Best regards,
Oleg Gelbukh


 /Alan



 *From:* Oleg Gelbukh [mailto:ogelb...@mirantis.com]
 *Sent:* January-15-14 10:30 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [ironic] Disk Eraser





 On Wed, Jan 15, 2014 at 6:42 PM, Alexei Kornienko 
 alexei.kornie...@gmail.com wrote:

 If you are working on a Linux system, the following can help you:

 dd if=/dev/urandom of=/dev/sda bs=4k



 I would not recommend that as /dev/urandom is really slow (10-15 MB/s).



 --

 Best regards,

 Oleg Gelbukh




 :)
 Best Regards,



 On 01/15/2014 04:31 PM, Alan Kavanagh wrote:

   Hi fellow OpenStackers



 Does anyone have any recommendations on open source tools for disk
 erasure/data destruction software? I have so far looked at DBAN and Disk
 Scrubber and was wondering if the Ironic team has some better recommendations?



 BR

 Alan




Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Oleg Gelbukh
Nachi,

Thanks for bringing this up. We've been thinking a lot about handling of
configurations while working on Rubick.

In my understanding, oslo.config could provide an interface to different
back-ends for storing configuration parameters. It could be a simple
centralized alternative to configuration files, like a key-value store or an
SQL database. It could also be something more complicated, like a service of
its own (Configuration-as-a-Service) with cross-service validation
capabilities, etc.

By the way, configuration as a service was mentioned in the Solum session at
the last summit, which implies that such a service could have more than one
application.

The first step toward this could be abstracting the back-end in oslo.config
and implementing a simplistic driver, SQL or key-value storage. This could
help outline the requirements for a future configuration service.
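
To make the idea a bit more concrete, below is a minimal sketch of such a
back-end abstraction. This is illustrative only: none of these class or
method names exist in oslo.config today.

    class ConfigBackend(object):
        """Hypothetical source of configuration values."""

        def get(self, group, name):
            """Return the raw value for (group, name), or None."""
            raise NotImplementedError

    class DictBackend(ConfigBackend):
        """Trivial in-memory backend, standing in for SQL/k-v drivers."""

        def __init__(self, values):
            self._values = values

        def get(self, group, name):
            return self._values.get((group, name))

    backend = DictBackend({('DEFAULT', 'debug'): 'true'})
    print(backend.get('DEFAULT', 'debug'))  # -> 'true'

The interesting part would then be teaching cfg.CONF to consult such a
backend before (or instead of) the parsed configuration files.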

--
Best regards,
Oleg Gelbukh


On Thu, Jan 9, 2014 at 1:23 PM, Flavio Percoco fla...@redhat.com wrote:

 On 08/01/14 17:13 -0800, Nachi Ueno wrote:

 Hi folks

 OpenStack process tend to have many config options, and many hosts.
 It is a pain to manage this tons of config options.
 To centralize this management helps operation.

 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc

 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.

 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration

 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized

 I'm very appreciate any comments on this.



 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.

 I like the idea of a config registry but as mentioned above, IMHO it's
 to live under its own project.

 That's all I've got for now,
 FF

 --
 @flaper87
 Flavio Percoco



Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Oleg Gelbukh
On Thu, Jan 9, 2014 at 10:53 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 Thank you for your input.

 The key difference from external configuration system (Chef, puppet
 etc) is integration with
 openstack services.
 There are cases a process should know the config value in the other hosts.
 If we could have centralized config storage api, we can solve this issue.


Technically, that is already implemented in TripleO: configuration parameters
are stored in Heat template metadata, and the os-*-config scripts apply
changes to those parameters on the nodes. I'm not sure if that could help
solve the use case you describe, as overcloud nodes probably won't have
access to the undercloud Heat server. But it counts as centralized storage
of configuration information, from my standpoint.
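
For reference, the pattern on the nodes looks roughly like this (a much
simplified sketch of what the os-collect-config/os-apply-config pair does;
the cache path and metadata keys here are illustrative, not exact):

    import json

    # os-collect-config periodically fetches the Heat metadata and caches
    # it on disk as JSON (illustrative path)
    with open('/var/lib/os-collect-config/heat_local.json') as f:
        metadata = json.load(f)

    # os-apply-config then renders the actual service config files
    # from these values via templates
    debug = metadata['nova']['debug']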

--
Best regards,
Oleg Gelbukh


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Oleg Gelbukh
On Fri, Jan 10, 2014 at 12:18 AM, Nachi Ueno na...@ntti3.com wrote:

 2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
  How do you see these interactions defined?  For instance, if I deploy
  a custom driver for Neutron, does that mean I also have to patch
  everything that will be talking to it (Nova, for instance) so they can
  agree on compatibility?

 Nova / Neutron talks with neturon api. so it should be OK because we
 are talking care
 backward compatibility in the REST API.

 The point in my example is neutron server + neutron l2 agent sync.


What about doing it the other way round, i.e. allowing one service to query
certain configuration parameter(s) from another via RPC? I believe I've
seen such a proposal in Nova blueprints quite some time ago, but with no
actual implementation.
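
As a rough sketch of what I mean, on top of oslo.messaging (the 'config'
topic, the endpoint and the method name are invented for this example, only
top-level options are handled, and the usual messaging transport
configuration is assumed to be in place):

    import oslo_messaging as messaging
    from oslo_config import cfg

    # server side: expose selected local options over RPC
    class ConfigEndpoint(object):
        def get_config(self, ctxt, name):
            return getattr(cfg.CONF, name)

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='config', server='host1')
    server = messaging.get_rpc_server(transport, target, [ConfigEndpoint()],
                                      executor='blocking')
    # server.start() would run inside the service

    # client side: ask host1 for one of its options
    client = messaging.RPCClient(transport, messaging.Target(topic='config'))
    value = client.prepare(server='host1').call({}, 'get_config',
                                                name='firewall_driver')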

--
Best regards,
Oleg Gelbukh



  Also, I know that I run what is probably a more complicated cluster
  than most production clusters, but I can't think of very many
  configuration options that are globally in sync across the cluster.
  Hypervisors, network drivers, mysql servers, API endpoints...they all
  might vary between hosts/racks/etc.

 To support such heterogeneous environment is a purpose of this bp.
 Configuration dependency is pain point for me, and it's get more worse
 if the env is heterogeneous.

 I have also some experience to run openstack clusters, but it is still
 pain for me..

 My experience is something like this
 # Wow, new release! ohh this chef repo didn't supported..
 # hmm i should modify chef recipe.. hmm debug.. debug..


  On Thu, Jan 9, 2014 at 11:08 AM, Nachi Ueno na...@ntti3.com wrote:
  Hi Jeremy
 
  Don't you think it is burden for operators if we should choose correct
  combination of config for multiple nodes even if we have chef and
  puppet?
 
  If we have some constraint or dependency in configurations, such logic
  should be in openstack source code.
  We can solve this issue if we have a standard way to know the config
  value of other process in the other host.
 
  Something like this.
  self.conf.host('host1').firewall_driver
 
  Then we can have a chef/or file baed config backend code for this for
 example.
 
  Best
  Nachi
 
 
  2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
  +1 to Jay.  Existing tools are both better suited to the job and work
  quite well in their current state.  To address Nachi's first example,
  there's nothing preventing a Nova node in Chef from reading Neutron's
  configuration (either by using a (partial) search or storing the
  necessary information in the environment rather than in roles).  I
  assume Puppet offers the same.  Please don't re-invent this hugely
  complicated wheel.
 
  On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com wrote:
  On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
  On 08/01/14 17:13 -0800, Nachi Ueno wrote:
  Hi folks
  
  OpenStack process tend to have many config options, and many hosts.
  It is a pain to manage this tons of config options.
  To centralize this management helps operation.
  
  We can use chef or puppet kind of tools, however
  sometimes each process depends on the other processes configuration.
  For example, nova depends on neutron configuration etc
  
  My idea is to have config server in oslo.config, and let cfg.CONF
 get
  config from the server.
  This way has several benefits.
  
  - We can get centralized management without modification on each
  projects ( nova, neutron, etc)
  - We can provide horizon for configuration
  
  This is bp for this proposal.
  https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
  
  I'm very appreciate any comments on this.
 
  I've thought about this as well. I like the overall idea of having a
  config server. However, I don't like the idea of having it within
  oslo.config. I'd prefer oslo.config to remain a library.
 
  Also, I think it would be more complex than just having a server that
  provides the configs. It'll need authentication like all other
  services in OpenStack and perhaps even support of encryption.
 
  I like the idea of a config registry but as mentioned above, IMHO
 it's
  to live under its own project.
 
  Hi Nati and Flavio!
 
  So, I'm -1 on this idea, just because I think it belongs in the realm
 of
  configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
  tools are built to manage multiple configuration files and changes in
  them. Adding a config server would dramatically change the way that
  configuration management tools would interface with OpenStack
 services.
  Instead of managing the config file templates as all of the tools
  currently do, the tools would need to essentially need to forego the
  tried-and-true INI files and instead write a bunch of code in order to
  deal with REST API set/get operations for changing configuration data.
 
  In summary, while I agree that OpenStack services have an absolute TON
  of configurability -- for good and bad -- there are ways to improve

Re: [openstack-dev] [Ironic]Communication between Nova and Ironic

2013-12-29 Thread Oleg Gelbukh
Leslie,

This discussion is very interesting indeed :)

The current approach to auto-scaling is that it is decided upon by the Heat
service. Heat templates have special mechanisms to trigger auto-scaling of
resources when certain conditions are met.
In combination with Ironic, this has powerful potential for the
OpenStack-on-OpenStack use case you're describing.

Basically, Heat holds all the orchestration functions in OpenStack. I see it
as a natural place for other interesting things, like auto-migration of
workloads and so on.

--
Best regards,
Oleg Gelbukh


On Sun, Dec 29, 2013 at 8:03 AM, LeslieWang wqyu...@hotmail.com wrote:

 Hi Clint,

 The current ironic call is for adding/deleting a baremetal server, not for
 auto-scaling. As we discussed in another thread, what I'm thinking of is
 auto-scaling baremetal servers. In my mind, the logic can be:
   1. Nova scheduler determines scale up one baremetal server.
   2. Nova scheduler notify ironic (or other API?) to power up the server.
   3. if ironic (or other service?) returns success, nova scheduler can
 call ironic to add the baremetal server into cluster.

 Of course, this is not a sole way for auto-scale. As you specified in
 another thread, auto-scale can be triggered from under-cloud or other
 monitoring service. Just try to bring up the interesting discussion. :-)

 Best Regards
 Leslie

  From: cl...@fewbar.com
  To: openstack-dev@lists.openstack.org
  Date: Sat, 28 Dec 2013 13:40:08 -0800
  Subject: Re: [openstack-dev] [Ironic]Communication between Nova and
 Ironic

 
  Excerpts from LeslieWang's message of 2013-12-24 03:01:51 -0800:
   Hi Oleg,
  
   Thanks for your promptly reply and detail explanation. Merry Christmas
 and wish you have a happy new year!
  
   At the same time, I think we can discuss more on Ironic is for backend
 driver for nova. I'm new in ironic. Per my understanding, the purpose of
 bare metal as a backend driver is to solve the problem that some appliance
 systems can not be virtualized, but operator still wants same cloud
 management system to manage these systems. With the help of ironic,
 operator can achieve the goal, and use one openstack to manage these
 systems as VMs, create, delete, deploy image etc. this is one typical use
 case.
  
   In addition, actually I'm thinking another interesting use case.
 Currently openstack requires ops to pre-install all servers. TripleO try to
 solve this problem and bootstrap openstack using openstack. However, what
 is missing here is dynamic power on VM/switches/storage only. Imagine
 initially lab only had one all-in-one openstack controller. The whole work
 flow can be:
   1. Users request one VM or baremetal server through portal.
   2. Horizon sends request to nova-scheduler
   3. Nova-scheduler finds no server, then invoke ironic api to power on
 one through IPMI, and install either hyper visor or appliance directly.
   4. If it need create VM, Nova-scheduler will find one compute node,
 and send message for further processing.
  
   Based on this use case, I'm thinking whether it makes sense to embed
 this ironic invokation logic in nova-scheduler, or another approach is as
 overall orchestration manager, TripleO project has a TripleO-scheduler to
 firstly intercept the message, invoke ironic api, then heat api which calls
 nova api, neutron api, storage api. In this case, TripleO only powers on
 baremetal server running VM, nova is responsible to power on baremetal
 server running appliance system. Sounds like latter one is a good solution,
 however the prior one also works. So can you please comment on it? Thanks!
  
 
  I think this basically already works the way you desire. The scheduler
  _does_ decide to call ironic, it just does so through nova-compute RPC
  calls. That is important, as this allows the scheduler to hand-off the
  entire work-flow of provisioning a machine to nova-compute in the exact
  same way as is done for regular cloud workloads.
 


Re: [openstack-dev] [oslo][nova] oslo common.service vs. screen and devstack

2013-12-20 Thread Oleg Gelbukh
I'd +1 Clint on this. I believe that the only right way to handle SIGHUP
for a process running in the foreground is to terminate.
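
For illustration, a minimal sketch of that policy in plain Python (this is
not the actual oslo code, and the re-exec below is just one possible
daemon-mode restart):

    import os
    import signal
    import sys

    def _restart():
        # one way to "restart" in daemon mode: re-exec the same process
        os.execv(sys.executable, [sys.executable] + sys.argv)

    def _on_sighup(signum, frame):
        if os.isatty(sys.stdout.fileno()):
            # foreground (the devstack/screen case): treat SIGHUP as shutdown
            sys.exit(0)
        _restart()

    signal.signal(signal.SIGHUP, _on_sighup)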

--
Best regards,
Oleg Gelbukh


On Fri, Dec 20, 2013 at 10:54 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Sean Dague's message of 2013-12-19 16:33:12 -0800:
  So a few people had been reporting recently that unstack no longer stops
  nova processes, which I only got around to looking at today. It turns
  out the new common.service stack from oslo takes SIGHUP and treats it as
  a restart. Which isn't wrong, but is new, and is incompatible with
  screen (the way we use it). Because we use -X stuff, the resulting -X
  quit sends SIGHUP to the child processes.
 
  So the question is, are we definitely in a state now where nova services
  can and do want to support SIGHUP as restart?
 
  If so, is there interest in being able to disable that behavior at start
  time, so we can continue with a screen based approach as well?
 
  If not, we'll need to figure out another way to approach the shutdown in
  devstack. Which is fine, just work that wasn't expected.
 

 Perhaps if the process is running in the foreground, as it does in
 devstack, it should still terminate on SIGHUP rather than restart.



Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-20 Thread Oleg Gelbukh
Hi everyone,

I'm sorry for being late to the thread, but what about the baremetal driver?
Should it support get_diagnostics() as well?

--
Best regards,
Oleg Gelbukh


On Thu, Dec 19, 2013 at 8:21 PM, Vladik Romanovsky 
vladik.romanov...@enovance.com wrote:

 Ah, I think I've responded too fast, sorry.

 meter-list provides a list of various measurements that are being done per
 resource.
 sample-list provides a list of samples per every meter: ceilometer
 sample-list --meter cpu_util -q resource_id=vm_uuid
 These samples can be aggregated over a period of time per every meter and
 resource:
 ceilometer statistics -m cpu_util -q
 'timestamp>START;timestamp<=END;resource_id=vm_uuid' --period 3600

 Vladik



 - Original Message -
  From: Daniel P. Berrange berra...@redhat.com
  To: Vladik Romanovsky vladik.romanov...@enovance.com
  Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, John
  Garbutt j...@johngarbutt.com
  Sent: Thursday, 19 December, 2013 10:37:27 AM
  Subject: Re: [openstack-dev] [nova] VM diagnostics - V3 proposal
 
  On Thu, Dec 19, 2013 at 03:47:30PM +0100, Vladik Romanovsky wrote:
   I think it was:
  
   ceilometer sample-list -m cpu_util -q 'resource_id=vm_uuid'
 
  Hmm, a standard devstack deployment of ceilometer doesn't seem to
  record any performance stats at all - just shows me the static
  configuration parameters :-(
 
  ceilometer meter-list -q 'resource_id=296b22c6-2a4d-4a8d-a7cd-2d73339f9c70'
 +---------------------+-------+----------+--------------------------------------+----------------------------------+----------------------------------+
 | Name                | Type  | Unit     | Resource ID                          | User ID                          | Project ID                       |
 +---------------------+-------+----------+--------------------------------------+----------------------------------+----------------------------------+
 | disk.ephemeral.size | gauge | GB       | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
 | disk.root.size      | gauge | GB       | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
 | instance            | gauge | instance | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
 | instance:m1.small   | gauge | instance | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
 | memory              | gauge | MB       | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
 | vcpus               | gauge | vcpu     | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
 +---------------------+-------+----------+--------------------------------------+----------------------------------+----------------------------------+
 
 
  If the admin user can't rely on ceilometer guaranteeing availability of
  the performance stats at all, then I think having an API in nova to report
  them is in fact justifiable. In fact, it is probably justifiable no matter
  what, as a fallback way to check what VMs are doing in the face of failure
  of ceilometer / part of the cloud infrastructure.
 
  Daniel
  --
  |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
  |: http://libvirt.org -o- http://virt-manager.org :|
  |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
  |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
 



Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-20 Thread Oleg Gelbukh
Matt,

My understanding is that there will be a nova.virt.baremetal.ironic driver
in Nova that will talk to the Ironic API to manage bare-metal instances. So
Ironic will eventually be the one actually providing the diagnostics data
about bare-metal instances via its API.

Hope someone will correct me if I'm wrong.

--
Best regards,
Oleg Gelbukh


On Fri, Dec 20, 2013 at 7:12 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:



 On Friday, December 20, 2013 3:57:15 AM, Daniel P. Berrange wrote:

 On Fri, Dec 20, 2013 at 12:56:47PM +0400, Oleg Gelbukh wrote:

 Hi everyone,

 I'm sorry for being late to the thread, but what about baremetal driver?
 Should it support the get_diagnostics() as well?


 Of course, where practical, every driver should aim to support every
 method in the virt driver class API.

 Regards,
 Daniel


 Although isn't the baremetal driver moving to ironic, or there is an
 ironic driver moving into nova?  I'm a bit fuzzy on what's going on there.
  Point is, if we're essentially halting feature development on the nova
 baremetal driver I'd hold off on implementing get_diagnostics there for now.

 --

 Thanks,

 Matt Riedemann




Re: [openstack-dev] VMware VCenter Driver

2013-12-18 Thread Oleg Gelbukh
Ray,

Actually, you can. There is an ESX driver in OpenStack as well as the
vCenter one. However, it does not have the benefits of vSphere/vCenter,
like DRS.

It would probably help if you described your use case and specified why you
want to identify every ESXi host.

--
Best regards,
Oleg Gelbukh
Mirantis Inc.

On Wed, Dec 18, 2013 at 10:21 AM, Ray Sun xiaoq...@gmail.com wrote:

 Hi Stackers,
 I just looked into the VMware vCenter driver. It seems it manages a vCenter
 cluster as a single compute node, even if it contains more than one physical
 server. It's not very convenient to know what real resources I have in
 my cluster.

 Is there any reason why we don't identify every ESXi host in OpenStack?

 Thanks.

 Best Regards
 -- Ray



Re: [openstack-dev] Healthnmon

2013-12-18 Thread Oleg Gelbukh
I would second that question. It looks like the integration plan didn't work
out, and Healthnmon development has either stalled or gone quiet.

Does anyone have information on that?

--
Best regards,
Oleg Gelbukh
Mirantis Inc.


On Tue, Dec 17, 2013 at 11:29 PM, David S Taylor da...@bluesunrise.com wrote:

 Could anyone tell me about the status of the Healthnmon project [1]? There
 is a proposal [2] to integrate Ceilometer and Healthnmon, which is about 1
 year old. I am interested in developing a monitoring solution, and
 discovered that there may already be a project and community in place
 around OpenStack monitoring, or not 

 [1] https://github.com/stackforge/healthnmon/tree/master/healthnmon
 [2] https://wiki.openstack.org/wiki/Ceilometer/CeilometerAndHealthnmon

 Thanks,

 --
 David S Taylor
 CTO, Bluesunrise
 707 529-9194
 da...@bluesunrise.com




Re: [openstack-dev] [Keystone] Store quotas in Keystone

2013-12-03 Thread Oleg Gelbukh
Chmouel,

We reviewed the design of this feature at the summit with the CERN and HP
teams. Centralized quota storage in Keystone is an anticipated feature, but
there are concerns about adding per-service quota enforcement logic to
Keystone. The agreed solution is to add storage for quota numbers to
Keystone, plus a mechanism that will notify services about changes to their
quotas. Each service, in turn, will update its quota cache and apply the new
quota value according to its own enforcement rules.
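
As a sketch of the service side, a notification listener along these lines
could maintain the local quota cache (this uses oslo.messaging; the event
type and payload layout are invented for illustration, since the real ones
are still to be defined):

    import oslo_messaging as messaging
    from oslo_config import cfg

    QUOTA_CACHE = {}

    class QuotaEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type != 'quota.updated':  # hypothetical event type
                return
            # e.g. payload = {'project_id': ..., 'resource': 'cores', 'limit': 40}
            key = (payload['project_id'], payload['resource'])
            QUOTA_CACHE[key] = payload['limit']

    transport = messaging.get_notification_transport(cfg.CONF)
    targets = [messaging.Target(topic='notifications')]
    listener = messaging.get_notification_listener(transport, targets,
                                                   [QuotaEndpoint()])
    # listener.start() would run inside each consuming service

Enforcement itself stays in the services, exactly as agreed.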

More detailed capture of the discussion on etherpad:
https://etherpad.openstack.org/p/CentralizedQuotas

Regarding this particular change, we plan to reuse this API extension code,
extended to support domain-level quotas as well.

--
Best regards,
Oleg Gelbukh
Mirantis Labs


On Mon, Dec 2, 2013 at 5:39 PM, Chmouel Boudjnah chmo...@enovance.com wrote:

 Hello,

 I was wondering what was the status of Keystone being the central place
 across all OpenStack projects for quotas.

 There is already an implementation from Dmitry here :

 https://review.openstack.org/#/c/40568/

 but hasn't seen activities since october waiting for icehouse development
 to be started and a few bits to be cleaned and added (i.e: the sqlite
 migration).

 It would be great if we can get this rekicked to get that for icehouse-2.

 Thanks,
 Chmouel.


