Re: [openstack-dev] [fuel] discovery and deploy a compute node automatically

2016-05-25 Thread jason
Hi Aleksandr,
Thanks for the examples! That will really help me a lot.
On May 25, 2016 6:26 PM, "Aleksandr Didenko"  wrote:

> Hi,
>
> +1 to Igor. It should be easily doable via some sort of "watcher" script
> (run as a daemon or under cron); that script should:
>
> - watch for new nodes in 'discover' state. CLI example:
>   fuel nodes
> - assign new nodes to env with compute role. CLI example:
>   fuel --env $ENV_ID node set --node $NEW_NODE_ID --role compute
> - update networks assignment for new node. CLI example:
>   fuel node --node $NEW_NODE_ID --network --download
>   # edit /root/node_$NEW_NODE_ID/interfaces.yaml
>   fuel node --node $NEW_NODE_ID --network --upload
> - deploy changes. CLI example:
>   fuel deploy-changes --env $ENV_ID
>
> Regards,
> Alex
>
> On Wed, May 25, 2016 at 12:03 PM, Igor Kalnitsky 
> wrote:
>
>> Hey Jason,
>>
>> What do you mean by "automatically"?
>>
>> You need to assign the "compute" role to that discovered node and hit the
>> "Deploy Changes" button. If you really want to deploy any newly discovered
>> node automatically, I think you can create an automation script and put
>> it under cron.
>>
>> Hope it helps,
>> Igor
>>
>> > On May 25, 2016, at 12:33, jason  wrote:
>> >
>> > Hi All,
>> >
>> > Is there any way for Fuel to deploy a newly discovered node as a
>> > compute node automatically? I followed the OpenStack docs for Fuel but
>> > did not find an answer.
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>


Re: [openstack-dev] [fuel] discovery and deploy a compute node automatically

2016-05-25 Thread jason
Hi Igor,

Thanks, and yes, you got my point: by "automatically" I mean that after a new
node has been discovered, the deployment process starts automatically.
Cron may help, but what if I need more info to check whether that newly
discovered node deserves to be a compute node or not? Can the cron script get
more information about the node's characteristics? For example, "if the new
node has the right number of NIC interfaces, the right NUMA settings, etc.,
then make it a compute node with the same configuration as other nodes with
the same characteristics".
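Such a check could be sketched roughly as below — purely illustrative, and it assumes the watcher can fetch a node's JSON description (e.g. from Nailgun's /api/nodes endpoint); the exact JSON field names vary by release, so treat the "interfaces"/"mac" shape here as a hypothetical:

```shell
#!/bin/sh
# Hypothetical filter: decide from a node's JSON description whether it
# qualifies as a compute node. The JSON shape (an "interfaces" array whose
# entries carry a "mac" key) is an assumption modeled on Nailgun's
# /api/nodes output -- verify it against your Fuel release.

# Usage: <node-json-on-stdin> qualifies_as_compute [min_nics]
qualifies_as_compute() {
    min_nics=${1:-4}
    # Rough NIC count: number of "mac" keys seen in the JSON on stdin.
    nics=$(grep -o '"mac"' | wc -l | tr -d ' ')
    [ "$nics" -ge "$min_nics" ]
}

# Example invocation (hypothetical endpoint and token):
#   curl -s -H "X-Auth-Token: $TOKEN" \
#        "http://fuel-master:8000/api/nodes/$NODE_ID" \
#     | qualifies_as_compute 4 && echo "make it a compute node"
```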
On May 25, 2016 6:03 PM, "Igor Kalnitsky"  wrote:

> Hey Jason,
>
> What do you mean by "automatically"?
>
> You need to assign the "compute" role to that discovered node and hit the
> "Deploy Changes" button. If you really want to deploy any newly discovered
> node automatically, I think you can create an automation script and put it
> under cron.
>
> Hope it helps,
> Igor
>
> > On May 25, 2016, at 12:33, jason  wrote:
> >
> > Hi All,
> >
> > Is there any way for Fuel to deploy a newly discovered node as a compute
> > node automatically? I followed the OpenStack docs for Fuel but did not
> > find an answer.
> >


Re: [openstack-dev] [fuel] discovery and deploy a compute node automatically

2016-05-25 Thread Aleksandr Didenko
Hi,

+1 to Igor. It should be easily doable via some sort of "watcher" script
(run as a daemon or under cron); that script should:

- watch for new nodes in 'discover' state. CLI example:
  fuel nodes
- assign new nodes to env with compute role. CLI example:
  fuel --env $ENV_ID node set --node $NEW_NODE_ID --role compute
- update networks assignment for new node. CLI example:
  fuel node --node $NEW_NODE_ID --network --download
  # edit /root/node_$NEW_NODE_ID/interfaces.yaml
  fuel node --node $NEW_NODE_ID --network --upload
- deploy changes. CLI example:
  fuel deploy-changes --env $ENV_ID
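The steps above can be sketched as a small script run from cron — a minimal, illustrative sketch only: it assumes `fuel nodes` prints a table whose first two columns are "id | status", so adjust the parsing for your Fuel version:

```shell
#!/bin/sh
# Hypothetical cron watcher sketch for the steps above.

# Print IDs of nodes still in the 'discover' state.
# Assumption: `fuel nodes` output is a pipe-separated table
# with "id | status | ..." columns.
discovered_nodes() {
    fuel nodes 2>/dev/null \
        | awk -F'|' '$2 ~ /discover/ {gsub(/ /, "", $1); print $1}'
}

# Assign the compute role to one node and deploy the environment.
# Usage: assign_and_deploy <env_id> <node_id>
assign_and_deploy() {
    fuel --env "$1" node set --node "$2" --role compute
    # Network assignment is left at the environment defaults here; see
    # the --network --download/--upload steps above to customize it.
    fuel deploy-changes --env "$1"
}
```

Wire the two functions together in a loop and run it from cron on the Fuel master, e.g. `*/5 * * * * /root/watcher.sh` (paths and schedule are just examples).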

Regards,
Alex

On Wed, May 25, 2016 at 12:03 PM, Igor Kalnitsky 
wrote:

> Hey Jason,
>
> What do you mean by "automatically"?
>
> You need to assign the "compute" role to that discovered node and hit the
> "Deploy Changes" button. If you really want to deploy any newly discovered
> node automatically, I think you can create an automation script and put it
> under cron.
>
> Hope it helps,
> Igor
>
> > On May 25, 2016, at 12:33, jason  wrote:
> >
> > Hi All,
> >
> > Is there any way for Fuel to deploy a newly discovered node as a compute
> > node automatically? I followed the OpenStack docs for Fuel but did not
> > find an answer.
> >


Re: [openstack-dev] [fuel] discovery and deploy a compute node automatically

2016-05-25 Thread Igor Kalnitsky
Hey Jason,

What do you mean by "automatically"?

You need to assign the "compute" role to that discovered node and hit the
"Deploy Changes" button. If you really want to deploy any newly discovered
node automatically, I think you can create an automation script and put it
under cron.

Hope it helps,
Igor

> On May 25, 2016, at 12:33, jason  wrote:
> 
> Hi All,
> 
> Is there any way for Fuel to deploy a newly discovered node as a compute
> node automatically? I followed the OpenStack docs for Fuel but did not find
> an answer.
> 




Re: [openstack-dev] [Fuel] [Plugins] Netconfig tasks changes

2016-05-25 Thread Simon Pasquier
Hi Adam,
Maybe you want to look into network templates [1]? Although the
documentation is a bit sparse, it allows you to define flexible network
mappings.
BR,
Simon
[1]
https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates

On Wed, May 25, 2016 at 10:26 AM, Adam Heczko  wrote:

> Thanks Alex, I will experiment with it once again, although AFAIR it doesn't
> solve the thing I'd like to do.
> I'll come back to you later in case of any questions.
>
>
> On Wed, May 25, 2016 at 10:00 AM, Aleksandr Didenko  > wrote:
>
>> Hey Adam,
>>
>> in Fuel we have the following option (checkbox) on Network Setting tab:
>>
>> Assign public network to all nodes
>> When disabled, public network will be assigned to controllers only
>>
>> So if you uncheck it (by default it's unchecked) then public network and
>> 'br-ex' will exist on controllers only. Other nodes won't even have
>> "Public" network on node interface configuration UI.
>>
>> Regards,
>> Alex
>>
>> On Wed, May 25, 2016 at 9:43 AM, Adam Heczko 
>> wrote:
>>
>>> Hello Alex,
>>> I have a question about the proposed changes.
>>> Is it possible to introduce a new VLAN and an associated bridge only for
>>> controllers?
>>> I am thinking about a DMZ use case and the possibility of exposing public
>>> IPs/VIPs and API endpoints on controllers on a completely separate L2
>>> network (segment VLAN/bridge) not present on any nodes other than
>>> controllers.
>>> Thanks.
>>>
>>> On Wed, May 25, 2016 at 9:28 AM, Aleksandr Didenko <
>>> adide...@mirantis.com> wrote:
>>>
 Hi folks,

 we had to revert those changes [0] since it's impossible to properly
 handle two different netconfig tasks for multi-role nodes. So everything
 stays as it was before - we have a single task, 'netconfig', to configure
 the network for all roles, and you don't need to change anything in your
 plugins. Sorry for the inconvenience.

 Our current plan for fixing network idempotency is to keep one task but
 change the 'cross-depends' parameter to a yaql_exp. This will allow us to
 use a single 'netconfig' task for all roles, but at the same time we'll be
 able to properly order it: netconfig on non-controllers will be executed
 only after the 'virtual_ips' task.

 Regards,
 Alex

 [0] https://review.openstack.org/#/c/320530/


 On Thu, May 19, 2016 at 2:36 PM, Aleksandr Didenko <
 adide...@mirantis.com> wrote:

> Hi all,
>
> please be aware that now we have two netconfig tasks (in Fuel 9.0+):
>
> - netconfig-controller - executed on controllers only
> - netconfig - executed on all other nodes
>
> The puppet manifest is the same, but the tasks are different. We had to do
> this [0] in order to fix network idempotency issues [1].
>
> So if you have 'netconfig' requirements in your plugin's tasks, please
> make sure to add 'netconfig-controller' as well, to work properly on
> controllers.
>
> Regards,
> Alex
>
> [0] https://bugs.launchpad.net/fuel/+bug/1541309
> [1]
> https://review.openstack.org/#/q/I229957b60c85ed94c2d0ba829642dd6e465e9eca,n,z
>



>>>
>>>
>>> --
>>> Adam Heczko
>>> Security Engineer @ Mirantis Inc.
>>>
>
>
> --
> Adam Heczko
> Security Engineer @ Mirantis Inc.
>


Re: [openstack-dev] [Fuel] [Plugins] Netconfig tasks changes

2016-05-25 Thread Adam Heczko
Thanks Alex, I will experiment with it once again, although AFAIR it doesn't
solve the thing I'd like to do.
I'll come back to you later in case of any questions.


On Wed, May 25, 2016 at 10:00 AM, Aleksandr Didenko 
wrote:

> Hey Adam,
>
> in Fuel we have the following option (checkbox) on Network Setting tab:
>
> Assign public network to all nodes
> When disabled, public network will be assigned to controllers only
>
> So if you uncheck it (by default it's unchecked) then public network and
> 'br-ex' will exist on controllers only. Other nodes won't even have
> "Public" network on node interface configuration UI.
>
> Regards,
> Alex
>
> On Wed, May 25, 2016 at 9:43 AM, Adam Heczko  wrote:
>
>> Hello Alex,
>> I have a question about the proposed changes.
>> Is it possible to introduce a new VLAN and an associated bridge only for
>> controllers?
>> I am thinking about a DMZ use case and the possibility of exposing public
>> IPs/VIPs and API endpoints on controllers on a completely separate L2
>> network (segment VLAN/bridge) not present on any nodes other than
>> controllers.
>> Thanks.
>>
>> On Wed, May 25, 2016 at 9:28 AM, Aleksandr Didenko > > wrote:
>>
>>> Hi folks,
>>>
>>> we had to revert those changes [0] since it's impossible to properly
>>> handle two different netconfig tasks for multi-role nodes. So everything
>>> stays as it was before - we have a single task, 'netconfig', to configure
>>> the network for all roles, and you don't need to change anything in your
>>> plugins. Sorry for the inconvenience.
>>>
>>> Our current plan for fixing network idempotency is to keep one task but
>>> change the 'cross-depends' parameter to a yaql_exp. This will allow us to
>>> use a single 'netconfig' task for all roles, but at the same time we'll be
>>> able to properly order it: netconfig on non-controllers will be executed
>>> only after the 'virtual_ips' task.
>>>
>>> Regards,
>>> Alex
>>>
>>> [0] https://review.openstack.org/#/c/320530/
>>>
>>>
>>> On Thu, May 19, 2016 at 2:36 PM, Aleksandr Didenko <
>>> adide...@mirantis.com> wrote:
>>>
 Hi all,

 please be aware that now we have two netconfig tasks (in Fuel 9.0+):

 - netconfig-controller - executed on controllers only
 - netconfig - executed on all other nodes

 The puppet manifest is the same, but the tasks are different. We had to do this
 [0] in order to fix network idempotency issues [1].

 So if you have 'netconfig' requirements in your plugin's tasks, please
 make sure to add 'netconfig-controller' as well, to work properly on
 controllers.

 Regards,
 Alex

 [0] https://bugs.launchpad.net/fuel/+bug/1541309
 [1]
 https://review.openstack.org/#/q/I229957b60c85ed94c2d0ba829642dd6e465e9eca,n,z

>>>
>>>
>>>
>>
>>
>> --
>> Adam Heczko
>> Security Engineer @ Mirantis Inc.
>>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.


Re: [openstack-dev] [Fuel] [Plugins] Netconfig tasks changes

2016-05-25 Thread Aleksandr Didenko
Hey Adam,

in Fuel we have the following option (checkbox) on Network Setting tab:

Assign public network to all nodes
When disabled, public network will be assigned to controllers only

So if you uncheck it (by default it's unchecked) then public network and
'br-ex' will exist on controllers only. Other nodes won't even have
"Public" network on node interface configuration UI.

Regards,
Alex

On Wed, May 25, 2016 at 9:43 AM, Adam Heczko  wrote:

> Hello Alex,
> I have a question about the proposed changes.
> Is it possible to introduce a new VLAN and an associated bridge only for
> controllers?
> I am thinking about a DMZ use case and the possibility of exposing public
> IPs/VIPs and API endpoints on controllers on a completely separate L2
> network (segment VLAN/bridge) not present on any nodes other than
> controllers.
> Thanks.
>
> On Wed, May 25, 2016 at 9:28 AM, Aleksandr Didenko 
> wrote:
>
>> Hi folks,
>>
>> we had to revert those changes [0] since it's impossible to properly
>> handle two different netconfig tasks for multi-role nodes. So everything
>> stays as it was before - we have a single task, 'netconfig', to configure
>> the network for all roles, and you don't need to change anything in your
>> plugins. Sorry for the inconvenience.
>>
>> Our current plan for fixing network idempotency is to keep one task but
>> change the 'cross-depends' parameter to a yaql_exp. This will allow us to
>> use a single 'netconfig' task for all roles, but at the same time we'll be
>> able to properly order it: netconfig on non-controllers will be executed
>> only after the 'virtual_ips' task.
>>
>> Regards,
>> Alex
>>
>> [0] https://review.openstack.org/#/c/320530/
>>
>>
>> On Thu, May 19, 2016 at 2:36 PM, Aleksandr Didenko > > wrote:
>>
>>> Hi all,
>>>
>>> please be aware that now we have two netconfig tasks (in Fuel 9.0+):
>>>
>>> - netconfig-controller - executed on controllers only
>>> - netconfig - executed on all other nodes
>>>
>>> The puppet manifest is the same, but the tasks are different. We had to do this
>>> [0] in order to fix network idempotency issues [1].
>>>
>>> So if you have 'netconfig' requirements in your plugin's tasks, please
>>> make sure to add 'netconfig-controller' as well, to work properly on
>>> controllers.
>>>
>>> Regards,
>>> Alex
>>>
>>> [0] https://bugs.launchpad.net/fuel/+bug/1541309
>>> [1]
>>> https://review.openstack.org/#/q/I229957b60c85ed94c2d0ba829642dd6e465e9eca,n,z
>>>
>>
>>
>
>
> --
> Adam Heczko
> Security Engineer @ Mirantis Inc.
>


Re: [openstack-dev] [Fuel] [Plugins] Netconfig tasks changes

2016-05-25 Thread Adam Heczko
Hello Alex,
I have a question about the proposed changes.
Is it possible to introduce a new VLAN and an associated bridge only for
controllers?
I am thinking about a DMZ use case and the possibility of exposing public
IPs/VIPs and API endpoints on controllers on a completely separate L2 network
(segment VLAN/bridge) not present on any nodes other than controllers.
Thanks.

On Wed, May 25, 2016 at 9:28 AM, Aleksandr Didenko 
wrote:

> Hi folks,
>
> we had to revert those changes [0] since it's impossible to properly handle
> two different netconfig tasks for multi-role nodes. So everything stays as
> it was before - we have a single task, 'netconfig', to configure the network
> for all roles, and you don't need to change anything in your plugins. Sorry
> for the inconvenience.
>
> Our current plan for fixing network idempotency is to keep one task but
> change the 'cross-depends' parameter to a yaql_exp. This will allow us to
> use a single 'netconfig' task for all roles, but at the same time we'll be
> able to properly order it: netconfig on non-controllers will be executed
> only after the 'virtual_ips' task.
>
> Regards,
> Alex
>
> [0] https://review.openstack.org/#/c/320530/
>
>
> On Thu, May 19, 2016 at 2:36 PM, Aleksandr Didenko 
> wrote:
>
>> Hi all,
>>
>> please be aware that now we have two netconfig tasks (in Fuel 9.0+):
>>
>> - netconfig-controller - executed on controllers only
>> - netconfig - executed on all other nodes
>>
>> The puppet manifest is the same, but the tasks are different. We had to do this
>> [0] in order to fix network idempotency issues [1].
>>
>> So if you have 'netconfig' requirements in your plugin's tasks, please
>> make sure to add 'netconfig-controller' as well, to work properly on
>> controllers.
>>
>> Regards,
>> Alex
>>
>> [0] https://bugs.launchpad.net/fuel/+bug/1541309
>> [1]
>> https://review.openstack.org/#/q/I229957b60c85ed94c2d0ba829642dd6e465e9eca,n,z
>>
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.


Re: [openstack-dev] [Fuel] [Plugins] Netconfig tasks changes

2016-05-25 Thread Aleksandr Didenko
Hi folks,

we had to revert those changes [0] since it's impossible to properly handle
two different netconfig tasks for multi-role nodes. So everything stays as
it was before - we have a single task, 'netconfig', to configure the network
for all roles, and you don't need to change anything in your plugins. Sorry
for the inconvenience.

Our current plan for fixing network idempotency is to keep one task but
change the 'cross-depends' parameter to a yaql_exp. This will allow us to use
a single 'netconfig' task for all roles, but at the same time we'll be able
to properly order it: netconfig on non-controllers will be executed only
after the 'virtual_ips' task.
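As a purely illustrative sketch of that plan (the yaql_exp below is a guess at the intent, not the merged syntax or implementation), the single task could look roughly like:

```yaml
# Illustrative only: one 'netconfig' task for all roles whose
# cross-depends is computed by a YAQL expression, so non-controllers
# wait for 'virtual_ips' while controllers do not. Field names follow
# the Fuel task DSL; the expression itself is hypothetical.
- id: netconfig
  type: puppet
  groups: ['/.*/']
  cross-depends:
    - yaql_exp: >
        switch($.roles.any($ = 'controller' or $ = 'primary-controller') => [],
               true => [{name => 'virtual_ips'}])
```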

Regards,
Alex

[0] https://review.openstack.org/#/c/320530/


On Thu, May 19, 2016 at 2:36 PM, Aleksandr Didenko 
wrote:

> Hi all,
>
> please be aware that now we have two netconfig tasks (in Fuel 9.0+):
>
> - netconfig-controller - executed on controllers only
> - netconfig - executed on all other nodes
>
> The puppet manifest is the same, but the tasks are different. We had to do this
> [0] in order to fix network idempotency issues [1].
>
> So if you have 'netconfig' requirements in your plugin's tasks, please
> make sure to add 'netconfig-controller' as well, to work properly on
> controllers.
>
> Regards,
> Alex
>
> [0] https://bugs.launchpad.net/fuel/+bug/1541309
> [1]
> https://review.openstack.org/#/q/I229957b60c85ed94c2d0ba829642dd6e465e9eca,n,z
>


Re: [openstack-dev] [fuel] release version numbers: let's use semvers

2016-05-24 Thread Roman Prykhodchenko
The only thing I would like to mention here is that the scripts for making
automatic releases on PyPI using OpenStack Infra won't work if the version is
not formatted according to semver.

- romcheg

> On May 24, 2016, at 14:34, Igor Kalnitsky  wrote:
> 
> Hey Zigo,
> 
> In the Python community there's PEP 440 [1], which defines a versioning
> scheme. The thing you should know is that the PEP __is not__ compatible with
> semver, and it's totally fine to have a two-component version.
> 
> So I don't think we should force version changes from a two-component to a
> three-component scheme, since it won't be compatible with semver anyway.
> 
> Thanks,
> Igor
> 
> [1]: https://www.python.org/dev/peps/pep-0440/
> 
> 





Re: [openstack-dev] [Fuel] YAQL console for master node

2016-05-24 Thread Aleksandr Didenko
Hi,

thank you Stas, a long-awaited tool :) I'm using it right now on the latest
Fuel 10.0; it's very helpful and saves a lot of time (switching between nodes
to test yaql for different roles is super cool).

Regards,
Alex


On Tue, May 24, 2016 at 12:50 PM, Stanislaw Bogatkin  wrote:

> Hi all,
>
> as you may know, new conditions for Fuel tasks were recently introduced (in
> the master and mitaka branches). Right after this I got several questions
> like 'hey, how can I check my new condition?' The answer could be 'use the
> standard yaql console', but it doesn't have the Fuel-internal yaql functions
> which are the foundation for Fuel task conditions. As a result, I have
> written a small utility to make it possible to check new conditions on the
> fly: [0]. It is still in development but usable for most tasks a developer
> usually needs when building a new yaql condition for a task.
>
> If you have any questions about using this tool or want to propose any
> improvement, don't hesitate to contact me. Or just fork it and do what you
> want - it is licensed under GPLv3. I would be glad if it helps someone.
>
> [0] https://github.com/sorrowless/fuyaql
>
> --
> with best regards,
> Stan.
>


Re: [openstack-dev] [fuel] release version numbers: let's use semvers

2016-05-24 Thread Igor Kalnitsky
Hey Zigo,

In the Python community there's PEP 440 [1], which defines a versioning
scheme. The thing you should know is that the PEP __is not__ compatible with
semver, and it's totally fine to have a two-component version.

So I don't think we should force version changes from a two-component to a
three-component scheme, since it won't be compatible with semver anyway.

Thanks,
Igor

[1]: https://www.python.org/dev/peps/pep-0440/
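This is easy to verify from a shell — a quick sketch, assuming the third-party Python 'packaging' library (which implements PEP 440) is installed:

```shell
# Both of these parse as valid PEP 440 versions; a two-component
# version such as '9.0' is fine under PEP 440, even though semver
# would require three components.
python3 -c "from packaging.version import Version; print(Version('9.0'))"
python3 -c "from packaging.version import Version; print(Version('9.0.1'))"
```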




Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-05-18 Thread Bogdan Dobrelya
On 05/17/2016 08:55 PM, Clint Byrum wrote:
> I missed your reply originally, so sorry for the 2 week lag...
> 
> Excerpts from Mike Bayer's message of 2016-04-30 15:14:05 -0500:
>>
>> On 04/30/2016 10:50 AM, Clint Byrum wrote:
>>> Excerpts from Roman Podoliaka's message of 2016-04-29 12:04:49 -0700:

>>>
>>> I'm curious why you think setting wsrep_sync_wait=1 wouldn't help.
>>>
>>> The exact example appears in the Galera documentation:
>>>
>>> http://galeracluster.com/documentation-webpages/mysqlwsrepoptions.html#wsrep-sync-wait
>>>
>>> The moment you say 'SET SESSION wsrep_sync_wait=1', the behavior should
>>> prevent the list problem you see, and it should not matter that it is
>>> a separate session, as that is the entire point of the variable:
>>
>>
>> we prefer to keep it off and just point applications at a single node 
>> using master/passive/passive in HAProxy, so that we don't have the 
>> unnecessary performance hit of waiting for all transactions to 
>> propagate; we just stick on one node at a time.   We've fixed a lot of 
>> issues in our config in ensuring that HAProxy definitely keeps all 
>> clients on exactly one Galera node at a time.
>>
> 
> Indeed, haproxy does a good job at shifting over rapidly. But it's not
> atomic, so you will likely have a few seconds where commits landed on
> the new demoted backup.
> 
>>>
>>> "When you enable this parameter, the node triggers causality checks in
>>> response to certain types of queries. During the check, the node blocks
>>> new queries while the database server catches up with all updates made
>>> in the cluster to the point where the check was begun. Once it reaches
>>> this point, the node executes the original query."
>>>
>>> In the active/passive case where you never use the passive node as a
>>> read slave, one could actually set wsrep_sync_wait=1 globally. This will
>>> cause a ton of lag while new queries happen on the new active and old
>>> transactions are still being applied, but that's exactly what you want,
>>> so that when you fail over, nothing proceeds until all writes from the
>>> original active node are applied and available on the new active node.
>>> It would help if your failover technology actually _breaks_ connections
>>> to a presumed dead node, so writes stop happening on the old one.
>>
>> If HAProxy is failing over from the master, which is no longer 
>> reachable, to another passive node, which is reachable, that means that 
>> master is partitioned and will leave the Galera primary component.   It 
>> also means all current database connections are going to be bounced off, 
>> which will cause errors for those clients either in the middle of an 
>> operation, or if a pooled connection is reused before it is known that 
>> the connection has been reset.  So failover is usually not an error-free 
>> situation in any case from a database client perspective and retry 
>> schemes are always going to be needed.
>>
> 
> There are some really big assumptions above, so I want to enumerate
> them:
> 
> 1. You assume that a partition between haproxy and a node is a partition
>between that node and the other galera nodes.
> 2. You assume that I never want to failover on purpose, smoothly.
> 
> In the case of (1), there are absolutely times where the load balancer
> thinks a node is dead, and it is quite happily chugging along doing its
> job. Transactions will be already committed in this scenario that have
> not propagated, and there may be more than one load balancer, and only
> one of them thinks that node is dead.
> 
> For the limited partition problem, having wsrep_sync_wait turned on
> would result in consistency, and the lag would only be minimal as the
> transactions propagate onto the new primary server.
> 
> For the multiple haproxy problem, lag would be _horrible_ on all nodes
> that are getting reads as long as there's another one getting writes,
> so a solution for making sure only one is specified would need to be
> developed using a leader election strategy. If haproxy is able to query
> wsrep status, that might be ideal, as galera will in fact elect leaders
> for you (assuming all of your wsrep nodes are also mysql nodes, which
> is not the case if you're using 2 nodes + garbd for example).
> 
> This is, however, a bit of a strawman, as most people don't need
> active/active haproxy nodes, so the simplest solution is to go
> active/passive on your haproxy nodes with something like UCARP handling
> the failover there. As long as they all use the same primary/backup
> ordering, then a new UCARP target should just result in using the same
> node, and a very tiny window for inconsistency and connection errors.
> 
> The second assumption is handled by leader election as well. If there's
> always one leader node that load balancers send traffic to, then one
> should be able to force promotion of a different node as the leader,
> and all new transactions and queries go to the new leader. The window
> for that 

Re: [openstack-dev] [Fuel][Plugins] Tasks ordering between plugins

2016-05-18 Thread Simon Pasquier
Hi Matthew,

Thanks for the reply.

On Tue, May 17, 2016 at 5:33 PM, Matthew Mosesohn 
wrote:

> Hi Simon,
>
> For 8.0 and earlier, I would deploy ElasticSearch before deploy_end
> and LMA collector after post_deploy_start
>
>
Unfortunately this isn't possible because the final bits of the
Elasticsearch configuration need to happen only once all the ES nodes have
joined the cluster.
And I didn't find a way (with MOS8) to run this task during the deployment
phase after both the primary ES and ES groups have been executed.


> For Mitaka and Newton releases, the task graph now skips dependencies
> that are not found for the role being processed. Now this "requires"
> dependency, which previously errored, will work.
>

Good to know!

Simon


>
> Best Regards,
> Matthew Mosesohn
>
> On Tue, May 17, 2016 at 6:27 PM, Simon Pasquier 
> wrote:
> > I'm resurrecting this thread because I didn't manage to find a satisfying
> > solution to deal with this issue.
> >
> > First let me provide more context on the use case. The
> Elasticsearch/Kibana
> > and LMA collector plugins need to synchronize their deployment. Without
> too
> > many details, here is the workflow when both plugins are deployed:
> > 1. [Deployment] Install the Elasticsearch/Kibana primary node.
> > 2. [Deployment] Install the other Elasticsearch/Kibana nodes.
> > 3. [Post-Deployment] Configure the Elasticsearch cluster.
> > 4. [Post-Deployment] Install and configure the LMA collector.
> >
> > Task #4 should happen after #3 so we've specified the dependency in
> > deployment_tasks.yaml [0] but when the Elasticsearch/Kibana plugin isn't
> > deployed in the same environment (which is a valid case), it fails [1]
> with:
> >
> > Tasks 'elasticsearch-kibana-configuration, influxdb-configuration' can't
> be
> > in requires|required_for|groups|tasks for [lma-backends] because they
> don't
> > exist in the graph
> >
> > To work around this restriction, we're using 'upload_nodes_info' as an
> anchor
> > task [2][3] since it is always present in the graph but this isn't really
> > elegant. Any suggestion to improve this?
> >
> > BR,
> > Simon
> >
> > [0]
> >
> https://github.com/openstack/fuel-plugin-lma-collector/blob/fd9337b43b6bdae6012f421e22847a1b0307ead0/deployment_tasks.yaml#L123-L139
> > [1] https://bugs.launchpad.net/lma-toolchain/+bug/1573087
> > [2]
> >
> https://github.com/openstack/fuel-plugin-lma-collector/blob/56ef5c42f4cd719958c4c2ac3fded1b08fe2b90f/deployment_tasks.yaml#L25-L37
> > [3]
> >
> https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/4c5736dadf457b693c30e20d1a2679165ae1155a/deployment_tasks.yaml#L156-L173
> >
> > On Fri, Jan 29, 2016 at 4:27 PM, Igor Kalnitsky  >
> > wrote:
> >>
> >> Hey folks,
> >>
> >> Simon P. wrote:
> >> > 1. Run task X for plugin A (if installed).
> >> > 2. Run task Y for plugin B (if installed).
> >> > 3. Run task Z for plugin A (if installed).
> >>
> >> Simon, could you please explain why you need this in the first place? I
> >> can imagine this case only if your two plugins are kinda dependent on
> >> each other. In this case, it's better to do what was said by Andrew W.
> >> - set 'Task Y' to require 'Task X' and that requirement will be
> >> satisfied anyway (even if Task X doesn't exist in the graph).
> >>
> >>
> >> Alex S. wrote:
> >> > Before we get rid of tasks.yaml, can we provide a mechanism that plugin
> >> > devs could leverage to have tasks execute at specific points in the
> >> > deploy process.
> >>
> >> Yeah, I think that may be useful sometime. However, I'd prefer to
> >> avoid anchor usage as much as possible. There are no guarantees that
> >> another plugin didn't perform destructive actions earlier that break
> >> you later. Anchors are a good way to resolve possible conflicts, but
> >> they aren't bulletproof.
> >>
> >> - igor
> >>
> >> On Thu, Jan 28, 2016 at 1:31 PM, Bogdan Dobrelya <
> bdobre...@mirantis.com>
> >> wrote:
> >> > On 27.01.2016 14:44, Simon Pasquier wrote:
> >> >> Hi,
> >> >>
> >> >> I see that tasks.yaml is going to be deprecated in the future MOS
> >> >> versions [1]. I've got one question regarding the ordering of tasks
> >> >> between different plugins.
> >> >> With tasks.yaml, it was possible to coordinate the execution of tasks
> >> >> between plugins without prior knowledge of which plugins were
> installed
> >> >> [2].
> >> >> For example, let's say we have 2 plugins: A and B. The plugins may or
> >> >> may
> >> >> not be installed in the same environment and the tasks execution
> should
> >> >> be:
> >> >> 1. Run task X for plugin A (if installed).
> >> >> 2. Run task Y for plugin B (if installed).
> >> >> 3. Run task Z for plugin A (if installed).
> >> >>
> >> >> Right now, we can set task priorities like:
> >> >>
> >> >> # tasks.yaml for plugin A
> >> >> - role: ['*']
> >> >>   stage: post_deployment/1000
> >> >>   type: puppet
> >> >>   parameters:
> >> >> puppet_manifest: 

Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-05-17 Thread Clint Byrum
I missed your reply originally, so sorry for the 2 week lag...

Excerpts from Mike Bayer's message of 2016-04-30 15:14:05 -0500:
> 
> On 04/30/2016 10:50 AM, Clint Byrum wrote:
> > Excerpts from Roman Podoliaka's message of 2016-04-29 12:04:49 -0700:
> >>
> >
> > I'm curious why you think setting wsrep_sync_wait=1 wouldn't help.
> >
> > The exact example appears in the Galera documentation:
> >
> > http://galeracluster.com/documentation-webpages/mysqlwsrepoptions.html#wsrep-sync-wait
> >
> > The moment you say 'SET SESSION wsrep_sync_wait=1', the behavior should
> > prevent the list problem you see, and it should not matter that it is
> > a separate session, as that is the entire point of the variable:
> 
> 
> we prefer to keep it off and just point applications at a single node 
> using master/passive/passive in HAProxy, so that we don't have the 
> unnecessary performance hit of waiting for all transactions to 
> propagate; we just stick on one node at a time.   We've fixed a lot of 
> issues in our config in ensuring that HAProxy definitely keeps all 
> clients on exactly one Galera node at a time.
> 

Indeed, haproxy does a good job at shifting over rapidly. But it's not
atomic, so you will likely have a few seconds during which commits landed
on the newly demoted backup.

> >
> > "When you enable this parameter, the node triggers causality checks in
> > response to certain types of queries. During the check, the node blocks
> > new queries while the database server catches up with all updates made
> > in the cluster to the point where the check was begun. Once it reaches
> > this point, the node executes the original query."
> >
> > In the active/passive case where you never use the passive node as a
> > read slave, one could actually set wsrep_sync_wait=1 globally. This will
> > cause a ton of lag while new queries happen on the new active and old
> > transactions are still being applied, but that's exactly what you want,
> > so that when you fail over, nothing proceeds until all writes from the
> > original active node are applied and available on the new active node.
> > It would help if your failover technology actually _breaks_ connections
> > to a presumed dead node, so writes stop happening on the old one.
> 
> If HAProxy is failing over from the master, which is no longer 
> reachable, to another passive node, which is reachable, that means that 
> master is partitioned and will leave the Galera primary component.   It 
> also means all current database connections are going to be bounced off, 
> which will cause errors for those clients either in the middle of an 
> operation, or if a pooled connection is reused before it is known that 
> the connection has been reset.  So failover is usually not an error-free 
> situation in any case from a database client perspective and retry 
> schemes are always going to be needed.
> 

There are some really big assumptions above, so I want to enumerate
them:

1. You assume that a partition between haproxy and a node is a partition
   between that node and the other galera nodes.
2. You assume that I never want to failover on purpose, smoothly.

In the case of (1), there are absolutely times where the load balancer
thinks a node is dead, and it is quite happily chugging along doing its
job. Transactions will be already committed in this scenario that have
not propagated, and there may be more than one load balancer, and only
one of them thinks that node is dead.

For the limited partition problem, having wsrep_sync_wait turned on
would result in consistency, and the lag would only be minimal as the
transactions propagate onto the new primary server.
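To make the knob concrete, a causal read with wsrep_sync_wait looks roughly like this (a sketch; the table and values are illustrative, the variable semantics are from the Galera docs linked earlier):

```sql
-- Per-session causality check: subsequent reads block until this node
-- has applied all cluster-wide writes known at the time of the query.
SET SESSION wsrep_sync_wait = 1;
SELECT id, status FROM instances WHERE id = 42;
-- Reset once the causal read is no longer needed, to avoid the latency hit.
SET SESSION wsrep_sync_wait = 0;
```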

For the multiple haproxy problem, lag would be _horrible_ on all nodes
that are getting reads as long as there's another one getting writes,
so a solution for making sure only one is specified would need to be
developed using a leader election strategy. If haproxy is able to query
wsrep status, that might be ideal, as galera will in fact elect leaders
for you (assuming all of your wsrep nodes are also mysql nodes, which
is not the case if you're using 2 nodes + garbd for example).
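A health check along these lines is what clustercheck-style scripts do: report a node as available only when its wsrep state says it is synced. A minimal sketch of the decision logic follows (the status variable names and state codes are standard Galera ones; the helper name is illustrative — a real check would run `SHOW STATUS LIKE 'wsrep%'` against the node and serve the verdict over HTTP for haproxy's httpchk):

```python
# Decide whether a load balancer should route traffic to a Galera node,
# based on its wsrep status variables (as returned by SHOW STATUS).
WSREP_STATE_SYNCED = "4"  # wsrep_local_state: 4 == Synced
WSREP_STATE_DONOR = "2"   # wsrep_local_state: 2 == Donor/Desynced

def node_is_available(status, allow_donor=False):
    """Return True if the node should receive traffic.

    `status` maps wsrep status variable names to their string values.
    """
    if status.get("wsrep_connected", "OFF") != "ON":
        return False
    if status.get("wsrep_ready", "OFF") != "ON":
        return False
    state = status.get("wsrep_local_state")
    if state == WSREP_STATE_SYNCED:
        return True
    # A donor node is serving an SST to a joiner; optionally acceptable.
    return allow_donor and state == WSREP_STATE_DONOR

if __name__ == "__main__":
    healthy = {"wsrep_connected": "ON", "wsrep_ready": "ON",
               "wsrep_local_state": "4"}
    donor = {"wsrep_connected": "ON", "wsrep_ready": "ON",
             "wsrep_local_state": "2"}
    print(node_is_available(healthy))                  # True
    print(node_is_available(donor))                    # False
    print(node_is_available(donor, allow_donor=True))  # True
```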

This is, however, a bit of a strawman, as most people don't need
active/active haproxy nodes, so the simplest solution is to go
active/passive on your haproxy nodes with something like UCARP handling
the failover there. As long as they all use the same primary/backup
ordering, then a new UCARP target should just result in using the same
node, and a very tiny window for inconsistency and connection errors.

The second assumption is handled by leader election as well. If there's
always one leader node that load balancers send traffic to, then one
should be able to force promotion of a different node as the leader,
and all new transactions and queries go to the new leader. The window
for that would be pretty small, and so wsrep_sync_wait time should
be able to be very low, if not 0. I'm not super familiar with the way
haproxy gracefully 

Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-05-17 Thread Bogdan Dobrelya
On 04/22/2016 05:42 PM, Bogdan Dobrelya wrote:
> [crossposting to openstack-operat...@lists.openstack.org]
> 
> Hello.
> I wrote this paper [0] to demonstrate an approach for leveraging the
> Jepsen framework in a QA/CI/CD pipeline for OpenStack projects like Oslo
> (DB) or Trove, Tooz DLM, and perhaps for any integration projects which
> rely on distributed systems. Although all tests are yet to be finished,
> the results are quite visible, so I'd better share early for review,
> discussion and comments.
> 
> I have similar tests done for the RabbitMQ OCF RA clusterers as well,
> although I have yet to write a report.
> 
> PS. I'm sorry for so many tags I placed in the topic header; should I have
> used just "all" :)? Have a nice weekend and take care!
> 
> [0] https://goo.gl/VHyIIE
> 

[ cross posting to operators ]

An update.
I added Appendix B, where I made a few more tests, mostly around
that funny topic [0] full of interesting nuances; there I tried to
cover some generic patterns OpenStack uses for transactions constructed
by SQLAlchemy's ORM (I hope so).

Those test cases cover A5A read skews, SERIALIZABLE / RR / RC transaction
isolation levels, lock modes for SELECT, and the wsrep_sync_wait Galera
setting. I also reworked the conclusions and recommendations sections
based on the new test results.

For now, I've finished all items I had on my TODO list for that paper.
If anyone would like to do more test runs, re-use the given approach for
the cluster-labs upstream OCF RA, or re-check with other configuration
tunings (mostly wsrep-related things, perhaps), you're welcome! I'm open
to questions, if any.

Also note that with all the submitted fixes for those multiple
testing-discovered bugs, cluster recovery after network partitions has
been working almost seamlessly, for me :-) For those who are interested,
the full list of related bugs is easy to locate in this backport's commit
message [2].

The link is the same [1].

[0] https://goo.gl/YWEc5A
[1] https://goo.gl/VHyIIE
[2] https://review.openstack.org/#/c/315989/

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Tasks ordering between plugins

2016-05-17 Thread Matthew Mosesohn
Hi Simon,

For 8.0 and earlier, I would deploy ElasticSearch before deploy_end
and LMA collector after post_deploy_start

For Mitaka and Newton releases, the task graph now skips dependencies
that are not found for the role being processed. Now this "requires"
dependency, which previously errored, will work.
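As an illustration, a cross-plugin dependency of this kind in deployment_tasks.yaml could be expressed roughly as follows (task ids, roles, and paths are hypothetical; only the 'requires' entry matters here):

```yaml
# deployment_tasks.yaml for plugin B (sketch)
- id: plugin-b-configure
  type: puppet
  role: [plugin-b-role]
  # If plugin A isn't deployed, this dependency is simply skipped
  # under Mitaka/Newton instead of failing graph validation.
  requires: [plugin-a-configure]
  parameters:
    puppet_manifest: puppet/manifests/configure.pp
    puppet_modules: puppet/modules
    timeout: 600
```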

Best Regards,
Matthew Mosesohn

On Tue, May 17, 2016 at 6:27 PM, Simon Pasquier  wrote:
> I'm resurrecting this thread because I didn't manage to find a satisfying
> solution to deal with this issue.
>
> First let me provide more context on the use case. The Elasticsearch/Kibana
> and LMA collector plugins need to synchronize their deployment. Without too
> many details, here is the workflow when both plugins are deployed:
> 1. [Deployment] Install the Elasticsearch/Kibana primary node.
> 2. [Deployment] Install the other Elasticsearch/Kibana nodes.
> 3. [Post-Deployment] Configure the Elasticsearch cluster.
> 4. [Post-Deployment] Install and configure the LMA collector.
>
> Task #4 should happen after #3 so we've specified the dependency in
> deployment_tasks.yaml [0] but when the Elasticsearch/Kibana plugin isn't
> deployed in the same environment (which is a valid case), it fails [1] with:
>
> Tasks 'elasticsearch-kibana-configuration, influxdb-configuration' can't be
> in requires|required_for|groups|tasks for [lma-backends] because they don't
> exist in the graph
>
> To work around this restriction, we're using 'upload_nodes_info' as an anchor
> task [2][3] since it is always present in the graph but this isn't really
> elegant. Any suggestion to improve this?
>
> BR,
> Simon
>
> [0]
> https://github.com/openstack/fuel-plugin-lma-collector/blob/fd9337b43b6bdae6012f421e22847a1b0307ead0/deployment_tasks.yaml#L123-L139
> [1] https://bugs.launchpad.net/lma-toolchain/+bug/1573087
> [2]
> https://github.com/openstack/fuel-plugin-lma-collector/blob/56ef5c42f4cd719958c4c2ac3fded1b08fe2b90f/deployment_tasks.yaml#L25-L37
> [3]
> https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/4c5736dadf457b693c30e20d1a2679165ae1155a/deployment_tasks.yaml#L156-L173
>
> On Fri, Jan 29, 2016 at 4:27 PM, Igor Kalnitsky 
> wrote:
>>
>> Hey folks,
>>
>> Simon P. wrote:
>> > 1. Run task X for plugin A (if installed).
>> > 2. Run task Y for plugin B (if installed).
>> > 3. Run task Z for plugin A (if installed).
>>
>> Simon, could you please explain why you need this in the first place? I
>> can imagine this case only if your two plugins are kinda dependent on
>> each other. In this case, it's better to do what was said by Andrew W.
>> - set 'Task Y' to require 'Task X' and that requirement will be
>> satisfied anyway (even if Task X doesn't exist in the graph).
>>
>>
>> Alex S. wrote:
>> > Before we get rid of tasks.yaml, can we provide a mechanism that plugin
>> > devs could leverage to have tasks execute at specific points in the
>> > deploy process.
>>
>> Yeah, I think that may be useful sometime. However, I'd prefer to
>> avoid anchor usage as much as possible. There are no guarantees that
>> another plugin didn't perform destructive actions earlier that break
>> you later. Anchors are a good way to resolve possible conflicts, but
>> they aren't bulletproof.
>>
>> - igor
>>
>> On Thu, Jan 28, 2016 at 1:31 PM, Bogdan Dobrelya 
>> wrote:
>> > On 27.01.2016 14:44, Simon Pasquier wrote:
>> >> Hi,
>> >>
>> >> I see that tasks.yaml is going to be deprecated in the future MOS
>> >> versions [1]. I've got one question regarding the ordering of tasks
>> >> between different plugins.
>> >> With tasks.yaml, it was possible to coordinate the execution of tasks
>> >> between plugins without prior knowledge of which plugins were installed
>> >> [2].
>> >> For example, let's say we have 2 plugins: A and B. The plugins may or
>> >> may
>> >> not be installed in the same environment and the tasks execution should
>> >> be:
>> >> 1. Run task X for plugin A (if installed).
>> >> 2. Run task Y for plugin B (if installed).
>> >> 3. Run task Z for plugin A (if installed).
>> >>
>> >> Right now, we can set task priorities like:
>> >>
>> >> # tasks.yaml for plugin A
>> >> - role: ['*']
>> >>   stage: post_deployment/1000
>> >>   type: puppet
>> >>   parameters:
>> >> puppet_manifest: puppet/manifests/task_X.pp
>> >> puppet_modules: puppet/modules
>> >>
>> >> - role: ['*']
>> >>   stage: post_deployment/3000
>> >>   type: puppet
>> >>   parameters:
>> >> puppet_manifest: puppet/manifests/task_Z.pp
>> >> puppet_modules: puppet/modules
>> >>
>> >> # tasks.yaml for plugin B
>> >> - role: ['*']
>> >>   stage: post_deployment/2000
>> >>   type: puppet
>> >>   parameters:
>> >> puppet_manifest: puppet/manifests/task_Y.pp
>> >> puppet_modules: puppet/modules
>> >>
>> >> How would it be handled without tasks.yaml?
>> >
>> > I created a kinda related bug [0] and submitted a patch [1] to MOS docs
>> > [2] to kill some entropy on the topic of tasks schema 

Re: [openstack-dev] [Fuel][Plugins] Tasks ordering between plugins

2016-05-17 Thread Simon Pasquier
I'm resurrecting this thread because I didn't manage to find a satisfying
solution to deal with this issue.

First let me provide more context on the use case. The Elasticsearch/Kibana
and LMA collector plugins need to synchronize their deployment. Without too
many details, here is the workflow when both plugins are deployed:
1. [Deployment] Install the Elasticsearch/Kibana primary node.
2. [Deployment] Install the other Elasticsearch/Kibana nodes.
3. [Post-Deployment] Configure the Elasticsearch cluster.
4. [Post-Deployment] Install and configure the LMA collector.

Task #4 should happen after #3 so we've specified the dependency in
deployment_tasks.yaml [0] but when the Elasticsearch/Kibana plugin isn't
deployed in the same environment (which is a valid case), it fails [1] with:

Tasks 'elasticsearch-kibana-configuration, influxdb-configuration' can't be
in requires|required_for|groups|tasks for [lma-backends] because they don't
exist in the graph

To work around this restriction, we're using 'upload_nodes_info' as an
anchor task [2][3] since it is always present in the graph but this isn't
really elegant. Any suggestion to improve this?

BR,
Simon

[0]
https://github.com/openstack/fuel-plugin-lma-collector/blob/fd9337b43b6bdae6012f421e22847a1b0307ead0/deployment_tasks.yaml#L123-L139
[1] https://bugs.launchpad.net/lma-toolchain/+bug/1573087
[2]
https://github.com/openstack/fuel-plugin-lma-collector/blob/56ef5c42f4cd719958c4c2ac3fded1b08fe2b90f/deployment_tasks.yaml#L25-L37
[3]
https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/4c5736dadf457b693c30e20d1a2679165ae1155a/deployment_tasks.yaml#L156-L173
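For context, the anchor-task workaround pins plugin tasks to 'upload_nodes_info' roughly like this (simplified; ids, roles, and paths are illustrative — see the linked YAML for the real definitions):

```yaml
# Sketch of the anchor-task workaround
- id: lma-collector-configuration
  type: puppet
  role: [primary-controller, controller]
  # 'upload_nodes_info' always exists in the graph, so anchoring on it
  # never triggers the "don't exist in the graph" validation error.
  requires: [upload_nodes_info]
  parameters:
    puppet_manifest: puppet/manifests/configure.pp
    puppet_modules: puppet/modules
    timeout: 600
```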

On Fri, Jan 29, 2016 at 4:27 PM, Igor Kalnitsky 
wrote:

> Hey folks,
>
> Simon P. wrote:
> > 1. Run task X for plugin A (if installed).
> > 2. Run task Y for plugin B (if installed).
> > 3. Run task Z for plugin A (if installed).
>
> Simon, could you please explain why you need this in the first place? I
> can imagine this case only if your two plugins are kinda dependent on
> each other. In this case, it's better to do what was said by Andrew W.
> - set 'Task Y' to require 'Task X' and that requirement will be
> satisfied anyway (even if Task X doesn't exist in the graph).
>
>
> Alex S. wrote:
> > Before we get rid of tasks.yaml, can we provide a mechanism that plugin
> > devs could leverage to have tasks execute at specific points in the
> > deploy process.
>
> Yeah, I think that may be useful sometime. However, I'd prefer to
> avoid anchor usage as much as possible. There are no guarantees that
> another plugin didn't perform destructive actions earlier that break
> you later. Anchors are a good way to resolve possible conflicts, but
> they aren't bulletproof.
>
> - igor
>
> On Thu, Jan 28, 2016 at 1:31 PM, Bogdan Dobrelya 
> wrote:
> > On 27.01.2016 14:44, Simon Pasquier wrote:
> >> Hi,
> >>
> >> I see that tasks.yaml is going to be deprecated in the future MOS
> >> versions [1]. I've got one question regarding the ordering of tasks
> >> between different plugins.
> >> With tasks.yaml, it was possible to coordinate the execution of tasks
> >> between plugins without prior knowledge of which plugins were installed
> [2].
> >> For example, let's say we have 2 plugins: A and B. The plugins may or may
> >> not be installed in the same environment and the tasks execution should
> be:
> >> 1. Run task X for plugin A (if installed).
> >> 2. Run task Y for plugin B (if installed).
> >> 3. Run task Z for plugin A (if installed).
> >>
> >> Right now, we can set task priorities like:
> >>
> >> # tasks.yaml for plugin A
> >> - role: ['*']
> >>   stage: post_deployment/1000
> >>   type: puppet
> >>   parameters:
> >> puppet_manifest: puppet/manifests/task_X.pp
> >> puppet_modules: puppet/modules
> >>
> >> - role: ['*']
> >>   stage: post_deployment/3000
> >>   type: puppet
> >>   parameters:
> >> puppet_manifest: puppet/manifests/task_Z.pp
> >> puppet_modules: puppet/modules
> >>
> >> # tasks.yaml for plugin B
> >> - role: ['*']
> >>   stage: post_deployment/2000
> >>   type: puppet
> >>   parameters:
> >> puppet_manifest: puppet/manifests/task_Y.pp
> >> puppet_modules: puppet/modules
> >>
> >> How would it be handled without tasks.yaml?
> >
> > I created a kinda related bug [0] and submitted a patch [1] to MOS docs
> > [2] to kill some entropy on the topic of tasks schema roles versus
> > groups and using wildcards for basic and custom roles from plugins as
> > well. There is also a fancy picture to clarify things a bit. Would be
> > nice to put more details there about custom stages as well!
> >
> > If plugins are not aware of each other, they cannot be strictly ordered
> > like "to be the very last in the deployment", as one and only one of
> > them can be so. That is why "coordinating the execution of tasks
> > between plugins without prior knowledge of which plugins were installed"
> > looks very confusing to me. Though, maybe wildcards with 

Re: [openstack-dev] [fuel][plugins][lma] Leveraging OpenStack logstash grok filters in StackLight?

2016-05-17 Thread Simon Pasquier
The short answer is no. StackLight is based on Heka for log processing and
parsing. Heka itself uses Lua Parsing Expression Grammars [1].
For now the patterns are maintained in the LMA collector repository [2] but
it's on our to-do list to have them available in a dedicated repo.
One advantage of having Lua-based parsing is that it's fairly easy to unit
test the patterns.
BR,
Simon

[1] http://www.inf.puc-rio.br/~roberto/lpeg/lpeg.html
[2]
https://github.com/openstack/fuel-plugin-lma-collector/blob/master/deployment_scripts/puppet/modules/lma_collector/files/plugins/common/patterns.lua

On Tue, May 17, 2016 at 2:23 PM, Bogdan Dobrelya 
wrote:

> Hi.
> Are there plans to align the StackLight (LMA plugin) [0] with that
> recently announced source of Logstash filters [1]? I found no fast info
> if the plugin supports Logstash input log shippers, so I'm just asking
> as well.
>
> Writing grok filters is... hard. I had a sad experience [2] with that
> some time ago, and it is not something I'd like to repeat or maintain on
> my own, so writing those is something that definitely should be done
> collaboratively :)
>
> [0] https://launchpad.net/lma-toolchain
> [1] https://github.com/openstack-infra/logstash-filters
> [2] https://goo.gl/bG6EwX
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>


Re: [openstack-dev] [Fuel] Fuel - Rack Scale Architecture integration

2016-05-13 Thread Vladimir Kozhukalov
Absolutely agree with Jay. Fuel is a community project.
Please keep the discussion public, including technical details.

Anyway, the integration is welcome, so please go ahead and
create a BP (btw, a spec review request is the best place to
discuss technical details).


Vladimir Kozhukalov

On Fri, May 13, 2016 at 4:43 PM, Jay Pipes  wrote:

> On 05/13/2016 07:26 AM, Andrian Noga wrote:
>
>> Hi Deepti,
>>
>> We have already a vision about Fuel-UI implementation for RSA.
>>
>> I've replied already to you in private email.
>>
>> Let's continue this thread in private conversation.
>>
>
> Please, no.
>
> Fuel is an OpenStack Big Tent project. The Fuel roadmap, feature set, and
> development MUST be discussed in public channels. If not, then Fuel does
> not meet one of the requirements for being in the OpenStack Big Tent, which
> is that the project will abide by the four Opens of OpenStack:
>
> https://wiki.openstack.org/wiki/Open
>
> Please note that Open Design, Open Community and Open Development mean
> that communication about the Fuel roadmap and development must be done on
> public mailing lists and discussion areas. This is a non-negotiable
> condition for being a member project in OpenStack's Big Tent.
>
> Thanks,
> -jay
>
> On Fri, May 13, 2016 at 2:08 PM, > > wrote:
>>
>> Hello Fuel team,
>>
>> I am a software engineer working in the OpenStack team at Intel. You
>> may have heard of Rack Scale Architecture [1] that Intel is
>> pioneering. It is a new data center architecture that "simplifies
>> resource management and provides the ability to dynamically compose
>> resources based on workload-specific demands". It is supported by
>> multiple industry partners.
>>
>> We would like to propose Fuel integration with this. The first step
>> would be UI integration [2] and we would like to have a tab similar
>> to the VMWare tab (whose visibility is controlled by a config flag)
>> that talks to the Redfish API [3] for displaying resources such as
>> pods, racks, etc as exposed by this API. Note that Redfish API is an
>> open industry standard API supported by multiple companies.
>>
>> I plan to write up a blueprint/spec for the same, but I wanted to
>> know if there is any immediate feedback on this idea before I even
>> get started.
>>
>> Thanks,
>> Deepti
>>
>> [1]
>>
>> http://www.intel.com/content/www/us/en/architecture-and-technology/intel-rack-scale-architecture.html
>> [2] http://i.imgur.com/vLJIbwx.jpg
>> [3] https://www.dmtf.org/standards/redfish
>>
>>
>>
>>


Re: [openstack-dev] [Fuel] Fuel - Rack Scale Architecture integration

2016-05-13 Thread Jay Pipes

On 05/13/2016 07:26 AM, Andrian Noga wrote:

Hi Deepti,

We have already a vision about Fuel-UI implementation for RSA.

I've replied already to you in private email.

Let's continue this thread in private conversation.


Please, no.

Fuel is an OpenStack Big Tent project. The Fuel roadmap, feature set, 
and development MUST be discussed in public channels. If not, then Fuel 
does not meet one of the requirements for being in the OpenStack Big 
Tent, which is that the project will abide by the four Opens of OpenStack:


https://wiki.openstack.org/wiki/Open

Please note that Open Design, Open Community and Open Development mean 
that communication about the Fuel roadmap and development must be done 
on public mailing lists and discussion areas. This is a non-negotiable 
condition for being a member project in OpenStack's Big Tent.


Thanks,
-jay


On Fri, May 13, 2016 at 2:08 PM, > wrote:

Hello Fuel team,

I am a software engineer working in the OpenStack team at Intel. You
may have heard of Rack Scale Architecture [1] that Intel is
pioneering. It is a new data center architecture that "simplifies
resource management and provides the ability to dynamically compose
resources based on workload-specific demands". It is supported by
multiple industry partners.

We would like to propose Fuel integration with this. The first step
would be UI integration [2] and we would like to have a tab similar
to the VMWare tab (whose visibility is controlled by a config flag)
that talks to the Redfish API [3] for displaying resources such as
pods, racks, etc as exposed by this API. Note that Redfish API is an
open industry standard API supported by multiple companies.

I plan to write up a blueprint/spec for the same, but I wanted to
know if there is any immediate feedback on this idea before I even
get started.

Thanks,
Deepti

[1]

http://www.intel.com/content/www/us/en/architecture-and-technology/intel-rack-scale-architecture.html
[2] http://i.imgur.com/vLJIbwx.jpg
[3] https://www.dmtf.org/standards/redfish






Re: [openstack-dev] [Fuel] Fuel - Rack Scale Architecture integration

2016-05-13 Thread Andrian Noga
Hi Deepti,

We have already a vision about Fuel-UI implementation for RSA.

I've replied already to you in private email.

Let's continue this thread in private conversation.

Regards,
Andrian Noga
Engineering Manager
Partner Integration Team,
Mirantis, Inc.
+38 (066) 811-84-12
Skype: bigfoot_ua
www.mirantis.com
an...@mirantis.com

On Fri, May 13, 2016 at 2:08 PM,  wrote:

> Hello Fuel team,
>
> I am a software engineer working in the OpenStack team at Intel. You may
> have heard of Rack Scale Architecture [1] that Intel is pioneering. It is a
> new data center architecture that "simplifies resource management and
> provides the ability to dynamically compose resources based on
> workload-specific demands". It is supported by multiple industry partners.
>
> We would like to propose Fuel integration with this. The first step would
> be UI integration [2] and we would like to have a tab similar to the VMWare
> tab (whose visibility is controlled by a config flag) that talks to the
> Redfish API [3] for displaying resources such as pods, racks, etc as
> exposed by this API. Note that Redfish API is an open industry standard API
> supported by multiple companies.
>
> I plan to write up a blueprint/spec for the same, but I wanted to know if
> there is any immediate feedback on this idea before I even get started.
>
> Thanks,
> Deepti
>
> [1]
> http://www.intel.com/content/www/us/en/architecture-and-technology/intel-rack-scale-architecture.html
> [2] http://i.imgur.com/vLJIbwx.jpg
> [3] https://www.dmtf.org/standards/redfish
>


Re: [openstack-dev] [fuel] switch to upstream haproxy module

2016-05-12 Thread Simon Pasquier
On Thu, May 12, 2016 at 6:13 PM, Alex Schultz  wrote:

>
>
> On Thu, May 12, 2016 at 10:00 AM, Simon Pasquier 
> wrote:
>
>> First of all, I'm +1 on this. But as Matt says, it needs to take care of
>> the plugins.
>> A few examples I know of are the Zabbix plugin [1] and the LMA collector
>> plugin [2] that modify the HAProxy configuration of the controller nodes.
>> How could they work with your patch?
>>
>
> So you are leveraging the haproxy on the controller for this
> configuration? I thought I had asked in irc about this and was under the
> impression that you're using your own haproxy configuration on a different
> host(s).  I'll have to figure out an alternative to support plugin haproxy
> configurations as with that patch it would just ignore those configurations.
>

For other plugins, we use dedicated HAProxy nodes but not for these 2 (at
least).
I admit that it wasn't a very good idea but at that time, it was "oh
perfect, /etc/haproxy/conf.d is there, let's use it!". We'll try to think
about a solution on our end too.

Simon


>
> Thanks,
> -Alex
>
>
>> Simon
>>
>> [1]
>> https://github.com/openstack/fuel-plugin-external-zabbix/blob/2.5.0/deployment_scripts/puppet/modules/plugin_zabbix/manifests/ha/haproxy.pp#L16
>> [2]
>> https://github.com/openstack/fuel-plugin-lma-collector/blob/master/deployment_scripts/puppet/manifests/aggregator.pp#L60-L81
>>
>> On Thu, May 12, 2016 at 4:42 PM, Alex Schultz 
>> wrote:
>>
>>>
>>>
>>> On Thu, May 12, 2016 at 8:39 AM, Matthew Mosesohn <
>>> mmoses...@mirantis.com> wrote:
>>>
 Hi Alex,

 Collapsing our haproxy tasks makes it a bit trickier for plugin
 developers. We would still be able to control it via hiera, but it
 means more effort for a plugin developer to run haproxy for a given
 set of services, but explicitly exclude all those it doesn't intend to
 run on a custom role. Maybe you can think of some intermediate step
 that wouldn't add a burden to a plugin developer that would want to
 just proxy keystone and mysql, but not nova/neutron/glance/cinder?


>>> So none of the existing logic has changed around the enabling/disabling
>>> of those tasks within hiera.  The logic remains the same as I'm just
>>> including the osnailyfacter::openstack_haproxy::openstack_haproxy_*
>>> classes[0] within the haproxy task.  The only difference is that the task
>>> logic no longer would control if something was included like sahara.
>>>
>>> -Alex
>>>
>>> [0]
>>> https://review.openstack.org/#/c/307538/9/deployment/puppet/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp
>>>
>>>
 On Thu, May 12, 2016 at 5:34 PM, Alex Schultz 
 wrote:
 > Hey Fuelers,
 >
 > We have been using our own fork of the haproxy module within
 fuel-library
 > for some time. This also includes relying on a MOS specific version of
 > haproxy that carries the conf.d hack.  Unfortunately this has meant
 that
 > we've needed to leverage the MOS version of this package when
 deploying with
 > UCA.  As far as I can tell, there is no actual need to continue to do
 this
 > anymore. I have been working on switching to the upstream haproxy
 module[0]
 > so we can drop this custom haproxy package and leverage the upstream
 haproxy
 > module.
 >
 > In order to properly switch to the upstream haproxy module, we need to
 > collapse the haproxy tasks into a single task. With the migration to
 > leveraging classes for task functionality, this is pretty straightforward.
 > In my review I have left the old tasks still in place to make sure not to
 > break any previous dependencies, but the old tasks no longer do anything.
 > The next step after this initial merge would be to clean up the haproxy code
 > and extract it from the old openstack module.
 >
 > Please be aware that if you were relying on the conf.d method of
 injecting
 > configurations for haproxy, this will break you. Please speak up now
 so we
 > can figure out an alternative solution.
 >
 > Thanks,
 > -Alex
 >
 >
 > [0] https://review.openstack.org/#/c/307538/
 >
 >
 __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> 

Re: [openstack-dev] [fuel] switch to upstream haproxy module

2016-05-12 Thread Alex Schultz
On Thu, May 12, 2016 at 10:00 AM, Simon Pasquier 
wrote:

> First of all, I'm +1 on this. But as Matt says, it needs to take care of
> the plugins.
> A few examples I know of are the Zabbix plugin [1] and the LMA collector
> plugin [2] that modify the HAProxy configuration of the controller nodes.
> How could they work with your patch?
>

So you are leveraging the haproxy on the controller for this configuration?
I thought I had asked in irc about this and was under the impression that
you're using your own haproxy configuration on a different host(s).  I'll
have to figure out an alternative to support plugin haproxy configurations
as with that patch it would just ignore those configurations.

Thanks,
-Alex


> Simon
>
> [1]
> https://github.com/openstack/fuel-plugin-external-zabbix/blob/2.5.0/deployment_scripts/puppet/modules/plugin_zabbix/manifests/ha/haproxy.pp#L16
> [2]
> https://github.com/openstack/fuel-plugin-lma-collector/blob/master/deployment_scripts/puppet/manifests/aggregator.pp#L60-L81
>
> On Thu, May 12, 2016 at 4:42 PM, Alex Schultz 
> wrote:
>
>>
>>
>> On Thu, May 12, 2016 at 8:39 AM, Matthew Mosesohn > > wrote:
>>
>>> Hi Alex,
>>>
>>> Collapsing our haproxy tasks makes it a bit trickier for plugin
>>> developers. We would still be able to control it via hiera, but it
>>> means more effort for a plugin developer to run haproxy for a given
>>> set of services, but explicitly exclude all those it doesn't intend to
>>> run on a custom role. Maybe you can think of some intermediate step
>>> that wouldn't add a burden to a plugin developer that would want to
>>> just proxy keystone and mysql, but not nova/neutron/glance/cinder?
>>>
>>>
>> So none of the existing logic has changed around the enabling/disabling
>> of those tasks within hiera.  The logic remains the same as I'm just
>> including the osnailyfacter::openstack_haproxy::openstack_haproxy_*
>> classes[0] within the haproxy task.  The only difference is that the task
>> logic no longer would control if something was included like sahara.
>>
>> -Alex
>>
>> [0]
>> https://review.openstack.org/#/c/307538/9/deployment/puppet/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp
>>
>>
>>> On Thu, May 12, 2016 at 5:34 PM, Alex Schultz 
>>> wrote:
>>> > Hey Fuelers,
>>> >
>>> > We have been using our own fork of the haproxy module within
>>> fuel-library
>>> > for some time. This also includes relying on a MOS specific version of
>>> > haproxy that carries the conf.d hack.  Unfortunately this has meant
>>> that
>>> > we've needed to leverage the MOS version of this package when
>>> deploying with
>>> > UCA.  As far as I can tell, there is no actual need to continue to do
>>> this
>>> > anymore. I have been working on switching to the upstream haproxy
>>> module[0]
>>> > so we can drop this custom haproxy package and leverage the upstream
>>> haproxy
>>> > module.
>>> >
>>> > In order to properly switch to the upstream haproxy module, we need to
>>> > collapse the haproxy tasks into a single task. With the migration to
>>> > leveraging classes for task functionality, this is pretty straightforward.
>>> > In my review I have left the old tasks still in place to make sure not to
>>> > break any previous dependencies, but the old tasks no longer do anything.
>>> > The next step after this initial merge would be to clean up the haproxy
>>> code
>>> > and extract it from the old openstack module.
>>> >
>>> > Please be aware that if you were relying on the conf.d method of
>>> injecting
>>> > configurations for haproxy, this will break you. Please speak up now
>>> so we
>>> > can figure out an alternative solution.
>>> >
>>> > Thanks,
>>> > -Alex
>>> >
>>> >
>>> > [0] https://review.openstack.org/#/c/307538/
>>> >
>>> >
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [fuel] switch to upstream haproxy module

2016-05-12 Thread Simon Pasquier
First of all, I'm +1 on this. But as Matt says, it needs to take care of
the plugins.
A few examples I know of are the Zabbix plugin [1] and the LMA collector
plugin [2] that modify the HAProxy configuration of the controller nodes.
How could they work with your patch?
Simon

[1]
https://github.com/openstack/fuel-plugin-external-zabbix/blob/2.5.0/deployment_scripts/puppet/modules/plugin_zabbix/manifests/ha/haproxy.pp#L16
[2]
https://github.com/openstack/fuel-plugin-lma-collector/blob/master/deployment_scripts/puppet/manifests/aggregator.pp#L60-L81

On Thu, May 12, 2016 at 4:42 PM, Alex Schultz  wrote:

>
>
> On Thu, May 12, 2016 at 8:39 AM, Matthew Mosesohn 
> wrote:
>
>> Hi Alex,
>>
>> Collapsing our haproxy tasks makes it a bit trickier for plugin
>> developers. We would still be able to control it via hiera, but it
>> means more effort for a plugin developer to run haproxy for a given
>> set of services, but explicitly exclude all those it doesn't intend to
>> run on a custom role. Maybe you can think of some intermediate step
>> that wouldn't add a burden to a plugin developer that would want to
>> just proxy keystone and mysql, but not nova/neutron/glance/cinder?
>>
>>
> So none of the existing logic has changed around the enabling/disabling of
> those tasks within hiera.  The logic remains the same as I'm just including
> the osnailyfacter::openstack_haproxy::openstack_haproxy_* classes[0] within
> the haproxy task.  The only difference is that the task logic no longer
> would control if something was included like sahara.
>
> -Alex
>
> [0]
> https://review.openstack.org/#/c/307538/9/deployment/puppet/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp
>
>
>> On Thu, May 12, 2016 at 5:34 PM, Alex Schultz 
>> wrote:
>> > Hey Fuelers,
>> >
>> > We have been using our own fork of the haproxy module within
>> fuel-library
>> > for some time. This also includes relying on a MOS specific version of
>> > haproxy that carries the conf.d hack.  Unfortunately this has meant that
>> > we've needed to leverage the MOS version of this package when deploying
>> with
>> > UCA.  As far as I can tell, there is no actual need to continue to do
>> this
>> > anymore. I have been working on switching to the upstream haproxy
>> module[0]
>> > so we can drop this custom haproxy package and leverage the upstream
>> haproxy
>> > module.
>> >
>> > In order to properly switch to the upstream haproxy module, we need to
>> > collapse the haproxy tasks into a single task. With the migration to
>> > leveraging classes for task functionality, this is pretty straightforward.
>> > In my review I have left the old tasks still in place to make sure not to
>> > break any previous dependencies, but the old tasks no longer do anything.
>> > The next step after this initial merge would be to clean up the haproxy
>> code
>> > and extract it from the old openstack module.
>> >
>> > Please be aware that if you were relying on the conf.d method of
>> injecting
>> > configurations for haproxy, this will break you. Please speak up now so
>> we
>> > can figure out an alternative solution.
>> >
>> > Thanks,
>> > -Alex
>> >
>> >
>> > [0] https://review.openstack.org/#/c/307538/
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] switch to upstream haproxy module

2016-05-12 Thread Alex Schultz
On Thu, May 12, 2016 at 8:39 AM, Matthew Mosesohn 
wrote:

> Hi Alex,
>
> Collapsing our haproxy tasks makes it a bit trickier for plugin
> developers. We would still be able to control it via hiera, but it
> means more effort for a plugin developer to run haproxy for a given
> set of services, but explicitly exclude all those it doesn't intend to
> run on a custom role. Maybe you can think of some intermediate step
> that wouldn't add a burden to a plugin developer that would want to
> just proxy keystone and mysql, but not nova/neutron/glance/cinder?
>
>
So none of the existing logic has changed around the enabling/disabling of
those tasks within hiera.  The logic remains the same as I'm just including
the osnailyfacter::openstack_haproxy::openstack_haproxy_* classes[0] within
the haproxy task.  The only difference is that the task logic no longer
would control if something was included like sahara.

-Alex

[0]
https://review.openstack.org/#/c/307538/9/deployment/puppet/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp


> On Thu, May 12, 2016 at 5:34 PM, Alex Schultz 
> wrote:
> > Hey Fuelers,
> >
> > We have been using our own fork of the haproxy module within fuel-library
> > for some time. This also includes relying on a MOS specific version of
> > haproxy that carries the conf.d hack.  Unfortunately this has meant that
> > we've needed to leverage the MOS version of this package when deploying
> with
> > UCA.  As far as I can tell, there is no actual need to continue to do
> this
> > anymore. I have been working on switching to the upstream haproxy
> module[0]
> > so we can drop this custom haproxy package and leverage the upstream
> haproxy
> > module.
> >
> > In order to properly switch to the upstream haproxy module, we need to
> > collapse the haproxy tasks into a single task. With the migration to
> > leveraging classes for task functionality, this is pretty straightforward.
> > In my review I have left the old tasks still in place to make sure not to
> > break any previous dependencies, but the old tasks no longer do anything.
> > The next step after this initial merge would be to clean up the haproxy
> code
> > and extract it from the old openstack module.
> >
> > Please be aware that if you were relying on the conf.d method of
> injecting
> > configurations for haproxy, this will break you. Please speak up now so
> we
> > can figure out an alternative solution.
> >
> > Thanks,
> > -Alex
> >
> >
> > [0] https://review.openstack.org/#/c/307538/
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] switch to upstream haproxy module

2016-05-12 Thread Matthew Mosesohn
Hi Alex,

Collapsing our haproxy tasks makes it a bit trickier for plugin
developers. We would still be able to control it via hiera, but it
means more effort for a plugin developer to run haproxy for a given
set of services, but explicitly exclude all those it doesn't intend to
run on a custom role. Maybe you can think of some intermediate step
that wouldn't add a burden to a plugin developer that would want to
just proxy keystone and mysql, but not nova/neutron/glance/cinder?

On Thu, May 12, 2016 at 5:34 PM, Alex Schultz  wrote:
> Hey Fuelers,
>
> We have been using our own fork of the haproxy module within fuel-library
> for some time. This also includes relying on a MOS specific version of
> haproxy that carries the conf.d hack.  Unfortunately this has meant that
> we've needed to leverage the MOS version of this package when deploying with
> UCA.  As far as I can tell, there is no actual need to continue to do this
> anymore. I have been working on switching to the upstream haproxy module[0]
> so we can drop this custom haproxy package and leverage the upstream haproxy
> module.
>
> In order to properly switch to the upstream haproxy module, we need to
> collapse the haproxy tasks into a single task. With the migration to
> leveraging classes for task functionality, this is pretty straightforward.
> In my review I have left the old tasks still in place to make sure not to
> break any previous dependencies, but the old tasks no longer do anything.
> The next step after this initial merge would be to clean up the haproxy code
> and extract it from the old openstack module.
>
> Please be aware that if you were relying on the conf.d method of injecting
> configurations for haproxy, this will break you. Please speak up now so we
> can figure out an alternative solution.
>
> Thanks,
> -Alex
>
>
> [0] https://review.openstack.org/#/c/307538/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
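[Editorial sketch] Alex's warning above is that stock haproxy has no conf.d support, so anything dropped into /etc/haproxy/conf.d stops being read once the MOS-patched package goes away. One possible migration path — hedged, and not Fuel's actual implementation — is to concatenate the fragments into the single config file before a reload; paths, file layout, and the demo content below are illustrative:

```python
import glob
import os
import tempfile

def assemble_config(base_cfg, confd_dir, out_path):
    """Merge a base haproxy.cfg with any *.cfg fragments, in sorted order."""
    with open(base_cfg) as f:
        parts = [f.read()]
    for frag in sorted(glob.glob(os.path.join(confd_dir, "*.cfg"))):
        with open(frag) as f:
            parts.append("# --- %s ---\n%s" % (os.path.basename(frag), f.read()))
    with open(out_path, "w") as out:
        out.write("\n".join(parts))
    return out_path

# Demo with throwaway files standing in for /etc/haproxy.
root = tempfile.mkdtemp()
confd = os.path.join(root, "conf.d")
os.mkdir(confd)
with open(os.path.join(root, "haproxy.cfg"), "w") as f:
    f.write("global\n    daemon\n")
with open(os.path.join(confd, "10-zabbix.cfg"), "w") as f:
    f.write("listen zabbix\n    bind *:15500\n")
merged = assemble_config(os.path.join(root, "haproxy.cfg"),
                         confd, os.path.join(root, "haproxy-merged.cfg"))
with open(merged) as f:
    print(f.read())
```

A plugin task would run something like this (and a validity check, e.g. `haproxy -c -f <file>`) before reloading the service.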

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] [QA] running Fuel tests using nodepool

2016-05-10 Thread Spencer Krum
As a frequent tinc user I'd be interested to see the code you are using
to manage tinc into doing this. Is that code available somewhere?

On Tue, May 10, 2016, at 09:02 AM, Monty Taylor wrote:
> On 05/10/2016 08:54 AM, Vladimir Eremin wrote:
> > Hi,
> > 
> > I've investigated the status of nodepool multi-node testing and fuel-qa
> > approaches, and I want to share my opinion on moving Fuel testing to
> > OpenStack and nodepool.
> 
> Awesome! This is a great writeup - and hopefully will be useful as we
> validate our theory that zuul v3 should provide a richer environment for
> doing complex things like fuel testing than the current multi-node work.
> 
> > Our CI pipeline consists of next stages:
> > 
> > 1. Artifact building and publishing
> > 2. QA jobs:
> > 2.1. Master node installation from ISO
> > 2.2. Slave nodes provisioning
> > 2.3. Software deployment
> > 2.4. Workload verification
> > 
> > Current upstream nodepool limitations are pre-spawned nodes, small
> > flavors, and only L3 connectivity. Also, we have no PXE booting and VLAN
> > trunking in OpenStack itself. So, the main problem with moving this
> > pipeline to nodepool is to emulate IT tasks: installation from ISO and
> > nodes provisioning.
> > 
> > Actually the point is: to test Fuel and test the rest of OpenStack
> > components against Fuel, we mostly need to test the artifact building,
> > deployment and verification stages. So we need to make Fuel installable from
> > packages and create overlay L2 networking. I've found no unsolvable
> > problems right now to check most of scenarios with this approach.
> 
> 
> 
> > Besides the artifact building step, there are the following action items to
> > do to run Fuel QA tests:
> > 
> > 1. Automate overlay networking setup. I've
> > used https://www.tinc-vpn.org/ as an L2 switching overlay, but OpenVPN
> > could be the tool of choice. Action items:
> >  - overlay networking setup should be integrated in fuel-devops
> 
> There is overlay work in the multi-node stuff for devstack. I believe
> clarkb has a todo-list item to make that networking setup more general
> and more generally available. (it's currently done in devstack-gate
> script) I'm not sure if you saw that or if it's suitable for what you
> need? If not, it would be good to understand deficiencies.
> 
> > 2. Automate Fuel master node codebase installation from packages,
> > including repo adding and deployment. Action items:
> > - installation should be integrated in fuel-devops or nodepool infra
> > - make bootstrap scripts working with more than one network on master
> > node ("Bringing down ALL network interfaces except...")
> > - fix iptables and ssh for underlay networking
> 
> We've talked a few times about handling packages and repos of packages
> for patches that have not yet landed, but have done exactly zero work on
> it. Since you're a concrete use case, perhaps we can design things with
> you in mind.
> 
> > 3. Automate Fuel slave node codebase installation and node enrollment.
> > Action items:
> > - nailgun-agent installation should be integrated in fuel-devops or
> > nodepool infra
> > - mcollective and ssh keys setup should be automated
> > - nailgun and/or astute should be extended to allow pre-provisioned
> > nodes enrollment (I'm doing this part now)
> > - nailgun-agent and l23network should support overlay network interfaces
> 
> Exciting. I look forward to working on this with you - there are fun
> problems in here. :)
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
  Spencer Krum
  n...@spencerkrum.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] [QA] running Fuel tests using nodepool

2016-05-10 Thread Monty Taylor
On 05/10/2016 08:54 AM, Vladimir Eremin wrote:
> Hi,
> 
> I've investigated the status of nodepool multi-node testing and fuel-qa
> approaches, and I want to share my opinion on moving Fuel testing to
> OpenStack and nodepool.

Awesome! This is a great writeup - and hopefully will be useful as we
validate our theory that zuul v3 should provide a richer environment for
doing complex things like fuel testing than the current multi-node work.

> Our CI pipeline consists of next stages:
> 
> 1. Artifact building and publishing
> 2. QA jobs:
> 2.1. Master node installation from ISO
> 2.2. Slave nodes provisioning
> 2.3. Software deployment
> 2.4. Workload verification
> 
> Current upstream nodepool limitations are pre-spawned nodes, small
> flavors, and only L3 connectivity. Also, we have no PXE booting and VLAN
> trunking in OpenStack itself. So, the main problem with moving this
> pipeline to nodepool is to emulate IT tasks: installation from ISO and
> nodes provisioning.
> 
> Actually the point is: to test Fuel and test the rest of OpenStack
> components against Fuel, we mostly need to test the artifact building,
> deployment and verification stages. So we need to make Fuel installable from
> packages and create overlay L2 networking. I've found no unsolvable
> problems right now to check most of scenarios with this approach.



> Besides the artifact building step, there are the following action items to
> do to run Fuel QA tests:
> 
> 1. Automate overlay networking setup. I've
> used https://www.tinc-vpn.org/ as an L2 switching overlay, but OpenVPN
> could be the tool of choice. Action items:
>  - overlay networking setup should be integrated in fuel-devops

There is overlay work in the multi-node stuff for devstack. I believe
clarkb has a todo-list item to make that networking setup more general
and more generally available. (it's currently done in devstack-gate
script) I'm not sure if you saw that or if it's suitable for what you
need? If not, it would be good to understand deficiencies.

> 2. Automate Fuel master node codebase installation from packages,
> including repo adding and deployment. Action items:
> - installation should be integrated in fuel-devops or nodepool infra
> - make bootstrap scripts working with more than one network on master
> node ("Bringing down ALL network interfaces except...")
> - fix iptables and ssh for underlay networking

We've talked a few times about handling packages and repos of packages
for patches that have not yet landed, but have done exactly zero work on
it. Since you're a concrete use case, perhaps we can design things with
you in mind.

> 3. Automate Fuel slave node codebase installation and node enrollment.
> Action items:
> - nailgun-agent installation should be integrated in fuel-devops or
> nodepool infra
> - mcollective and ssh keys setup should be automated
> - nailgun and/or astute should be extended to allow pre-provisioned
> nodes enrollment (I'm doing this part now)
> - nailgun-agent and l23network should support overlay network interfaces

Exciting. I look forward to working on this with you - there are fun
problems in here. :)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] [QA] running Fuel tests using nodepool

2016-05-10 Thread Jeremy Stanley
On 2016-05-10 15:54:34 +0300 (+0300), Vladimir Eremin wrote:
[...]
> 1. Automate overlay networking setup. I've used
> https://www.tinc-vpn.org/ as an L2
> switching overlay, but OpenVPN could be the tool of choice. Action
> items:
> - overlay networking setup should be integrated in fuel-devops
[...]

Just to be sure, you've seen the ovs_vxlan_bridge() implementation
in devstack-gate where we set up an overlay L2 network using
OVS/VXLAN? The same design also works fine with GRE (we used it for
a while but ran into some service providers blocking IP protocol 47
on their LANs).

http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/multinode_setup_info.txt
http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/functions.sh#n1050
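[Editorial sketch] In the spirit of the `ovs_vxlan_bridge()` setup Jeremy points at: one bridge per node plus one VXLAN tunnel port per remote peer. The sketch below generates (rather than executes) the `ovs-vsctl` commands so it runs anywhere; bridge and port names, the tunnel key, and the addresses are illustrative, and the real devstack-gate logic lives in the linked functions.sh:

```python
def overlay_commands(bridge, peer_ips, key=1001):
    """Build the ovs-vsctl commands for an L2 VXLAN overlay to each peer."""
    cmds = ["ovs-vsctl --may-exist add-br %s" % bridge]
    for i, peer in enumerate(peer_ips):
        port = "vx%d" % i
        cmds.append(
            "ovs-vsctl --may-exist add-port %s %s "
            "-- set interface %s type=vxlan options:key=%d options:remote_ip=%s"
            % (bridge, port, port, key, peer))
    return cmds

for cmd in overlay_commands("br-overlay", ["203.0.113.2", "203.0.113.3"]):
    print(cmd)
```

Swapping `type=vxlan` for `type=gre` gives the GRE variant mentioned above, with the protocol-47 caveat.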
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] One plugin - one Launchpad project

2016-05-04 Thread Sheena Conant
Hi all –



I think this might’ve gotten buried a bit in the pre-summit and summit
madness.



I just wanted to kick the thread – I think this is a really good idea.
Dogpiling all plugins into a single LP project makes it really difficult to
pick out which bugs affect which plugins – and the ecosystem is only
getting bigger.



Irina, please add this to the SDK as a best practice when you have time.
I’ll talk to plugin teams I’m working with to make sure they know about
this, as well.



Sheena



*From:* Irina Povolotskaya [mailto:ipovolotsk...@mirantis.com]
*Sent:* Tuesday, April 19, 2016 9:49 AM
*To:* openstack-dev@lists.openstack.org
*Subject:* [openstack-dev] [Fuel][Plugins] One plugin - one Launchpad
project



Hi to everyone,



as you possibly know (at least, those dev. teams working on their Fuel
plugins) we have a fuel-plugins Launchpad project [1] which serves as an
all-in-one entry point for filing bugs related to plugin-specific problems.



nevertheless, this single project is a bad idea in terms of providing
granularity and visibility for each plugin:

- it's not possible to make up milestones, unique for every plugin that
would coincide with the plugin's version (which is specified in
metadata.yaml file)

- it's not possible to provide every dev. team with exclusive rights on
managing importance, milestones etc.



therefore, I would like to propose the following:

- if you have your own fuel plugin, create a separate LP project for it
e.g.[2] [3]and make up all corresponding groups for managing release cycle
of your plugin

- if you have some issues with fuel plugin framework itself, please
consider filing bugs in fuel project [4] as usual.



I would appreciate getting feedback on this idea.

if it seems fine, then I'll follow-up with adding instructions into our SDK
[5] and the list of already existing LP projects.



thanks.





[1] https://launchpad.net/fuel-plugins

[2] https://launchpad.net/lma-toolchain

[3] https://launchpad.net/fuel-plugin-nsxv

[4] https://launchpad.net/fuel

[5] https://wiki.openstack.org/wiki/Fuel/Plugins




-- 

Best regards,


Irina Povolotskaya
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] One plugin - one Launchpad project

2016-05-02 Thread Evgeniy L
Hi Irina,

I fully support the idea of creating separate launchpad project for each
plugin, because plugins have different release cycles and different teams
who support them.

Fuel Plugin documentation [2] has to be updated with information for plugin
developers (how to setup new project) and for users (how to create a bug).
Good information on how to create and setup new project can be found here
[1].

Thanks,

[1] http://docs.openstack.org/infra/manual/creators.html
[2] https://wiki.openstack.org/wiki/Fuel/Plugins

On Tue, Apr 19, 2016 at 6:49 PM, Irina Povolotskaya <
ipovolotsk...@mirantis.com> wrote:

> Hi to everyone,
>
> as you possibly know (at least, those dev. teams working on their Fuel
> plugins) we have a fuel-plugins Launchpad project [1] which serves as an
> all-in-one entry point for filing bugs related to plugin-specific problems.
>
> nevertheless, this single project is a bad idea in terms of providing
> granularity and visibility for each plugin:
> - it's not possible to make up milestones, unique for every plugin that
> would coincide with the plugin's version (which is specified in
> metadata.yaml file)
> - it's not possible to provide every dev. team with exclusive rights on
> managing importance, milestones etc.
>
> therefore, I would like to propose the following:
> - if you have your own fuel plugin, create a separate LP project for it
> e.g.[2] [3]and make up all corresponding groups for managing release cycle
> of your plugin
> - if you have some issues with fuel plugin framework itself, please
> consider filing bugs in fuel project [4] as usual.
>
> I would appreciate getting feedback on this idea.
> if it seems fine, then I'll follow-up with adding instructions into our
> SDK [5] and the list of already existing LP projects.
>
> thanks.
>
>
> [1] https://launchpad.net/fuel-plugins
> [2] https://launchpad.net/lma-toolchain
> [3] https://launchpad.net/fuel-plugin-nsxv
> [4] https://launchpad.net/fuel
> [5] https://wiki.openstack.org/wiki/Fuel/Plugins
>
>
> --
> Best regards,
>
> Irina Povolotskaya
>
>
>
>
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-04-30 Thread Mike Bayer



On 04/30/2016 10:50 AM, Clint Byrum wrote:

Excerpts from Roman Podoliaka's message of 2016-04-29 12:04:49 -0700:




I'm curious why you think setting wsrep_sync_wait=1 wouldn't help.

The exact example appears in the Galera documentation:

http://galeracluster.com/documentation-webpages/mysqlwsrepoptions.html#wsrep-sync-wait

The moment you say 'SET SESSION wsrep_sync_wait=1', the behavior should
prevent the list problem you see, and it should not matter that it is
a separate session, as that is the entire point of the variable:



we prefer to keep it off and just point applications at a single node
using master/passive/passive in HAProxy, so that we don't take the
unnecessary performance hit of waiting for all transactions to
propagate; we just stick to one node at a time. We've fixed a lot of
issues in our config to ensure that HAProxy definitely keeps all
clients on exactly one Galera node at a time.




"When you enable this parameter, the node triggers causality checks in
response to certain types of queries. During the check, the node blocks
new queries while the database server catches up with all updates made
in the cluster to the point where the check was begun. Once it reaches
this point, the node executes the original query."

In the active/passive case where you never use the passive node as a
read slave, one could actually set wsrep_sync_wait=1 globally. This will
cause a ton of lag while new queries happen on the new active and old
transactions are still being applied, but that's exactly what you want,
so that when you fail over, nothing proceeds until all writes from the
original active node are applied and available on the new active node.
It would help if your failover technology actually _breaks_ connections
to a presumed dead node, so writes stop happening on the old one.


If HAProxy is failing over from the master, which is no longer
reachable, to another passive node, which is reachable, that means the
master is partitioned and will leave the Galera primary component. It
also means all current database connections are going to be bounced off,
which will cause errors for those clients either in the middle of an
operation, or when a pooled connection is reused before it is known that
the connection has been reset. So failover is usually not an error-free
situation from a database client's perspective in any case, and retry
schemes are always going to be needed.
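
Such a retry scheme can be as simple as a decorator (a generic sketch,
not an existing oslo.db API; oslo.db ships comparable helpers, and the
exception types to catch depend on the driver):

```python
import functools
import time

def retry_on_disconnect(retries=3, delay=0.5, exc_types=(ConnectionError,)):
    """Retry an operation whose connection was bounced by a failover.

    Generic sketch: exponential backoff between attempts, re-raising
    once the retry budget is exhausted.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(retries + 1):
                try:
                    return fn(*args, **kwargs)
                except exc_types:
                    if attempt == retries:
                        raise
                    # back off before reconnecting: delay, 2*delay, 4*delay...
                    time.sleep(delay * (2 ** attempt))
        return wrapper
    return decorator
```

A client would wrap only idempotent operations this way; anything
non-retryable still needs the stronger guarantees discussed below.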


Additionally, the purpose of the enginefacade [1] is to allow OpenStack
applications to fix their often incorrectly written database access
logic so that in many (most?) cases a single logical operation is no
longer unnecessarily split among multiple transactions.
I know that this is not always feasible in the case where multiple web
requests are coordinating, however.


That leaves only the very infrequent scenario where the master has
finished sending a write set off, the passives haven't finished
committing that write set, the master goes down, HAProxy fails over to
one of the passives, and an application just happens to connect fresh
onto that new passive node to perform the next operation, one that
relies upon the previously committed data; it therefore sees no
database error and instead runs straight onto the node where the
committed data it's expecting hasn't arrived yet. I can't judge for
all applications whether this scenario can be handled like any other
transient error that occurs during a failover situation; however, if
there is such a case, then IMO wsrep_sync_wait (formerly known as
wsrep_causal_reads) may be used on a per-transaction basis for that
very critical, not-retryable-even-during-failover operation. Allowing
this variable to be set for the scope of a transaction and reset
afterwards, and only when talking to Galera, is something we've planned
to work into the enginefacade as well, as a declarative transaction
attribute that would be a pass-through on other systems.
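
A per-transaction scope for the variable could be sketched like this
(a hypothetical helper, not existing oslo.db API; `exec_driver_sql` is
the SQLAlchemy 1.4+ call on a Connection):

```python
from contextlib import contextmanager

@contextmanager
def causal_transaction(connection):
    """Enable Galera causality checks only for one critical transaction.

    Hypothetical sketch: set wsrep_sync_wait for the session, yield,
    then reset it, so other transactions on the same pooled connection
    keep the cheaper default behavior.
    """
    connection.exec_driver_sql("SET SESSION wsrep_sync_wait = 1")
    try:
        yield connection
    finally:
        connection.exec_driver_sql("SET SESSION wsrep_sync_wait = 0")
```

On a non-Galera backend the declarative attribute described above would
simply make this a no-op pass-through.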


[1] 
https://specs.openstack.org/openstack/oslo-specs/specs/kilo/make-enginefacade-a-facade.html





Also, if you thrash back and forth a bit, that could cause your app to
virtually freeze, but HAProxy and most other failover technologies allow
tuning timings so that you can stay off of a passive server long enough
to calm it down and fail more gracefully to it.

Anyway, this is why sometimes I do wonder if we'd be better off just
using MySQL with DRBD and good old pacemaker.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-04-30 Thread Clint Byrum
Excerpts from Roman Podoliaka's message of 2016-04-29 12:04:49 -0700:
> Hi Bogdan,
> 
> Thank you for sharing this! I'll need to familiarize myself with this
> Jepsen thing, but overall it looks interesting.
> 
> As it turns out, we already run Galera in multi-writer mode in Fuel
> unintentionally: when the active MySQL node goes down, HAProxy starts
> opening connections to a backup; then the active comes back up, HAProxy
> starts opening connections to the original MySQL node again, but
> OpenStack services may still have connections opened to the backup in
> their connection pools. So now you may have connections to multiple
> MySQL nodes at the same time, which is exactly what you wanted to avoid
> by using active/backup in the HAProxy configuration.
> 
> ^ this actually leads to an interesting issue [1], where the DB state
> committed on one node is not immediately available on another one.
> Replication lag can be controlled via session variables [2], but that
> does not always help: e.g. in [1] Nova first goes to Neutron to create
> a new floating IP, gets a 201 (and Neutron actually *commits* the DB
> transaction), and then makes another REST API request to get a list of
> floating IPs by address. The latter can be served by another
> neutron-server, connected to another Galera node, which does not have
> the latest state applied yet due to 'slave lag', so it can happen that
> the list will be empty. Unfortunately, 'wsrep_sync_wait' can't help
> here, as these are two different REST API requests, potentially served
> by two different neutron-server instances.
> 

I'm curious why you think setting wsrep_sync_wait=1 wouldn't help.

The exact example appears in the Galera documentation:

http://galeracluster.com/documentation-webpages/mysqlwsrepoptions.html#wsrep-sync-wait

The moment you say 'SET SESSION wsrep_sync_wait=1', the behavior should
prevent the list problem you see, and it should not matter that it is
a separate session, as that is the entire point of the variable:

"When you enable this parameter, the node triggers causality checks in
response to certain types of queries. During the check, the node blocks
new queries while the database server catches up with all updates made
in the cluster to the point where the check was begun. Once it reaches
this point, the node executes the original query."
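
Concretely, the per-session form looks like this (a sketch; the table
and column names are illustrative, not taken from Neutron's schema):

```sql
-- session on node 2, after a write was committed on node 1
SET SESSION wsrep_sync_wait = 1;

-- blocks until this node has applied every write set committed
-- cluster-wide before the query began, then executes
SELECT id FROM floatingips WHERE floating_ip_address = '10.0.0.5';

-- optionally restore the default for the rest of the session
SET SESSION wsrep_sync_wait = 0;
```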

In the active/passive case where you never use the passive node as a
read slave, one could actually set wsrep_sync_wait=1 globally. This will
cause a ton of lag while new queries happen on the new active and old
transactions are still being applied, but that's exactly what you want,
so that when you fail over, nothing proceeds until all writes from the
original active node are applied and available on the new active node.
It would help if your failover technology actually _breaks_ connections
to a presumed dead node, so writes stop happening on the old one.

Also, if you thrash back and forth a bit, that could cause your app to
virtually freeze, but HAProxy and most other failover technologies allow
tuning timings so that you can stay off of a passive server long enough
to calm it down and fail more gracefully to it.

Anyway, this is why sometimes I do wonder if we'd be better off just
using MySQL with DRBD and good old pacemaker.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-04-30 Thread Mike Bayer



On 04/30/2016 02:57 AM, bdobre...@mirantis.com wrote:

Hi Roman.
That's interesting, although it's hard to believe (there is no slave lag
in Galera multi-master). I can only suggest that we create another
Jepsen test to verify exactly the scenario you describe, as well as
other OpenStack-specific patterns.



There is definitely slave lag in Galera, and it can be controlled using
the wsrep_causal_reads flag.


A demonstration script, whose results I have confirmed separately using
Python scripts, is at:


https://www.percona.com/blog/2013/03/03/investigating-replication-latency-in-percona-xtradb-cluster/




Regards,
Bogdan.

*From:* Roman Podoliaka
*Sent:* Friday, 29 April 2016 21:04
*To:* OpenStack Development Mailing List (not for usage questions)
*CC:* openstack-operat...@lists.openstack.org


Hi Bogdan,

Thank you for sharing this! I'll need to familiarize myself with this
Jepsen thing, but overall it looks interesting.

As it turns out, we already run Galera in multi-writer mode in Fuel
unintentionally: when the active MySQL node goes down, HAProxy starts
opening connections to a backup; then the active comes back up, HAProxy
starts opening connections to the original MySQL node again, but
OpenStack services may still have connections opened to the backup in
their connection pools. So now you may have connections to multiple
MySQL nodes at the same time, which is exactly what you wanted to avoid
by using active/backup in the HAProxy configuration.

^ this actually leads to an interesting issue [1], where the DB state
committed on one node is not immediately available on another one.
Replication lag can be controlled via session variables [2], but that
does not always help: e.g. in [1] Nova first goes to Neutron to create
a new floating IP, gets a 201 (and Neutron actually *commits* the DB
transaction), and then makes another REST API request to get a list of
floating IPs by address. The latter can be served by another
neutron-server, connected to another Galera node, which does not have
the latest state applied yet due to 'slave lag', so it can happen that
the list will be empty. Unfortunately, 'wsrep_sync_wait' can't help
here, as these are two different REST API requests, potentially served
by two different neutron-server instances.

Basically, you'd need to *always* wait for the latest state to be
applied before executing any queries, which Galera is trying to avoid
for performance reasons.

Thanks,
Roman

[1] https://bugs.launchpad.net/fuel/+bug/1529937
[2]
http://galeracluster.com/2015/06/achieving-read-after-write-semantics-with-galera/

On Fri, Apr 22, 2016 at 10:42 AM, Bogdan Dobrelya
 wrote:
 > [crossposting to openstack-operat...@lists.openstack.org]
 >
 > Hello.
 > I wrote this paper [0] to demonstrate an approach for how we can
 > leverage the Jepsen framework in a QA/CI/CD pipeline for OpenStack
 > projects like Oslo (DB) or Trove, Tooz DLM, and perhaps for any
 > integration project which relies on distributed systems. Although all
 > tests are yet to be finished, the results are quite visible, so I'd
 > better share early for review, discussion and comments.
 >
 > I have similar tests done for the RabbitMQ OCF RA clusters as well,
 > although I have yet to write a report.
 >
 > PS. I'm sorry for so many tags I placed in the topic header, should I've
 > used just "all" :) ? Have a nice weekends and take care!
 >
 > [0] https://goo.gl/VHyIIE
 >
 > --
 > Best regards,
 > Bogdan Dobrelya,
 > Irc #bogdando
 >
 >
 >
 >
__
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-04-30 Thread bdobrelia
Hi Roman.

That's interesting, although it's hard to believe (there is no slave lag
in Galera multi-master). I can only suggest that we create another Jepsen
test to verify exactly the scenario you describe, as well as other
OpenStack-specific patterns.






Regards,

Bogdan.





From: Roman Podoliaka
Sent: Friday, 29 April 2016 21:04
To: OpenStack Development Mailing List (not for usage questions)
CC: openstack-operat...@lists.openstack.org





Hi Bogdan,

Thank you for sharing this! I'll need to familiarize myself with this
Jepsen thing, but overall it looks interesting.

As it turns out, we already run Galera in multi-writer mode in Fuel
unintentionally: when the active MySQL node goes down, HAProxy starts
opening connections to a backup; then the active comes back up, HAProxy
starts opening connections to the original MySQL node again, but
OpenStack services may still have connections opened to the backup in
their connection pools. So now you may have connections to multiple
MySQL nodes at the same time, which is exactly what you wanted to avoid
by using active/backup in the HAProxy configuration.

^ this actually leads to an interesting issue [1], where the DB state
committed on one node is not immediately available on another one.
Replication lag can be controlled via session variables [2], but that
does not always help: e.g. in [1] Nova first goes to Neutron to create
a new floating IP, gets a 201 (and Neutron actually *commits* the DB
transaction), and then makes another REST API request to get a list of
floating IPs by address. The latter can be served by another
neutron-server, connected to another Galera node, which does not have
the latest state applied yet due to 'slave lag', so it can happen that
the list will be empty. Unfortunately, 'wsrep_sync_wait' can't help
here, as these are two different REST API requests, potentially served
by two different neutron-server instances.

Basically, you'd need to *always* wait for the latest state to be
applied before executing any queries, which Galera is trying to avoid
for performance reasons.

Thanks,
Roman

[1] https://bugs.launchpad.net/fuel/+bug/1529937
[2] 
http://galeracluster.com/2015/06/achieving-read-after-write-semantics-with-galera/

On Fri, Apr 22, 2016 at 10:42 AM, Bogdan Dobrelya
 wrote:
> [crossposting to openstack-operat...@lists.openstack.org]
>
> Hello.
> I wrote this paper [0] to demonstrate an approach for how we can leverage
> the Jepsen framework in a QA/CI/CD pipeline for OpenStack projects like
> Oslo (DB) or Trove, Tooz DLM, and perhaps for any integration project
> which relies on distributed systems. Although all tests are yet to be
> finished, the results are quite visible, so I'd better share early for
> review, discussion and comments.
>
> I have similar tests done for the RabbitMQ OCF RA clusters as well,
> although I have yet to write a report.
>
> PS. I'm sorry for so many tags I placed in the topic header, should I've
> used just "all" :) ? Have a nice weekends and take care!
>
> [0] https://goo.gl/VHyIIE
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-04-29 Thread Roman Podoliaka
Hi Bogdan,

Thank you for sharing this! I'll need to familiarize myself with this
Jepsen thing, but overall it looks interesting.

As it turns out, we already run Galera in multi-writer mode in Fuel
unintentionally: when the active MySQL node goes down, HAProxy starts
opening connections to a backup; then the active comes back up, HAProxy
starts opening connections to the original MySQL node again, but
OpenStack services may still have connections opened to the backup in
their connection pools. So now you may have connections to multiple
MySQL nodes at the same time, which is exactly what you wanted to avoid
by using active/backup in the HAProxy configuration.
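
The active/backup listener described here looks roughly like this
(a sketch; node names, addresses and check options are illustrative,
not Fuel's exact generated config):

```
listen mysqld
    bind 192.168.0.2:3306
    mode tcp
    option tcpka
    server node-1 192.168.0.3:3306 check
    server node-2 192.168.0.4:3306 check backup
    server node-3 192.168.0.5:3306 check backup
```

The `backup` keyword keeps node-2/node-3 idle until node-1's check
fails; the leak described above happens because already-pooled client
connections to a backup are not torn down when node-1 recovers.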

^ this actually leads to an interesting issue [1], where the DB state
committed on one node is not immediately available on another one.
Replication lag can be controlled via session variables [2], but that
does not always help: e.g. in [1] Nova first goes to Neutron to create
a new floating IP, gets a 201 (and Neutron actually *commits* the DB
transaction), and then makes another REST API request to get a list of
floating IPs by address. The latter can be served by another
neutron-server, connected to another Galera node, which does not have
the latest state applied yet due to 'slave lag', so it can happen that
the list will be empty. Unfortunately, 'wsrep_sync_wait' can't help
here, as these are two different REST API requests, potentially served
by two different neutron-server instances.

Basically, you'd need to *always* wait for the latest state to be
applied before executing any queries, which Galera is trying to avoid
for performance reasons.

Thanks,
Roman

[1] https://bugs.launchpad.net/fuel/+bug/1529937
[2] 
http://galeracluster.com/2015/06/achieving-read-after-write-semantics-with-galera/

On Fri, Apr 22, 2016 at 10:42 AM, Bogdan Dobrelya
 wrote:
> [crossposting to openstack-operat...@lists.openstack.org]
>
> Hello.
> I wrote this paper [0] to demonstrate an approach for how we can leverage
> the Jepsen framework in a QA/CI/CD pipeline for OpenStack projects like
> Oslo (DB) or Trove, Tooz DLM, and perhaps for any integration project
> which relies on distributed systems. Although all tests are yet to be
> finished, the results are quite visible, so I'd better share early for
> review, discussion and comments.
>
> I have similar tests done for the RabbitMQ OCF RA clusters as well,
> although I have yet to write a report.
>
> PS. I'm sorry for so many tags I placed in the topic header, should I've
> used just "all" :) ? Have a nice weekends and take care!
>
> [0] https://goo.gl/VHyIIE
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Fuel 9.0 is released

2016-04-27 Thread Roman Prykhodchenko
Jeremy,

Thanks for checking! That is probably missing from the release checklist.

- romcheg
> On 26 Apr 2016, at 17:35, Jeremy Stanley wrote:
> 
> On 2016-04-26 17:29:40 +0200 (+0200), Roman Prykhodchenko wrote:
>> I still don’t see python-fuelclient-9.0.0 on PyPi: 
>> https://pypi.python.org/pypi/python-fuelclient 
>> 
>> 
>> Shouldn’t someone investigate this?
> 
> It hasn't been tagged yet as far as I can tell (no 9.0.0 in the git
> repo for openstack/python-fuelclient).
> --
> Jeremy Stanley
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Fuel 9.0 is released

2016-04-26 Thread Jeremy Stanley
On 2016-04-26 17:29:40 +0200 (+0200), Roman Prykhodchenko wrote:
> I still don’t see python-fuelclient-9.0.0 on PyPi: 
> https://pypi.python.org/pypi/python-fuelclient 
> 
> 
> Shouldn’t someone investigate this?

It hasn't been tagged yet as far as I can tell (no 9.0.0 in the git
repo for openstack/python-fuelclient).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Fuel 9.0 is released

2016-04-26 Thread Roman Prykhodchenko
I still don’t see python-fuelclient-9.0.0 on PyPi: 
https://pypi.python.org/pypi/python-fuelclient 


Shouldn’t someone investigate this?

> On 25 Apr 2016, at 18:33, Daniele Pizzolli wrote:
> 
> On Mon, Apr 25 2016, you wrote:
> 
>> Can we support an alternative way to download the ISO, since p2p may be
>> blocked by some company IT policies?
> 
> Hello,
> 
> It is supported... but not advertised. I am not sure if this is on
> purpose (maybe because over http there is no additional checksum; see
> later for offline verification). For example, to download:
> 
> fuel-community-9.0.iso.torrent
> 
> you can use http:
> 
> http://seed-cz1.fuel-infra.org/fuelweb-community-release/fuel-community-9.0.iso
> http://seed-us1.fuel-infra.org/fuelweb-community-release/fuel-community-9.0.iso
> 
> You can get the links by yourself, by getting the torrent and for
> example using:
> 
> set -- fuel-community-9.0.iso.torrent
> aria2c --show-files -- "$1" \
>| awk '/^[a-zA-Z]/{p=0};/^URL List:/{q=1};/^ http/{p=1};p&&q{print $1}'
> 
> Sorry for the heavy awk usage... I do not know a simple way to print
> them!  Maybe some bittorrent client has an option for that.
> 
> If you are able to get the torrent file, you can also verify the
> checksum off line, for example by using btcheck.
> 
>> 
>> Thanks,
> 
> Best,
> Daniele
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Newton Design Summit sessions planning

2016-04-25 Thread Vladimir Kozhukalov
Done.

Wed 11:00-11:40 Finalizing of HA reference architecture with event-based
control and fencing
Thu 11:00-11:40 Fuel UI Modularization approaches discussion

Vladimir Kozhukalov

On Mon, Apr 25, 2016 at 10:14 PM, Vladimir Kuklin 
wrote:

> Fuelers
>
> I am OK with the proposed change
On 21 Apr 2016 at 12:34, "Vitaly Kramskikh" <vkramsk...@mirantis.com> wrote:
>
> Folks,
>>
>> I'd like to request workroom sessions swap.
>>
>> I planned to lead a discussion of Fuel UI modularization on Wed
>> 11.00-11.40, but at the same time there will be discussion of handling JS
>> dependencies of Horizon which I'd really like to attend.
>>
>> So I request to swap my discussion with discussion of finalizing of HA
>> reference architecture with event-based control and fencing led by V.
>> Kuklin on Thu 11.00-11.40.
>>
>> Do you have any objections?
>>
>> 2016-04-14 17:55 GMT+03:00 Alexey Shtokolov :
>>
>>> Hi, +1 from my side.
>>>
>>> ---
>>> WBR, Alexey Shtokolov
>>>
>>> 2016-04-14 16:47 GMT+03:00 Evgeniy L :
>>>
 Hi, no problem from my side.

 On Thu, Apr 14, 2016 at 10:53 AM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> I'd like to request workrooms sessions swap.
>
> We have a session about Fuel/Ironic integration and I'd like
> this session not to overlap with Ironic sessions, so Ironic
> team could attend Fuel sessions. At the same time, we have
> a session about orchestration engine and it would be great to
> invite there people from Mistral and Heat.
>
> My suggestion is as follows:
>
> Wed:
> 9:50 Astute -> Mistral/Heat/???
> Thu:
> 9.00 Fuel/Ironic/Ironic-inspector
>
> If there are any objections, please let me know asap.
>
>
>
> Vladimir Kozhukalov
>
> On Fri, Apr 1, 2016 at 9:47 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> Looks like we have final version sessions layout [1]
>> for Austin design summit. We have 3 fishbows,
>> 11 workrooms, full day meetup.
>>
>> Here you can find some useful information about design
>> summit [2]. All session leads must read this page,
>> be prepared for their sessions (agenda, slides if needed,
>> etherpads for collaborative work, etc.) and follow
>> the recommendations given in "At the Design Summit" section.
>>
>> Here is Fuel session planning etherpad [3]. Almost all suggested
>> topics have been put there. Please put links to slide decks
>> and etherpads next to respective sessions. Here is the
>> page [4] where other teams publish their planning pads.
>>
>> If session leads want for some reason to swap their slots it must
>> be requested in this ML thread. If for some reason session lead
>> can not lead his/her session, it must be announced in this ML thread.
>>
>> Fuel sessions are:
>> ===
>> Fishbowls:
>> ===
>> Wed:
>> 15:30-16:10
>> 16:30:17:10
>> 17:20-18:00
>>
>> ===
>> Workrooms:
>> ===
>> Wed:
>> 9:00-9:40
>> 9:50-10:30
>> 11:00-11:40
>> 11:50-12:30
>> 13:50-14:30
>> 14:40-15:20
>> Thu:
>> 9:00-9:40
>> 9:50-10:30
>> 11:00-11:40
>> 11:50-12:30
>> 13:30-14:10
>>
>> ===
>> Meetup:
>> ===
>> Fri:
>> 9:00-12:30
>> 14:00-17:30
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/attachments/20160331/d59d38b7/attachment.pdf
>> [2] https://wiki.openstack.org/wiki/Design_Summit
>> [3] https://etherpad.openstack.org/p/fuel-newton-summit-planning
>> [4] https://wiki.openstack.org/wiki/Design_Summit/Planning
>>
>> Thanks.
>>
>> Vladimir Kozhukalov
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Vitaly Kramskikh,
>> Fuel UI Tech Lead,
>> Mirantis, Inc.
>>
>

Re: [openstack-dev] [Fuel] Newton Design Summit sessions planning

2016-04-25 Thread Vladimir Kuklin
Fuelers

I am OK with the proposed change
On 21 Apr 2016 at 12:34, "Vitaly Kramskikh" <vkramsk...@mirantis.com> wrote:

> Folks,
>
> I'd like to request workroom sessions swap.
>
> I planned to lead a discussion of Fuel UI modularization on Wed
> 11.00-11.40, but at the same time there will be discussion of handling JS
> dependencies of Horizon which I'd really like to attend.
>
> So I request to swap my discussion with discussion of finalizing of HA
> reference architecture with event-based control and fencing led by V.
> Kuklin on Thu 11.00-11.40.
>
> Do you have any objections?
>
> 2016-04-14 17:55 GMT+03:00 Alexey Shtokolov :
>
>> Hi, +1 from my side.
>>
>> ---
>> WBR, Alexey Shtokolov
>>
>> 2016-04-14 16:47 GMT+03:00 Evgeniy L :
>>
>>> Hi, no problem from my side.
>>>
>>> On Thu, Apr 14, 2016 at 10:53 AM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Dear colleagues,

 I'd like to request workrooms sessions swap.

 We have a session about Fuel/Ironic integration and I'd like
 this session not to overlap with Ironic sessions, so Ironic
 team could attend Fuel sessions. At the same time, we have
 a session about orchestration engine and it would be great to
 invite there people from Mistral and Heat.

 My suggestion is as follows:

 Wed:
 9:50 Astute -> Mistral/Heat/???
 Thu:
 9.00 Fuel/Ironic/Ironic-inspector

 If there are any objections, please let me know asap.



 Vladimir Kozhukalov

 On Fri, Apr 1, 2016 at 9:47 PM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> Looks like we have final version sessions layout [1]
> for Austin design summit. We have 3 fishbows,
> 11 workrooms, full day meetup.
>
> Here you can find some useful information about design
> summit [2]. All session leads must read this page,
> be prepared for their sessions (agenda, slides if needed,
> etherpads for collaborative work, etc.) and follow
> the recommendations given in "At the Design Summit" section.
>
> Here is Fuel session planning etherpad [3]. Almost all suggested
> topics have been put there. Please put links to slide decks
> and etherpads next to respective sessions. Here is the
> page [4] where other teams publish their planning pads.
>
> If session leads want for some reason to swap their slots it must
> be requested in this ML thread. If for some reason session lead
> can not lead his/her session, it must be announced in this ML thread.
>
> Fuel sessions are:
> ===
> Fishbowls:
> ===
> Wed:
> 15:30-16:10
> 16:30:17:10
> 17:20-18:00
>
> ===
> Workrooms:
> ===
> Wed:
> 9:00-9:40
> 9:50-10:30
> 11:00-11:40
> 11:50-12:30
> 13:50-14:30
> 14:40-15:20
> Thu:
> 9:00-9:40
> 9:50-10:30
> 11:00-11:40
> 11:50-12:30
> 13:30-14:10
>
> ===
> Meetup:
> ===
> Fri:
> 9:00-12:30
> 14:00-17:30
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20160331/d59d38b7/attachment.pdf
> [2] https://wiki.openstack.org/wiki/Design_Summit
> [3] https://etherpad.openstack.org/p/fuel-newton-summit-planning
> [4] https://wiki.openstack.org/wiki/Design_Summit/Planning
>
> Thanks.
>
> Vladimir Kozhukalov
>



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Vitaly Kramskikh,
> Fuel UI Tech Lead,
> Mirantis, Inc.
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Fuel 9.0 is released

2016-04-25 Thread Daniele Pizzolli
On Mon, Apr 25 2016, you wrote:

> Can we support an alternative way to download the ISO, since p2p may be
> blocked by some company IT policies?

Hello,

It is supported, but not advertised. I am not sure whether this is on
purpose (maybe because over HTTP there is no additional checksum; see
below for offline verification). For example, instead of downloading:

fuel-community-9.0.iso.torrent 

you can use HTTP:

http://seed-cz1.fuel-infra.org/fuelweb-community-release/fuel-community-9.0.iso
http://seed-us1.fuel-infra.org/fuelweb-community-release/fuel-community-9.0.iso

You can extract the links yourself from the torrent file, for
example using:

set -- fuel-community-9.0.iso.torrent
aria2c --show-files -- "$1" \
| awk '/^[a-zA-Z]/{p=0};/^URL List:/{q=1};/^ http/{p=1};p&&q{print $1}'

Sorry for the heavy awk usage... I do not know a simple way to print
them!  Maybe some bittorrent client has an option for that.

If you are able to get the torrent file, you can also verify the
checksum off line, for example by using btcheck.
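If no bittorrent client is at hand, the HTTP mirror links can also be pulled straight out of the .torrent file with a few lines of Python. This is only a sketch (stdlib only, with a hand-rolled bencode decoder), and it assumes the torrent stores its web seeds under the standard `url-list` key:

```python
def bdecode(data, i=0):
    """Minimal bencode decoder; returns (value, index_after_value)."""
    c = data[i:i + 1]
    if c == b"i":                       # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":                       # list: l<items>e
        i, items = i + 1, []
        while data[i:i + 1] != b"e":
            value, i = bdecode(data, i)
            items.append(value)
        return items, i + 1
    if c == b"d":                       # dict: d<key><value>...e
        i, table = i + 1, {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            value, i = bdecode(data, i)
            table[key] = value
        return table, i + 1
    colon = data.index(b":", i)         # byte string: <length>:<bytes>
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length


def webseed_urls(torrent_bytes):
    """Return the HTTP mirror URLs stored in the torrent's url-list key."""
    meta, _ = bdecode(torrent_bytes)
    urls = meta.get(b"url-list", [])
    if isinstance(urls, bytes):         # a lone URL may be a bare string
        urls = [urls]
    return [u.decode() for u in urls]

# Usage (file name from the thread):
# webseed_urls(open("fuel-community-9.0.iso.torrent", "rb").read())
```

This avoids parsing aria2c's screen output at all; the same decoder also gives access to the `pieces` hashes if you want to verify the ISO offline without btcheck.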

>
> Thanks,

Best,
Daniele

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][plugins] Changing role regex from '*' to ['/.*/'] breaks MOS compatibility

2016-04-25 Thread Ilya Kutukov
You are welcome!

On Sat, Apr 23, 2016 at 11:00 AM, Guillaume Thouvenin 
wrote:

> Yes this patch fixes the issue.
> Thanks Ilya.
>
> On Fri, Apr 22, 2016 at 4:53 PM, Ilya Kutukov 
> wrote:
>
>> Hello!
>>
>> I think your problem is related to the:
>> https://bugs.launchpad.net/fuel/+bug/1570846
>>
>> The fix to stable/mitaka was committed on 20/04/2016:
>> https://review.openstack.org/#/c/307658/
>>
>> Could you please try applying this patch and reply whether it helps or not?
>>
>> On Fri, Apr 22, 2016 at 5:40 PM, Guillaume Thouvenin 
>> wrote:
>>
>>> Hello,
>>>
>>> deployment_tasks.yaml for the fuel-plugin-lma-collector plugin has this
>>> task definition:
>>>
>>> - id: lma-aggregator
>>>   type: puppet
>>>   version: 2.0.0
>>>   requires: [lma-base]
>>>   required_for: [post_deployment_end]
>>>   role: '*'
>>>   parameters:
>>> puppet_manifest: puppet/manifests/aggregator.pp
>>> puppet_modules: puppet/modules:/etc/puppet/modules
>>> timeout: 600
>>>
>>> It works well with MOS 8. Unfortunately it doesn't work anymore with MOS
>>> 9: the task doesn't appear in the deployment graph. The regression seems to
>>> be introduced by the computable-task-fields-yaql feature [1].
>>>
>>> We could use "roles: ['/.*/']" instead of "role: '*' " but then the task
>>> is skipped when using MOS 8. We also tried to declare both "roles" and
>>> "role" but again this doesn't work.
>>>
>>> How can we ensure that the same version of the plugin can be deployed on
>>> both versions of MOS? Obviously maintaining one Git branch per MOS release
>>> is not an option.
>>>
>>> [1] https://review.openstack.org/#/c/296414/
>>>
>>> Regards,
>>> Guillaume
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Fuel 9.0 is released

2016-04-25 Thread Guo, Ruijing
Can we support an alternative way to download the ISO, since p2p may be
blocked by some company IT policies?

Thanks,
-Ruijing

From: Vladimir Kozhukalov [mailto:vkozhuka...@mirantis.com]
Sent: Thursday, April 21, 2016 10:52 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [Fuel] Fuel 9.0 is released


Dear all,

I am glad to announce the Mitaka release of Fuel (a.k.a. Fuel 9.0), a
deployment and lifecycle management tool for OpenStack.

This release introduces support for OpenStack Mitaka and adds a number of
new features and enhancements.

Some highlights:
- Support for lifecycle management operations (a.k.a. 'day 2' operations):
the cluster settings tab in the UI is now unlocked after deployment,
so the cluster configuration can be changed. [1]
- Support for custom deployment graphs: the default deployment graph
can be overridden either by plugins or by a user. [2]
- Support for DPDK capabilities [3]
- Support for Huge Pages capabilities [4]
- Support for CPU pinning (NUMA) capabilities [5]
- Support for QoS capabilities [6]
- Support for SR-IOV capabilities [7]
- Support for multipath devices [8]
- Support for deployment using UCA packages [9]

Please be aware that it is not intended for production use: there are
still about 90 known High-priority bugs [10]. We are planning to address
them all in the Fuel 9.0.1 release, which is scheduled for late June [11].

We are looking forward to your feedback.
Great work, Fuel team. Thanks to everyone.


[1] 
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/unlock-settings-tab.rst

[2] 
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/execute-custom-graph.rst

[3] 
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/support-dpdk.rst
[4] 
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/support-hugepages.rst
[5] 
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/support-numa-cpu-pinning.rst
[6] 
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/support-qos.rst
[7] 
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/support-sriov.rst
[8] 
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/fc-multipath-disks.rst
[9] https://blueprints.launchpad.net/fuel/+spec/deploy-with-uca-packages

[10] https://goo.gl/qXfrhQ

[11] https://wiki.openstack.org/wiki/Fuel/9.0_Release_Schedule


Learn more about Fuel:
https://wiki.openstack.org/wiki/Fuel

How we work:
https://wiki.openstack.org/wiki/Fuel/How_to_contribute

Specs for features in 9.0 and other Fuel releases:
http://specs.openstack.org/openstack/fuel-specs/

ISO image:
http://seed.fuel-infra.org/fuelweb-community-release/fuel-community-9.0.iso.torrent

Test results of the release build:
https://ci.fuel-infra.org/job/9.0-community.test_all/61/

Documentation:
http://docs.openstack.org/developer/fuel-docs/


RPM packages:
http://mirror.fuel-infra.org/mos-repos/centos/mos9.0-centos7/

DEB packages:
http://mirror.fuel-infra.org/mos-repos/ubuntu/9.0/
Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][plugins] Changing role regex from '*' to ['/.*/'] breaks MOS compatibility

2016-04-23 Thread Guillaume Thouvenin
Yes this patch fixes the issue.
Thanks Ilya.

On Fri, Apr 22, 2016 at 4:53 PM, Ilya Kutukov  wrote:

> Hello!
>
> I think your problem is related to the:
> https://bugs.launchpad.net/fuel/+bug/1570846
>
> The fix to stable/mitaka was committed on 20/04/2016:
> https://review.openstack.org/#/c/307658/
>
> Could you please try applying this patch and reply whether it helps or not?
>
> On Fri, Apr 22, 2016 at 5:40 PM, Guillaume Thouvenin 
> wrote:
>
>> Hello,
>>
>> deployment_tasks.yaml for the fuel-plugin-lma-collector plugin has this
>> task definition:
>>
>> - id: lma-aggregator
>>   type: puppet
>>   version: 2.0.0
>>   requires: [lma-base]
>>   required_for: [post_deployment_end]
>>   role: '*'
>>   parameters:
>> puppet_manifest: puppet/manifests/aggregator.pp
>> puppet_modules: puppet/modules:/etc/puppet/modules
>> timeout: 600
>>
>> It works well with MOS 8. Unfortunately it doesn't work anymore with MOS
>> 9: the task doesn't appear in the deployment graph. The regression seems to
>> be introduced by the computable-task-fields-yaql feature [1].
>>
>> We could use "roles: ['/.*/']" instead of "role: '*' " but then the task
>> is skipped when using MOS 8. We also tried to declare both "roles" and
>> "role" but again this doesn't work.
>>
>> How can we ensure that the same version of the plugin can be deployed on
>> both versions of MOS? Obviously maintaining one Git branch per MOS release
>> is not an option.
>>
>> [1] https://review.openstack.org/#/c/296414/
>>
>> Regards,
>> Guillaume
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Ci] Re-trigger by keyword in comment

2016-04-22 Thread Aleksey Kasatkin
Matthew,

It's great that we have this test, but why rerun it when nothing has changed?
It used to be executed only when a new patch set was published for a CR or
the CR was rebased; now it also runs on "fuel: recheck".

Agree, adding a plugin would be helpful.



Aleksey Kasatkin


On Fri, Apr 22, 2016 at 4:19 PM, Matthew Mosesohn 
wrote:

> Aleksey, actually I want to extend the test group we run there. Many
> changes coming out of nailgun are actually creating BVT failures that
> can only be prevented by such tests. One such extension would be
> adding a plugin to the deployment to ensure that basic plugins are
> still deployable.
>
> I'm ok with tweaking recheck flags, but we should not try to avoid
> using the CI that saves us from regressions.
>
> On Fri, Apr 22, 2016 at 3:43 PM, Aleksey Kasatkin
>  wrote:
> > Hi Dmitry,
> >
> > Thank you for the update.
> > Is it intended that the master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
> > job for change requests to fuel-web now runs at every recheck?
> > Before the change it was executed only for a new patch set or a rebase.
> > Its run takes about 1.5 hours, and there is little sense in running it
> > more than once for the same patch.
> >
> > Thanks,
> >
> >
> >
> > Aleksey Kasatkin
> >
> >
> > On Fri, Apr 22, 2016 at 10:59 AM, Dmitry Kaiharodsev
> >  wrote:
> >>
> >> Hi to all,
> >>
> >> please be informed that we recently merged a patch[0]
> >> that allows re-triggering fuel-ci[1] tests by commenting on a review
> >> with the keyword "fuel: recheck"[2].
> >>
> >> The current list of Jenkins jobs that can be re-triggered with the
> >> "fuel: recheck"[2] keyword is:
> >>
> >> 7.0.verify-python-fuelclient
> >> 8.0.fuel-library.pkgs.ubuntu.neutron_vlan_ha
> >> 8.0.fuel-library.pkgs.ubuntu.smoke_neutron
> >> 8.0.verify-docker-fuel-web-ui
> >> 8.0.verify-fuel-web
> >> 8.0.verify-fuel-web-ui
> >> fuellib_noop_tests
> >> master.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
> >> master.fuel-astute.pkgs.ubuntu.review_astute_patched
> >> master.fuel-library.pkgs.ubuntu.neutron_vlan_ha
> >> master.fuel-library.pkgs.ubuntu.smoke_neutron
> >> master.fuel-ostf.pkgs.ubuntu.gate_ostf_update
> >> master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
> >> master.python-fuelclient.pkgs.ubuntu.review_fuel_client
> >> mitaka.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
> >> mitaka.fuel-astute.pkgs.ubuntu.review_astute_patched
> >> mitaka.fuel-library.pkgs.ubuntu.neutron_vlan_ha
> >> mitaka.fuel-library.pkgs.ubuntu.smoke_neutron
> >> mitaka.fuel-ostf.pkgs.ubuntu.gate_ostf_update
> >> mitaka.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
> >> mitaka.python-fuelclient.pkgs.ubuntu.review_fuel_client
> >> old.verify-nailgun_performance_tests
> >> verify-fuel-astute
> >> verify-fuel-devops
> >> verify-fuel-docs
> >> verify-fuel-library-bats-tests
> >> verify-fuel-library-puppetfile
> >> verify-fuel-library-python
> >> verify-fuel-library-tasks
> >> verify-fuel-nailgun-agent
> >> verify-fuel-plugins
> >> verify-fuel-qa-docs
> >> verify-fuel-stats
> >> verify-fuel-ui-on-fuel-web
> >> verify-fuel-web-docs
> >> verify-fuel-web-on-fuel-ui
> >> verify-nailgun_performance_tests
> >> verify-puppet-modules.lint
> >> verify-puppet-modules.syntax
> >> verify-puppet-modules.unit
> >> verify-python-fuelclient
> >> verify-python-fuelclient-on-fuel-web
> >> verify-sandbox
> >>
> >>
> >> [0] https://review.fuel-infra.org/#/c/17916/
> >> [1] https://ci.fuel-infra.org/
> >> [2] without quotes
> >> --
> >> Kind Regards,
> >> Dmitry Kaigarodtsev
> >> Mirantis, Inc.
> >>
> >> +38 (093) 522-09-79 (mobile)
> >> +38 (057) 728-4214 (office)
> >> Skype: d1mas85
> >>
> >> 38, Lenin avenue
> >> Kharkov, Ukraine
> >> www.mirantis.com
> >> www.mirantis.ru
> >> dkaiharod...@mirantis.com
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][plugins] Changing role regex from '*' to ['/.*/'] breaks MOS compatibility

2016-04-22 Thread Simon Pasquier
Thanks Ilya! We're testing and will be reporting back on monday.
Simon

On Fri, Apr 22, 2016 at 4:53 PM, Ilya Kutukov  wrote:

> Hello!
>
> I think your problem is related to the:
> https://bugs.launchpad.net/fuel/+bug/1570846
>
> The fix to stable/mitaka was committed on 20/04/2016:
> https://review.openstack.org/#/c/307658/
>
> Could you please try applying this patch and reply whether it helps or not?
>
> On Fri, Apr 22, 2016 at 5:40 PM, Guillaume Thouvenin 
> wrote:
>
>> Hello,
>>
>> deployment_tasks.yaml for the fuel-plugin-lma-collector plugin has this
>> task definition:
>>
>> - id: lma-aggregator
>>   type: puppet
>>   version: 2.0.0
>>   requires: [lma-base]
>>   required_for: [post_deployment_end]
>>   role: '*'
>>   parameters:
>> puppet_manifest: puppet/manifests/aggregator.pp
>> puppet_modules: puppet/modules:/etc/puppet/modules
>> timeout: 600
>>
>> It works well with MOS 8. Unfortunately it doesn't work anymore with MOS
>> 9: the task doesn't appear in the deployment graph. The regression seems to
>> be introduced by the computable-task-fields-yaql feature [1].
>>
>> We could use "roles: ['/.*/']" instead of "role: '*' " but then the task
>> is skipped when using MOS 8. We also tried to declare both "roles" and
>> "role" but again this doesn't work.
>>
>> How can we ensure that the same version of the plugin can be deployed on
>> both versions of MOS? Obviously maintaining one Git branch per MOS release
>> is not an option.
>>
>> [1] https://review.openstack.org/#/c/296414/
>>
>> Regards,
>> Guillaume
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][plugins] Changing role regex from '*' to ['/.*/'] breaks MOS compatibility

2016-04-22 Thread Ilya Kutukov
Hello!

I think your problem is related to the:
https://bugs.launchpad.net/fuel/+bug/1570846

The fix to stable/mitaka was committed on 20/04/2016:
https://review.openstack.org/#/c/307658/

Could you please try applying this patch and reply whether it helps or not?
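If applying the patch is not an option on every target environment, another workaround for keeping one plugin source compatible with both releases is to generate deployment_tasks.yaml per MOS version at plugin build time. A minimal sketch (the helper below is hypothetical, not part of fuel-plugin-builder, and assumes the wildcard is written exactly as role: '*' in the source tree):

```python
def adapt_wildcard_roles(yaml_text, mos_major):
    """Rewrite the wildcard role syntax for the target MOS release.

    MOS 8 understands  role: '*'  while MOS 9 expects  roles: ['/.*/'];
    generating the file per release avoids keeping one git branch
    per MOS version.
    """
    old, new = "role: '*'", "roles: ['/.*/']"
    if mos_major >= 9:
        return yaml_text.replace(old, new)
    return yaml_text.replace(new, old)


task_src = """- id: lma-aggregator
  type: puppet
  role: '*'
"""
print(adapt_wildcard_roles(task_src, 9))
```

The rewrite would run once per target release while assembling the plugin RPM, so the shipped artifact always carries the syntax its MOS version understands.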

On Fri, Apr 22, 2016 at 5:40 PM, Guillaume Thouvenin 
wrote:

> Hello,
>
> deployment_tasks.yaml for the fuel-plugin-lma-collector plugin has this
> task definition:
>
> - id: lma-aggregator
>   type: puppet
>   version: 2.0.0
>   requires: [lma-base]
>   required_for: [post_deployment_end]
>   role: '*'
>   parameters:
> puppet_manifest: puppet/manifests/aggregator.pp
> puppet_modules: puppet/modules:/etc/puppet/modules
> timeout: 600
>
> It works well with MOS 8. Unfortunately it doesn't work anymore with MOS
> 9: the task doesn't appear in the deployment graph. The regression seems to
> be introduced by the computable-task-fields-yaql feature [1].
>
> We could use "roles: ['/.*/']" instead of "role: '*' " but then the task
> is skipped when using MOS 8. We also tried to declare both "roles" and
> "role" but again this doesn't work.
>
> How can we ensure that the same version of the plugin can be deployed on
> both versions of MOS? Obviously maintaining one Git branch per MOS release
> is not an option.
>
> [1] https://review.openstack.org/#/c/296414/
>
> Regards,
> Guillaume
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Ci] Re-trigger by keyword in comment

2016-04-22 Thread Matthew Mosesohn
Aleksey, actually I want to extend the test group we run there. Many
changes coming out of nailgun are actually creating BVT failures that
can only be prevented by such tests. One such extension would be
adding a plugin to the deployment to ensure that basic plugins are
still deployable.

I'm ok with tweaking recheck flags, but we should not try to avoid
using the CI that saves us from regressions.

On Fri, Apr 22, 2016 at 3:43 PM, Aleksey Kasatkin
 wrote:
> Hi Dmitry,
>
> Thank you for the update.
> Is it intended that the master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy job
> for change requests to fuel-web now runs at every recheck?
> Before the change it was executed only for a new patch set or a rebase.
> Its run takes about 1.5 hours, and there is little sense in running it more
> than once for the same patch.
>
> Thanks,
>
>
>
> Aleksey Kasatkin
>
>
> On Fri, Apr 22, 2016 at 10:59 AM, Dmitry Kaiharodsev
>  wrote:
>>
>> Hi to all,
>>
>> please be informed that we recently merged a patch[0]
>> that allows re-triggering fuel-ci[1] tests by commenting on a review
>> with the keyword "fuel: recheck"[2].
>>
>> The current list of Jenkins jobs that can be re-triggered with the
>> "fuel: recheck"[2] keyword is:
>>
>> 7.0.verify-python-fuelclient
>> 8.0.fuel-library.pkgs.ubuntu.neutron_vlan_ha
>> 8.0.fuel-library.pkgs.ubuntu.smoke_neutron
>> 8.0.verify-docker-fuel-web-ui
>> 8.0.verify-fuel-web
>> 8.0.verify-fuel-web-ui
>> fuellib_noop_tests
>> master.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
>> master.fuel-astute.pkgs.ubuntu.review_astute_patched
>> master.fuel-library.pkgs.ubuntu.neutron_vlan_ha
>> master.fuel-library.pkgs.ubuntu.smoke_neutron
>> master.fuel-ostf.pkgs.ubuntu.gate_ostf_update
>> master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
>> master.python-fuelclient.pkgs.ubuntu.review_fuel_client
>> mitaka.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
>> mitaka.fuel-astute.pkgs.ubuntu.review_astute_patched
>> mitaka.fuel-library.pkgs.ubuntu.neutron_vlan_ha
>> mitaka.fuel-library.pkgs.ubuntu.smoke_neutron
>> mitaka.fuel-ostf.pkgs.ubuntu.gate_ostf_update
>> mitaka.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
>> mitaka.python-fuelclient.pkgs.ubuntu.review_fuel_client
>> old.verify-nailgun_performance_tests
>> verify-fuel-astute
>> verify-fuel-devops
>> verify-fuel-docs
>> verify-fuel-library-bats-tests
>> verify-fuel-library-puppetfile
>> verify-fuel-library-python
>> verify-fuel-library-tasks
>> verify-fuel-nailgun-agent
>> verify-fuel-plugins
>> verify-fuel-qa-docs
>> verify-fuel-stats
>> verify-fuel-ui-on-fuel-web
>> verify-fuel-web-docs
>> verify-fuel-web-on-fuel-ui
>> verify-nailgun_performance_tests
>> verify-puppet-modules.lint
>> verify-puppet-modules.syntax
>> verify-puppet-modules.unit
>> verify-python-fuelclient
>> verify-python-fuelclient-on-fuel-web
>> verify-sandbox
>>
>>
>> [0] https://review.fuel-infra.org/#/c/17916/
>> [1] https://ci.fuel-infra.org/
>> [2] without quotes
>> --
>> Kind Regards,
>> Dmitry Kaigarodtsev
>> Mirantis, Inc.
>>
>> +38 (093) 522-09-79 (mobile)
>> +38 (057) 728-4214 (office)
>> Skype: d1mas85
>>
>> 38, Lenin avenue
>> Kharkov, Ukraine
>> www.mirantis.com
>> www.mirantis.ru
>> dkaiharod...@mirantis.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Ci] Re-trigger by keyword in comment

2016-04-22 Thread Aleksey Kasatkin
Hi Dmitry,

Thank you for the update.
Is it intended that the master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
job for change requests to fuel-web now runs at every recheck?
Before the change it was executed only for a new patch set or a rebase.
Its run takes about 1.5 hours, and there is little sense in running it more
than once for the same patch.

Thanks,



Aleksey Kasatkin


On Fri, Apr 22, 2016 at 10:59 AM, Dmitry Kaiharodsev <
dkaiharod...@mirantis.com> wrote:

> Hi to all,
>
> please be informed that we recently merged a patch[0]
> that allows re-triggering fuel-ci[1] tests by commenting on a review
> with the keyword "fuel: recheck"[2].
>
> The current list of Jenkins jobs that can be re-triggered with the
> "fuel: recheck"[2] keyword is:
>
> 7.0.verify-python-fuelclient
> 8.0.fuel-library.pkgs.ubuntu.neutron_vlan_ha
> 8.0.fuel-library.pkgs.ubuntu.smoke_neutron
> 8.0.verify-docker-fuel-web-ui
> 8.0.verify-fuel-web
> 8.0.verify-fuel-web-ui
> fuellib_noop_tests
> master.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
> master.fuel-astute.pkgs.ubuntu.review_astute_patched
> master.fuel-library.pkgs.ubuntu.neutron_vlan_ha
> master.fuel-library.pkgs.ubuntu.smoke_neutron
> master.fuel-ostf.pkgs.ubuntu.gate_ostf_update
> master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
> master.python-fuelclient.pkgs.ubuntu.review_fuel_client
> mitaka.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
> mitaka.fuel-astute.pkgs.ubuntu.review_astute_patched
> mitaka.fuel-library.pkgs.ubuntu.neutron_vlan_ha
> mitaka.fuel-library.pkgs.ubuntu.smoke_neutron
> mitaka.fuel-ostf.pkgs.ubuntu.gate_ostf_update
> mitaka.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
> mitaka.python-fuelclient.pkgs.ubuntu.review_fuel_client
> old.verify-nailgun_performance_tests
> verify-fuel-astute
> verify-fuel-devops
> verify-fuel-docs
> verify-fuel-library-bats-tests
> verify-fuel-library-puppetfile
> verify-fuel-library-python
> verify-fuel-library-tasks
> verify-fuel-nailgun-agent
> verify-fuel-plugins
> verify-fuel-qa-docs
> verify-fuel-stats
> verify-fuel-ui-on-fuel-web
> verify-fuel-web-docs
> verify-fuel-web-on-fuel-ui
> verify-nailgun_performance_tests
> verify-puppet-modules.lint
> verify-puppet-modules.syntax
> verify-puppet-modules.unit
> verify-python-fuelclient
> verify-python-fuelclient-on-fuel-web
> verify-sandbox
>
>
> [0] https://review.fuel-infra.org/#/c/17916/
> [1] https://ci.fuel-infra.org/
> [2] without quotes
> --
> Kind Regards,
> Dmitry Kaigarodtsev
> Mirantis, Inc.
>
> +38 (093) 522-09-79 (mobile)
> +38 (057) 728-4214 (office)
> Skype: d1mas85
>
> 38, Lenin avenue
> Kharkov, Ukraine
> www.mirantis.com
> www.mirantis.ru
> dkaiharod...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] snapshot tool

2016-04-21 Thread Dmitry Sutyagin
Team,

A "bicycle" will have to be present anyway, as code that interacts with
Ansible: as far as I understand, Ansible on its own cannot provide all the
functionality in one go, so a wrapper for it will be needed.

I think Alexander and I will look into converting Timmy into an
Ansible-based tool. One way to go would be to make Ansible a backend option
for Timmy (with ssh as the alternative).

I agree that the folder-driven structure is not easy to manipulate, but you
don't want to put all your scripts inside Ansible playbooks either; that
would also be a mess. Something in between would work well: a folder
structure for the available
scripts, and playbooks which link to them via -script: ,
generated statically (the default) or dynamically if need be.

Also, I imagine some functions might not be directly possible with Ansible,
such as parallel stdout delivery of binary data into separate files (Timmy
pulls logs compressed on the fly on the node side through ssh, to avoid
using unnecessary disk space on the env nodes and the local machine). So
again, for maximum efficiency and specific tasks a separate tool might be
required, apart from Ansible.
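The on-the-fly compression described here is essentially a tar-over-ssh pipe. A sketch follows; the node name and paths are placeholders, and `sh -c` stands in for the real `ssh $NODE` transport so the pipeline can be tried locally:

```shell
set -eu

# Real use (assumption: key-based ssh access to the node):
#   ssh "$NODE" 'tar czf - -C /var log' > "logs-$NODE.tgz"
# Nothing is staged on the node's disk: tar writes the compressed
# stream to stdout and ssh carries it straight into the local file.

workdir=$(mktemp -d)
mkdir -p "$workdir/log"
echo "sample log line" > "$workdir/log/messages"

# Local stand-in for the ssh hop:
sh -c "tar czf - -C '$workdir' log" > "$workdir/logs-node.tgz"

tar tzf "$workdir/logs-node.tgz"
```

Running several such pipes in parallel, one per node, gives the parallel, disk-friendly collection described above.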



On Wed, Apr 20, 2016 at 5:36 PM, Dmitriy Novakovskiy <
dnovakovs...@mirantis.com> wrote:

> There's a thread on openstack-dev, but
> - nobody replied there (I checked this morning)
> - I can't link PROD tickets there :)
>
>
> On Thursday, April 21, 2016, Mike Scherbakov 
> wrote:
>
>> Guys,
>> how did it turn into openstack-dev from mos-dev, without any tags and
>> original messages... ?
>>
>> Please be careful when replying... There is a different email thread
>> started in OpenStack dev, with [Fuel] in subject..
>>
>> On Wed, Apr 20, 2016 at 10:08 AM Dmitry Nikishov 
>> wrote:
>>
>>> Dmitry,
>>>
>>> I mean, currently shotgun fetches services' configuration along with
>>> astute.yaml. These files contain passwords, keys, and tokens. I believe
>>> these should be sanitized. Or, better yet, there should be an option to
>>> sanitize sensitive data in the fetched files.
>>>
>>>
>>> Aleksandr,
>>>
>>> Currently Fuel has a non-root service account with passwordless sudo
>>> enabled. This may change in the future (the passwordless part); for now,
>>> however, I don't see an issue there.
>>> Additionally, it is possible for users to configure sudo for the
>>> user-facing account however they like.
>>>
>>> In regard to having this tool use a non-root account, there are 2
>>> items:
>>> - execute commands that require elevated privileges (the easy part:
>>> the user has to be able to execute these commands with sudo and without
>>> a password)
>>> - copy files that this user doesn't have read privileges for.
>>>
>>> For the second item, there are 2 possible solutions:
>>> 1. Give the non-root user read privileges for these files.
>>> Pros:
>>> - More straightforward, generally acceptable way
>>> Cons:
>>> - Requires additional implementation to give permissions to the user
>>> - (?) Not very extensible: to allow copying a new file, we'd have to
>>> first add it to the tool's config, and somehow implement adding read
>>> permissions
>>>
>>> 2. Somehow allow to copy these files with sudo.
>>> Pros:
>>> - Simpler implementation: we'll just need to make sure that the user
>>> can do passwordless sudo
>>> - Extensible: to add more files, it's enough to just specify them in the
>>> tool's configuration.
>>> Cons:
>>> - Non-obvious, obscure way
>>> - Relies on being able to do something like "sudo cat
>>> /path/to/file", which is not much better than just giving the user read
>>> privileges. In fact, the only difference between this and giving the user
>>> read rights is that it is possible to allow "sudo cat" for files that
>>> don't yet exist, whereas giving permissions requires that these files
>>> already exist on the filesystem.
>>>
>>> What way do you think is more appropriate?
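For option 2, a sudoers fragment along these lines would allow the snapshot user to read specific files via sudo without granting a full root shell. This is only a sketch: the account name `fueldiag` and the listed paths are assumptions, not part of Fuel:

```
# /etc/sudoers.d/fueldiag -- hypothetical diagnostic account
# Allow passwordless 'sudo cat' only on the files the snapshot tool
# needs; paths may name files that do not exist yet.
Cmnd_Alias SNAPSHOT_READS = /bin/cat /etc/astute.yaml, \
                            /bin/cat /var/log/secure*
fueldiag ALL = (root) NOPASSWD: SNAPSHOT_READS
```

One caveat of this approach: sudo's wildcard matching applies to the whole argument list, so globs in Cmnd_Alias entries should be used sparingly to avoid accidentally allowing reads outside the intended set.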
>>>
>>>
>>> On Wed, Apr 20, 2016 at 5:28 AM, Aleksandr Dobdin 
>>> wrote:
>>>
 Dmitry,

You can create a non-root user account without root privileges, but you
need to add it to the appropriate groups and configure sudo permissions
(even if you add this user to the root group, it will fail with the
iptables command, for example) to fetch config files and launch the
requested commands. I suppose it is possible to note this in the
documentation and provide the customer with detailed instructions on how
to set up this user account. There are also some logs that will be missing
from the snapshot with a "permission denied" message (only the root user
has access to some files with a 0600 mask).
This user account could be specified in config.yaml (the ssh -> opts
option)

 Sincerely yours,
 Aleksandr Dobdin
 Senior Operations Engineer
 Mirantis
 ​Inc.​



 __

Re: [openstack-dev] [Fuel] Newton Design Summit sessions planning

2016-04-21 Thread Vitaly Kramskikh
Folks,

I'd like to request workroom sessions swap.

I planned to lead a discussion of Fuel UI modularization on Wed
11:00-11:40, but at the same time there will be a discussion of handling
Horizon's JS dependencies which I'd really like to attend.

So I request to swap my session with the discussion of finalizing the HA
reference architecture with event-based control and fencing, led by V.
Kuklin on Thu 11:00-11:40.

Do you have any objections?

2016-04-14 17:55 GMT+03:00 Alexey Shtokolov :

> Hi, +1 from my side.
>
> ---
> WBR, Alexey Shtokolov
>
> 2016-04-14 16:47 GMT+03:00 Evgeniy L :
>
>> Hi, no problem from my side.
>>
>> On Thu, Apr 14, 2016 at 10:53 AM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Dear colleagues,
>>>
>>> I'd like to request workrooms sessions swap.
>>>
>>> We have a session about Fuel/Ironic integration and I'd like
>>> this session not to overlap with Ironic sessions, so Ironic
>>> team could attend Fuel sessions. At the same time, we have
>>> a session about orchestration engine and it would be great to
>>> invite there people from Mistral and Heat.
>>>
>>> My suggestion is as follows:
>>>
>>> Wed:
>>> 9:50 Astute -> Mistral/Heat/???
>>> Thu:
>>> 9.00 Fuel/Ironic/Ironic-inspector
>>>
>>> If there are any objections, please let me know asap.
>>>
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Fri, Apr 1, 2016 at 9:47 PM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Dear colleagues,

 Looks like we have the final version of the sessions layout [1]
 for the Austin design summit. We have 3 fishbowls,
 11 workrooms, and a full-day meetup.

 Here you can find some useful information about design
 summit [2]. All session leads must read this page,
 be prepared for their sessions (agenda, slides if needed,
 etherpads for collaborative work, etc.) and follow
 the recommendations given in "At the Design Summit" section.

 Here is Fuel session planning etherpad [3]. Almost all suggested
 topics have been put there. Please put links to slide decks
 and etherpads next to respective sessions. Here is the
 page [4] where other teams publish their planning pads.

 If session leads want to swap their slots for some reason, it must
 be requested in this ML thread. If for some reason a session lead
 cannot lead his/her session, it must be announced in this ML thread.

 Fuel sessions are:
 ===
 Fishbowls:
 ===
 Wed:
 15:30-16:10
 16:30-17:10
 17:20-18:00

 ===
 Workrooms:
 ===
 Wed:
 9:00-9:40
 9:50-10:30
 11:00-11:40
 11:50-12:30
 13:50-14:30
 14:40-15:20
 Thu:
 9:00-9:40
 9:50-10:30
 11:00-11:40
 11:50-12:30
 13:30-14:10

 ===
 Meetup:
 ===
 Fri:
 9:00-12:30
 14:00-17:30

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/attachments/20160331/d59d38b7/attachment.pdf
 [2] https://wiki.openstack.org/wiki/Design_Summit
 [3] https://etherpad.openstack.org/p/fuel-newton-summit-planning
 [4] https://wiki.openstack.org/wiki/Design_Summit/Planning

 Thanks.

 Vladimir Kozhukalov

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] One plugin - one Launchpad project

2016-04-21 Thread Neil Jerram
On 19/04/16 16:52, Irina Povolotskaya wrote:
> Hi to everyone,
>
> as you possibly know (at least, those dev. teams working on their Fuel
> plugins) we have a fuel-plugins Launchpad project [1] which serves as
> all-in-one entry point for filing bugs, related
> to plugin-specific problems.
>
> nevertheless, this single project is a bad idea in terms of providing
> granularity and visibility for each plugin:
> - it's not possible to create milestones unique to every plugin that
> would coincide with the plugin's version (which is specified in the
> metadata.yaml file)
> - it's not possible to provide every dev. team with exclusive rights on
> managing importance, milestones etc.
>
> therefore, I would like to propose the following:
> - if you have your own fuel plugin, create a separate LP project for it,
> e.g. [2] [3], and set up all corresponding groups for managing the release
> cycle of your plugin
> - if you have some issues with fuel plugin framework itself, please
> consider filing bugs in fuel project [4] as usual.
>
> I would appreciate getting feedback on this idea.
> if it seems fine, then I'll follow up by adding instructions to our
> SDK [5] and a list of already existing LP projects.

I agree that it is better to have a project for each plugin.  For the 
Calico plugin, we actually already have this [1].

Thanks,
Neil

[1] https://launchpad.net/fuel-plugin-calico


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] snapshot tool

2016-04-20 Thread Dmitry Nikishov
Dmitry,

I mean, currently shotgun fetches services' configuration along with
astute.yaml. These files contain passwords, keys and tokens. I believe these
should be sanitized. Or, better yet, there should be an option to sanitize
sensitive data from fetched files.
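Such a sanitize option could be a simple post-processing pass over each fetched file before it goes into the snapshot. A minimal sketch of the idea — the key-name patterns and the placeholder below are illustrative assumptions, not Shotgun's actual configuration schema:

```python
import re

# Match lines whose option name suggests a secret in YAML/INI-style files.
# The key-name list is illustrative; a real tool would make it configurable.
SECRET_KEY_RE = re.compile(
    r'^(\s*[\w.-]*(password|token|secret|key)[\w.-]*\s*[:=]\s*).*$',
    re.IGNORECASE | re.MULTILINE)

def sanitize(text, placeholder='<SANITIZED>'):
    """Replace the value of any password/token/secret/key option,
    keeping the key itself so the config stays readable."""
    return SECRET_KEY_RE.sub(lambda m: m.group(1) + placeholder, text)

sample = "db_password: s3cret\nregion: RegionOne\nadmin_token = abc123\n"
print(sanitize(sample))
```

Only the values are redacted, so hostnames, IPs and the rest of the structure (which are needed for debugging) stay intact.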


Aleksandr,

Currently Fuel has a service non-root account with passwordless sudo
enabled. This may change in the future (the passwordless part), however,
now I don't see an issue there.
Additionally, it is possible for users to configure sudo for the
user-facing account however they like.

As for having this tool use a non-root account, there are two items:
- executing commands that require elevated privileges (the easy part -- the
user has to be able to run these commands with sudo and without a password)
- copying files that this user doesn't have read privileges for.

For the second item, there are 2 possible solutions:
1. Give the non-root user read privileges for these files.
Pros:
- More straightforward, generally acceptable way
Cons:
- Requires additional implementation to give permissions to the user
- (?) Not very extensible: to allow copying a new file, we'd have to first
add it to the tool's config, and somehow implement adding read permissions

2. Somehow allow copying these files with sudo.
Pros:
- Simpler implementation: we'll just need to make sure that the user
can do passwordless sudo
- Extensible: to add more files, it's enough to just specify them in the
tool's configuration.
Cons:
- Non-obvious, obscure way
- Relies on being able to do something like "sudo cat
/path/to/file", which is not much better than just giving the user read
privileges. In fact, the only difference between this and giving the user
read rights is that it is possible to allow "sudo cat" for files that
don't yet exist, whereas giving permissions requires that these files
already exist on the filesystem.

What way do you think is more appropriate?
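For illustration, option 2 amounts to a small wrapper around the privileged read. A rough local sketch of the idea (the function name is made up, and a real tool would run this over SSH on the target node; it assumes passwordless sudo is configured for `cat`):

```python
import subprocess

def fetch_file(path, use_sudo=False):
    """Read a file directly, or via 'sudo cat' for root-only files.

    'sudo -n' fails instead of prompting, so a missing passwordless-sudo
    rule surfaces as an explicit error rather than a hang.
    """
    cmd = ['sudo', '-n', 'cat', path] if use_sudo else ['cat', path]
    result = subprocess.run(cmd, capture_output=True)
    if result.returncode != 0:
        raise PermissionError('cannot read %s: %s'
                              % (path, result.stderr.decode().strip()))
    return result.stdout
```

Note that the sudoers rule backing `use_sudo=True` can name paths that do not exist yet, which is exactly the difference between the two options discussed above.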


On Wed, Apr 20, 2016 at 5:28 AM, Aleksandr Dobdin 
wrote:

> Dmitry,
>
> You can create a non-root user account without root privileges, but you
> need to add it to the appropriate groups and configure sudo permissions
> (even if you add this user to the root group, it will fail with the
> iptables command, for example) to get config files and launch the
> requested commands. I suppose it is possible to note this possibility in
> the documentation and provide a customer with detailed instructions on how
> to set up this user account. There are also some logs that will be missing
> from the snapshot with the message "permission denied" (only the root user
> has access to some files with 0600 mask).
> This user account could be specified in config.yaml (ssh -> opts option)
>
> Sincerely yours,
> Aleksandr Dobdin
> Senior Operations Engineer
> Mirantis
> ​Inc.​
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Dmitry Nikishov,
Deployment Engineer,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][plugins] VIP addresses and network templates

2016-04-20 Thread Simon Pasquier
Many thanks Alexey! That's exactly the information I needed.
Simon

On Wed, Apr 20, 2016 at 1:19 PM, Aleksey Kasatkin 
wrote:

> Hi Simon,
>
> When a network template is in use, the mapping of network roles to
> endpoints is specified in the template's "roles" section, so the
> "default_mapping" from the network role description is overridden by the
> network template.
> E.g.:
>
> network_assignments:
>   monitoring:
>     ep: br-mon
>   ...
>
> network_scheme:
>   custom:
>     roles:
>       influxdb_vip: br-mon
>     ...
>   ...
>
>
> I hope, this helps.
>
> Regards,
>
>
>
> Aleksey Kasatkin
>
>
> On Wed, Apr 20, 2016 at 12:16 PM, Simon Pasquier 
> wrote:
>
>> Hi,
>> I've got a question regarding network templates and VIP. Some of our
>> users want to run the StackLight services (eg Elasticsearch/Kibana and
>> InfluxDB/Grafana servers) on a dedicated network (lets call it
>> 'monitoring'). People use network templates [0] to provision this
>> additional network but how can Nailgun allocate the VIP address(es) from
>> this 'monitoring' network knowing that today the plugins specify the
>> 'management' network [1][2]?
>> Thanks for your help,
>> Simon
>> [0]
>> https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates
>> [1]
>> https://github.com/openstack/fuel-plugin-influxdb-grafana/blob/8976c4869ea5ec464e5d19b387c1a7309bed33f4/network_roles.yaml#L4
>> [2]
>> https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/25b79aff9a79d106fc74b33535952d28b0093afb/network_roles.yaml#L2
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][plugins] VIP addresses and network templates

2016-04-20 Thread Aleksey Kasatkin
Hi Simon,

When a network template is in use, the mapping of network roles to endpoints
is specified in the template's "roles" section, so the "default_mapping" from
the network role description is overridden by the network template.
E.g.:

network_assignments:
  monitoring:
    ep: br-mon
  ...

network_scheme:
  custom:
    roles:
      influxdb_vip: br-mon
    ...
  ...


I hope, this helps.

Regards,



Aleksey Kasatkin


On Wed, Apr 20, 2016 at 12:16 PM, Simon Pasquier 
wrote:

> Hi,
> I've got a question regarding network templates and VIP. Some of our users
> want to run the StackLight services (eg Elasticsearch/Kibana and
> InfluxDB/Grafana servers) on a dedicated network (lets call it
> 'monitoring'). People use network templates [0] to provision this
> additional network but how can Nailgun allocate the VIP address(es) from
> this 'monitoring' network knowing that today the plugins specify the
> 'management' network [1][2]?
> Thanks for your help,
> Simon
> [0]
> https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates
> [1]
> https://github.com/openstack/fuel-plugin-influxdb-grafana/blob/8976c4869ea5ec464e5d19b387c1a7309bed33f4/network_roles.yaml#L4
> [2]
> https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/25b79aff9a79d106fc74b33535952d28b0093afb/network_roles.yaml#L2
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] snapshot tool

2016-04-20 Thread Aleksandr Dobdin
Dmitry,

You can create a non-root user account without root privileges, but you need
to add it to the appropriate groups and configure sudo permissions (even if
you add this user to the root group, it will fail with the iptables command,
for example) to get config files and launch the requested commands. I suppose
it is possible to note this possibility in the documentation and provide a
customer with detailed instructions on how to set up this user account. There
are also some logs that will be missing from the snapshot with the message
"permission denied" (only the root user has access to some files with 0600
mask).
This user account could be specified in config.yaml (ssh -> opts option)
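Before a snapshot run, such a tool (or the operator) could probe the sudo configuration up front and report which required commands would be refused, instead of discovering that mid-run. A hypothetical helper, not part of the actual tool:

```python
import subprocess

def sudo_refused(commands):
    """Return the commands that passwordless sudo would refuse.

    'sudo -n -l <cmd>' checks the sudoers policy without executing the
    command; it exits non-zero if the command is not allowed, or if a
    password would be required ('-n' forbids prompting).
    """
    refused = []
    for cmd in commands:
        result = subprocess.run(['sudo', '-n', '-l'] + cmd.split(),
                                capture_output=True)
        if result.returncode != 0:
            refused.append(cmd)
    return refused

# e.g.: sudo_refused(['iptables -nvL', 'cat /var/log/audit/audit.log'])
```

This makes the "it will fail with the iptables command" case visible as an explicit pre-flight warning rather than a gap in the snapshot.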

Sincerely yours,
Aleksandr Dobdin
Senior Operations Engineer
Mirantis
​Inc.​
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] snapshot tool

2016-04-19 Thread Dmitry Sutyagin
IMHO, removal of sensitive information is the job of the services themselves,
in whether they do (or do not) log sensitive data, such as tokens. The
current set of commands only collects specific config folders, files and
logs, but if an admin decides to store keys in one of these folders, the
tool will collect them too. It's up to the end user to only provide data
collected via this tool to a trusted party. The same goes for our current
snapshot mechanism.
As for sanitizing hostnames, IPs, etc. - that would make the diagnostic
snapshot pretty useless, because they are important for navigating logs and
configs, for RCA compilation, etc.

I cannot say much about running under a non-root account; I guess that would
be pretty easy to implement, so let's wait for Alexander's reply. I am not
sure it is useful though, because a non-root user will not have the
necessary access unless there is a passwordless non-interactive sudo config.

On Tue, Apr 19, 2016 at 1:39 PM, Dmitry Nikishov 
wrote:

> Hello,
>
> I've got a couple of questions:
> - What about this tool using non-root accounts to connect to OpenStack
> nodes? Currently, it seems to assume that it always is going to use "root"
> for SSH.
> - Shouldn't it sanitize all sensitive information (user names, host names,
> passwords, tokens, keys etc)?
>
> Thanks.
>
> On Tue, Apr 19, 2016 at 4:52 AM, Aleksandr Dobdin 
> wrote:
>
>> Hello team,
>>
>> I want to discuss the tool  that we
>> have created for MOS as a replacement/alternative of shotgun.
>>
>>
>>
>> - The tool is based on
>>   https://etherpad.openstack.org/p/openstack-diagnostics
>> - Should work fine on the following environments that were tested: 4.x,
>>   5.x, 6.x, 7.0, 8.0
>> - Operates non-destructively.
>> - Can be launched on any host within the admin network, provided the Fuel
>>   node IP is specified and access to Fuel and other nodes is possible via
>>   ssh from the local system.
>> - Parallel launch, only on the nodes that are 'online'. Some filters for
>>   nodes are also available.
>> - Commands (from the ./cmds directory) are separated according to roles
>>   (detected automatically) by symlinks. Thus, the command list may depend
>>   on release, roles and OS. In addition, some commands run everywhere,
>>   and some are executed on only one node of a given role (the first node
>>   of that type encountered).
>> - Modular: it is possible to create a special package that contains only
>>   certain required commands.
>> - Collects log files from the nodes using filters.
>> - Archives are created - general.tar.bz2 and logs-*.
>> - Checks are implemented to prevent filling the filesystem during log
>>   collection; an appropriate error is shown.
>> - Can be imported in other Python scripts (e.g.
>>   https://github.com/f3flight/timmy-customtest) and used as a transport
>>   and structure to access node parameters known to Fuel, run commands on
>>   nodes, collect outputs, etc. with ease.
>> ​
>>
>> Sincerely yours,
>> Aleksandr Dobdin
>> Senior Operations Engineer
>> Mirantis
>> ​Inc.​
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Dmitry Nikishov,
> Deployment Engineer,
> Mirantis, Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours sincerely,
Dmitry Sutyagin
OpenStack Escalations Engineer
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] snapshot tool

2016-04-19 Thread Dmitry Nikishov
Hello,

I've got a couple of questions:
- What about this tool using non-root accounts to connect to OpenStack
nodes? Currently, it seems to assume that it is always going to use "root"
for SSH.
- Shouldn't it sanitize all sensitive information (user names, host names,
passwords, tokens, keys etc)?

Thanks.

On Tue, Apr 19, 2016 at 4:52 AM, Aleksandr Dobdin 
wrote:

> Hello team,
>
> I want to discuss the tool  that we
> have created for MOS as a replacement/alternative of shotgun.
>
>
>
> - The tool is based on
>   https://etherpad.openstack.org/p/openstack-diagnostics
> - Should work fine on the following environments that were tested: 4.x,
>   5.x, 6.x, 7.0, 8.0
> - Operates non-destructively.
> - Can be launched on any host within the admin network, provided the Fuel
>   node IP is specified and access to Fuel and other nodes is possible via
>   ssh from the local system.
> - Parallel launch, only on the nodes that are 'online'. Some filters for
>   nodes are also available.
> - Commands (from the ./cmds directory) are separated according to roles
>   (detected automatically) by symlinks. Thus, the command list may depend
>   on release, roles and OS. In addition, some commands run everywhere,
>   and some are executed on only one node of a given role (the first node
>   of that type encountered).
> - Modular: it is possible to create a special package that contains only
>   certain required commands.
> - Collects log files from the nodes using filters.
> - Archives are created - general.tar.bz2 and logs-*.
> - Checks are implemented to prevent filling the filesystem during log
>   collection; an appropriate error is shown.
> - Can be imported in other Python scripts (e.g.
>   https://github.com/f3flight/timmy-customtest) and used as a transport
>   and structure to access node parameters known to Fuel, run commands on
>   nodes, collect outputs, etc. with ease.
>
> ​
>
> Sincerely yours,
> Aleksandr Dobdin
> Senior Operations Engineer
> Mirantis
> ​Inc.​
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Dmitry Nikishov,
Deployment Engineer,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [ironic] [inspector] Rewriting nailgun agent on Python proposal

2016-04-19 Thread Evgeniy L
Hi,

At the upcoming summit we will have a track on Fuel & Ironic
(Ironic-inspector) integration, so you are welcome to participate.

Thursday 9:00
https://etherpad.openstack.org/p/fuel-newton-summit-planning

Thanks,

On Fri, Mar 18, 2016 at 9:49 PM, Jim Rollenhagen 
wrote:

> On Fri, Mar 18, 2016 at 07:26:03PM +0300, Vladimir Kozhukalov wrote:
> > >Well, there's a number of reasons. Ironic is not meant only for an
> > >"undercloud" (deploying OpenStack on ironic instances). There are both
> > >public and private cloud deployments of ironic in production today, that
> > >make bare metal instances available to users of the cloud. Those users
> > >may not want an agent running inside their instance, and more
> > >importantly, the operators of those clouds may not want to expose the
> > >ironic or inspector APIs to their users.
> >
> > >I'm not sure ironic should say "no, that isn't allowed" but at a minimum
> > >it would need to be opt-in behavior.
> >
> > For me it's absolutely clear why the cloud case does not assume running
> > any kind of agent inside a user instance. It is clear why the cloud case
> > does not assume exposing the API to the user instance. But cloud is not
> > the only case that exists.
> > Fuel is a deployment tool. Fuel case is not cloud.  It is 'cattle'
> (cattle
> > vs. pets), but
> > it is not cloud in a sense that instances are 'user instances'.
> > Fuel 'user instances' are not even 'user' instances.
> > Fuel manages the content of instances throughout their whole life cycle.
>
> To be clear, I'm not saying we shouldn't do it. I'm saying we should
> talk about it. Ironic can't assume there's an agent, but we sure could
> make it optional. I do realize Fuel is a valid use case, and I want to
> be able to support that use case.
>
> // jim
>
> > As you might remember we talked about this about two years ago (when we
> > tried to contribute lvm and md features to IPA). I don't know why this
> case
> > (deployment) was rejected again and again while it's still viable and
> > widely used.
> > And I don't know why it could not be implemented to be 'opt-in'.
> > Since that we have invented our own fuel-agent (that supports lvm, md)
> and
> > a driver for Ironic conductor that allows to use Ironic with fuel-agent.
> >
> > >Is the fuel team having a summit session of some sort about integrating
> > >with ironic better? I'd be happy to come to that if it can be scheduled
> > >at a time that ironic doesn't have a session. Otherwise maybe we can
> > >catch up on Friday or something.
> >
> > >I'm glad to see Fuel wanting to integrate better with Ironic.
> >
> > We are still quite interested in closer integration with Ironic (we need
> > power
> > management features that Ironic provides). We'll be happy to schedule yet
> > another discussion on closer integration with Ironic.
> >
> > BTW, about a year ago (in Grenoble) we agreed that it is not even
> > necessary to merge such custom things into Ironic tree. Happily, Ironic
> is
> > smart enough to consume drivers using stevedore. About ironic-inspector
> > the case is the same. Whether we are going to run it inside 'user
> instance'
> > or inside ramdisk it does not affect ironic-inspector itself. If Ironic
> > team is
> > open for merging "non-cloud" features (of course 'opt-in') we'll be happy
> > to contribute.
> >
> > Vladimir Kozhukalov
> >
> > On Fri, Mar 18, 2016 at 6:03 PM, Jim Rollenhagen  >
> > wrote:
> >
> > > On Fri, Mar 18, 2016 at 05:26:13PM +0300, Evgeniy L wrote:
> > > > On Thu, Mar 17, 2016 at 3:16 PM, Dmitry Tantsur  >
> > > wrote:
> > > >
> > > > > On 03/16/2016 01:39 PM, Evgeniy L wrote:
> > > > >
> > > > >> Hi Dmitry,
> > > > >>
> > > > >> I can try to provide you description on what current Nailgun
> agent is,
> > > > >> and what are potential requirements we may need from HW discovery
> > > system.
> > > > >>
> > > > >> Nailgun agent is a one-file Ruby script [0] which is periodically
> run
> > > > >> under cron. It collects information about HW using ohai [1], plus
> it
> > > > >> does custom parsing, filtration, retrieval of HW information.
> After
> > > the
> > > > >> information is collected, it is sent to Nailgun, that is how node
> gets
> > > > >> discovered in Fuel.
> > > > >>
> > > > >
> > > > > Quick clarification: does it run on user instances? or does it run
> on
> > > > > hardware while it's still not deployed to? The former is something
> that
> > > > > Ironic tries not to do. There is an interest in the latter.
> > > >
> > > >
> > > > Both, on user instances (with deployed OpenStack) and on instances
> which
> > > > are not deployed and in bootstrap.
> > > > What are the reasons Ironic tries not to do that (running HW
> discovery on
> > > > deployed node)?
> > >
> > > Well, there's a number of reasons. Ironic is not meant only for an
> > > "undercloud" (deploying OpenStack on ironic instances). There are both
> > > public and private cloud 

Re: [openstack-dev] [Fuel] [Shotgun] Decoupling Shotgun from Fuel

2016-04-18 Thread Evgeniy L
>> Btw, one of the ideas was to use Fuel task capabilities to gather
>> diagnostic snapshot.

I think this kind of tool should rely on existing infrastructure as little
as possible, because when something goes wrong, you should still be able to
get diagnostic information easily, even with RabbitMQ, Astute and
MCollective broken.
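One concrete way to keep such a tool independent of the control plane is to cache the node inventory locally, so a broken Nailgun/RabbitMQ only costs freshness, not the whole run. A sketch under those assumptions (the cache path and the callback are made-up names, not an existing API):

```python
import json

def load_inventory(cache_path, fetch_from_api=None):
    """Prefer a live inventory, but fall back to the last cached copy.

    fetch_from_api is any callable returning the node list; when it fails
    (e.g. Nailgun or RabbitMQ is down), the previously cached inventory
    is used so diagnostics can still be collected.
    """
    if fetch_from_api is not None:
        try:
            nodes = fetch_from_api()
            with open(cache_path, 'w') as f:
                json.dump(nodes, f)  # refresh the cache on every success
            return nodes
        except Exception:
            pass  # control plane unreachable -- fall back to the cache
    with open(cache_path) as f:
        return json.load(f)
```

With this shape, the only hard dependencies left are SSH access to the nodes and a previously successful run.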

Thanks,


On Mon, Apr 18, 2016 at 2:26 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Colleagues,
>
> Whether we are going to continue using Shotgun or
> substitute it with something else, we still need to
> decouple it from Fuel because Shotgun is a generic
> tool. Please review these [1], [2].
>
> [1] https://review.openstack.org/#/c/298603
> [2] https://review.openstack.org/#/c/298615
>
>
> Btw, one of the ideas was to use Fuel task capabilities
> to gather diagnostic snapshot.
>
> Vladimir Kozhukalov
>
> On Thu, Mar 31, 2016 at 1:32 PM, Evgeniy L  wrote:
>
>> Hi,
>>
>> Problems which I see with current Shotgun are:
>> 1. Lack of parallelism, so it's not going to fetch data fast enough from
>> medium/big clouds.
>> 2. There should be an easy way to run it manually (it's possible, but
>> there is no ready-to-use config); this would be really helpful in case
>> Nailgun/Astute/MCollective are down.
>>
>> As far as I know the 1st is partly covered by Ansible, but the problem is
>> it executes a single task in parallel, so there is a probability that a
>> lagging node will slow down fetching from the entire environment.
>> Also we would have to build a tool around Ansible to generate playbooks.
>>
>> Thanks,
>>
>> On Wed, Mar 30, 2016 at 5:18 PM, Tomasz 'Zen' Napierala <
>> tnapier...@mirantis.com> wrote:
>>
>>> Hi,
>>>
>>> Do we have any requirements for the new tool? Do we know what we don’t
>>> like about current implementation, what should be avoided, etc.? Before
>>> that we can only speculate.
>>> From my ops experience, shotgun-like tools will not work conveniently on
>>> medium to big environments. Even on a medium env the amount of logs is
>>> just too huge for such a simple tool to handle. In such environments a
>>> better pattern is to use a dedicated log collection / analysis tool,
>>> just like StackLight.
>>> On the other hand I’m not sure if Ansible is the right tool for that. It
>>> has some features (like the ‘fetch’ module) but in general it’s a
>>> configuration management tool, and I’m not sure how it would act under
>>> such heavy load.
>>>
>>> Regards,
>>>
>>> > On 30 Mar 2016, at 15:20, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>> >
>>> > ​Igor,
>>> >
>>> > I can not agree more. Wherever possible we should
>>> > use existent mature solutions. Ansible is really
>>> > convenient and well known solution, let's try to
>>> > use it.
>>> >
>>> > Yet another thing should be taken into account.
>>> > One of Shotgun features is diagnostic report
>>> > that could then be attached to bugs to identify
>>> > the content of env. This report could also be
>>> > used to reproduce env and then fight a bug.
>>> > I'd like we to have this kind of report.
>>> > Is it possible to implement such a feature
>>> > using Ansible? If yes, then let's switch to Ansible
>>> > as soon as possible.
>>> >
>>> > ​
>>> >
>>> > Vladimir Kozhukalov
>>> >
>>> > On Wed, Mar 30, 2016 at 3:31 PM, Igor Kalnitsky <
>>> ikalnit...@mirantis.com> wrote:
>>> > Neil Jerram wrote:
>>> > > But isn't Ansible also over-complicated for just running commands
>>> over SSH?
>>> >
>>> > It may be not so "simple" to ignore that. Ansible has a lot of modules
>>> > which might be very helpful. For instance, Shotgun makes a database
>>> > dump and there're Ansible modules with the same functionality [1].
>>> >
>>> > Don't think I advocate Ansible as a replacement. My point is, let's
>>> > think about reusing ready solutions. :)
>>> >
>>> > - igor
>>> >
>>> >
>>> > [1]: http://docs.ansible.com/ansible/list_of_database_modules.html
>>> >
>>> > On Wed, Mar 30, 2016 at 1:14 PM, Neil Jerram <
>>> neil.jer...@metaswitch.com> wrote:
>>> > >
>>> > > FWIW, as a naive bystander:
>>> > >
>>> > > On 30/03/16 11:06, Igor Kalnitsky wrote:
>>> > >> Hey Fuelers,
>>> > >>
>>> > >> I know that you probably wouldn't like to hear that, but in my
>>> opinion
>>> > >> Fuel has to stop using Shotgun. It's nothing more but a command
>>> runner
>>> > >> over SSH. Besides, it has well known issues such as retrieving
>>> remote
>>> > >> directories with broken symlinks inside.
>>> > >
>>> > > It makes sense to me that a command runner over SSH might not need
>>> to be
>>> > > a whole Fuel-specific component.
>>> > >
>>> > >> So I propose to find a modern alternative and reuse it. If we stop
>>> > >> supporting Shotgun, we can spend extra time to focus on more
>>> important
>>> > >> things.
>>> > >>
>>> > >> As an example, we can consider to use Ansible. It should not be
>>> tricky
>>> > >> to generate Ansible playbook instead of generating Shotgun one.
>>> > >> Ansible is a  well known tool for devops and cloud operators, and
>>> 

Re: [openstack-dev] [Fuel] [Shotgun] Decoupling Shotgun from Fuel

2016-04-18 Thread Vladimir Kozhukalov
Colleagues,

Whether we are going to continue using Shotgun or
substitute it with something else, we still need to
decouple it from Fuel because Shotgun is a generic
tool. Please review these [1], [2].

[1] https://review.openstack.org/#/c/298603
[2] https://review.openstack.org/#/c/298615


Btw, one of the ideas was to use Fuel task capabilities
to gather diagnostic snapshot.

Vladimir Kozhukalov

On Thu, Mar 31, 2016 at 1:32 PM, Evgeniy L  wrote:

> Hi,
>
> Problems which I see with current Shotgun are:
> 1. Lack of parallelism, so it's not going to fetch data fast enough from
> medium/big clouds.
> 2. There should be an easy way to run it manually (it's possible, but
> there is no ready-to-use config); this would be really helpful in case
> Nailgun/Astute/MCollective are down.
>
> As far as I know the 1st is partly covered by Ansible, but the problem is
> it executes a single task in parallel, so there is a probability that a
> lagging node will slow down fetching from the entire environment.
> Also we would have to build a tool around Ansible to generate playbooks.
>
> Thanks,
>
> On Wed, Mar 30, 2016 at 5:18 PM, Tomasz 'Zen' Napierala <
> tnapier...@mirantis.com> wrote:
>
>> Hi,
>>
>> Do we have any requirements for the new tool? Do we know what we don’t
>> like about current implementation, what should be avoided, etc.? Before
>> that we can only speculate.
>> From my ops experience, shotgun-like tools will not work conveniently on
>> medium to big environments. Even on a medium env the amount of logs is
>> just too huge for such a simple tool to handle. In such environments a
>> better pattern is to use a dedicated log collection / analysis tool,
>> just like StackLight.
>> On the other hand I’m not sure if Ansible is the right tool for that. It
>> has some features (like the ‘fetch’ module) but in general it’s a
>> configuration management tool, and I’m not sure how it would act under
>> such heavy load.
>>
>> Regards,
>>
>> > On 30 Mar 2016, at 15:20, Vladimir Kozhukalov 
>> wrote:
>> >
>> > Igor,
>> >
>> > I cannot agree more. Wherever possible we should
>> > use existing mature solutions. Ansible is a really
>> > convenient and well-known solution, let's try to
>> > use it.
>> >
>> > Yet another thing should be taken into account.
>> > One of Shotgun's features is a diagnostic report
>> > that can be attached to bugs to identify
>> > the content of an env. This report can also be
>> > used to reproduce the env and then fight a bug.
>> > I'd like us to have this kind of report.
>> > Is it possible to implement such a feature
>> > using Ansible? If yes, then let's switch to Ansible
>> > as soon as possible.
>> >
>> >
>> > Vladimir Kozhukalov
>> >
>> > On Wed, Mar 30, 2016 at 3:31 PM, Igor Kalnitsky <
>> ikalnit...@mirantis.com> wrote:
>> > Neil Jerram wrote:
>> > > But isn't Ansible also over-complicated for just running commands
>> over SSH?
>> >
>> > It may not be that simple to dismiss it. Ansible has a lot of modules
>> > which might be very helpful. For instance, Shotgun makes a database
>> > dump, and there are Ansible modules with the same functionality [1].
>> >
>> > Don't think I advocate Ansible as a replacement. My point is, let's
>> > think about reusing ready solutions. :)
>> >
>> > - igor
>> >
>> >
>> > [1]: http://docs.ansible.com/ansible/list_of_database_modules.html
>> >
>> > On Wed, Mar 30, 2016 at 1:14 PM, Neil Jerram <
>> neil.jer...@metaswitch.com> wrote:
>> > >
>> > > FWIW, as a naive bystander:
>> > >
>> > > On 30/03/16 11:06, Igor Kalnitsky wrote:
>> > >> Hey Fuelers,
>> > >>
>> > >> I know that you probably wouldn't like to hear this, but in my
>> opinion
>> > >> Fuel has to stop using Shotgun. It's nothing more than a command
>> runner
>> > >> over SSH. Besides, it has well-known issues such as retrieving remote
>> > >> directories with broken symlinks inside.
>> > >
>> > > It makes sense to me that a command runner over SSH might not need to
>> be
>> > > a whole Fuel-specific component.
>> > >
>> > >> So I propose to find a modern alternative and reuse it. If we stop
>> > >> supporting Shotgun, we can spend extra time to focus on more
>> important
>> > >> things.
>> > >>
>> > >> As an example, we can consider using Ansible. It should not be
>> tricky
>> > >> to generate an Ansible playbook instead of generating a Shotgun one.
>> > >> Ansible is a well-known tool for devops and cloud operators, and
>> > >> we will only benefit if we give them the possibility to extend
>> > >> diagnostic recipes in the usual (for them) way. What do you think?
>> > >
>> > > But isn't Ansible also over-complicated for just running commands
>> over SSH?
>> > >
>> > > Neil
>> > >
>> > >
>> > >
>> __
>> > > OpenStack Development Mailing List (not for usage questions)
>> > > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > > 

Re: [openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-04-15 Thread Oleg Gelbukh
Jeremy, thank you, that's excellent news. The Infra team is doing awesome
work to improve the processes in all possible ways.

Andreas, I will take a closer look, but it seems to be exactly what I had
in mind. Thanks for sharing!

--
Best regards,
Oleg Gelbukh

On Fri, Apr 15, 2016 at 10:29 AM, Andreas Jaeger  wrote:

> On 04/14/2016 06:30 PM, Jeremy Stanley wrote:
>
>> On 2016-04-14 12:57:38 +0300 (+0300), Oleg Gelbukh wrote:
>>
>>> The thread I'm referring to in the prev message is:
>>>
>>> http://lists.openstack.org/pipermail/openstack-infra/2014-January/000624.html
>>>
>>
>> At this point it's probably no longer a concern. We don't (and
>> haven't for some time) really support pip versions as old as the
>> ones which predate prerelease identification in their version
>> parsing so could probably just start running the same sdist
>> publication to PyPI for prereleases as we do for full release
>> version tags.
>>
>
> this one merged recently:
> https://review.openstack.org/300124
>
> I think it does what you describe - or is anything else to do?
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
>


Re: [openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-04-15 Thread Andreas Jaeger

On 04/14/2016 06:30 PM, Jeremy Stanley wrote:

On 2016-04-14 12:57:38 +0300 (+0300), Oleg Gelbukh wrote:

The thread I'm referring to in the prev message is:
http://lists.openstack.org/pipermail/openstack-infra/2014-January/000624.html


At this point it's probably no longer a concern. We don't (and
haven't for some time) really support pip versions as old as the
ones which predate prerelease identification in their version
parsing so could probably just start running the same sdist
publication to PyPI for prereleases as we do for full release
version tags.


this one merged recently:
https://review.openstack.org/300124

I think it does what you describe - or is anything else to do?

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-04-14 Thread Jeremy Stanley
On 2016-04-14 12:57:38 +0300 (+0300), Oleg Gelbukh wrote:
> The thread I'm referring to in the prev message is:
> http://lists.openstack.org/pipermail/openstack-infra/2014-January/000624.html

At this point it's probably no longer a concern. We don't (and
haven't for some time) really support pip versions as old as the
ones which predate prerelease identification in their version
parsing so could probably just start running the same sdist
publication to PyPI for prereleases as we do for full release
version tags.
-- 
Jeremy Stanley



Re: [openstack-dev] [Fuel] Newton Design Summit sessions planning

2016-04-14 Thread Alexey Shtokolov
Hi, +1 from my side.

---
WBR, Alexey Shtokolov

2016-04-14 16:47 GMT+03:00 Evgeniy L :

> Hi, no problem from my side.
>
> On Thu, Apr 14, 2016 at 10:53 AM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> I'd like to request workrooms sessions swap.
>>
>> We have a session about Fuel/Ironic integration and I'd like
>> this session not to overlap with Ironic sessions, so Ironic
>> team could attend Fuel sessions. At the same time, we have
>> a session about orchestration engine and it would be great to
>> invite there people from Mistral and Heat.
>>
>> My suggestion is as follows:
>>
>> Wed:
>> 9:50 Astute -> Mistral/Heat/???
>> Thu:
>> 9.00 Fuel/Ironic/Ironic-inspector
>>
>> If there are any objections, please let me know asap.
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Fri, Apr 1, 2016 at 9:47 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Dear colleagues,
>>>
>>> Looks like we have the final version of the sessions layout [1]
>>> for the Austin design summit. We have 3 fishbowls,
>>> 11 workrooms, and a full-day meetup.
>>>
>>> Here you can find some useful information about design
>>> summit [2]. All session leads must read this page,
>>> be prepared for their sessions (agenda, slides if needed,
>>> etherpads for collaborative work, etc.) and follow
>>> the recommendations given in "At the Design Summit" section.
>>>
>>> Here is Fuel session planning etherpad [3]. Almost all suggested
>>> topics have been put there. Please put links to slide decks
>>> and etherpads next to respective sessions. Here is the
>>> page [4] where other teams publish their planning pads.
>>>
>>> If session leads want for some reason to swap their slots, it must
>>> be requested in this ML thread. If for some reason a session lead
>>> cannot lead his/her session, it must be announced in this ML thread.
>>>
>>> Fuel sessions are:
>>> ===
>>> Fishbowls:
>>> ===
>>> Wed:
>>> 15:30-16:10
>>> 16:30-17:10
>>> 17:20-18:00
>>>
>>> ===
>>> Workrooms:
>>> ===
>>> Wed:
>>> 9:00-9:40
>>> 9:50-10:30
>>> 11:00-11:40
>>> 11:50-12:30
>>> 13:50-14:30
>>> 14:40-15:20
>>> Thu:
>>> 9:00-9:40
>>> 9:50-10:30
>>> 11:00-11:40
>>> 11:50-12:30
>>> 13:30-14:10
>>>
>>> ===
>>> Meetup:
>>> ===
>>> Fri:
>>> 9:00-12:30
>>> 14:00-17:30
>>>
>>> [1]
>>> http://lists.openstack.org/pipermail/openstack-dev/attachments/20160331/d59d38b7/attachment.pdf
>>> [2] https://wiki.openstack.org/wiki/Design_Summit
>>> [3] https://etherpad.openstack.org/p/fuel-newton-summit-planning
>>> [4] https://wiki.openstack.org/wiki/Design_Summit/Planning
>>>
>>> Thanks.
>>>
>>> Vladimir Kozhukalov
>>>
>>
>>


Re: [openstack-dev] [Fuel] Newton Design Summit sessions planning

2016-04-14 Thread Evgeniy L
Hi, no problem from my side.

On Thu, Apr 14, 2016 at 10:53 AM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> I'd like to request workrooms sessions swap.
>
> We have a session about Fuel/Ironic integration and I'd like
> this session not to overlap with Ironic sessions, so Ironic
> team could attend Fuel sessions. At the same time, we have
> a session about orchestration engine and it would be great to
> invite there people from Mistral and Heat.
>
> My suggestion is as follows:
>
> Wed:
> 9:50 Astute -> Mistral/Heat/???
> Thu:
> 9.00 Fuel/Ironic/Ironic-inspector
>
> If there are any objections, please let me know asap.
>
>
>
> Vladimir Kozhukalov
>
> On Fri, Apr 1, 2016 at 9:47 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> Looks like we have the final version of the sessions layout [1]
>> for the Austin design summit. We have 3 fishbowls,
>> 11 workrooms, and a full-day meetup.
>>
>> Here you can find some useful information about design
>> summit [2]. All session leads must read this page,
>> be prepared for their sessions (agenda, slides if needed,
>> etherpads for collaborative work, etc.) and follow
>> the recommendations given in "At the Design Summit" section.
>>
>> Here is Fuel session planning etherpad [3]. Almost all suggested
>> topics have been put there. Please put links to slide decks
>> and etherpads next to respective sessions. Here is the
>> page [4] where other teams publish their planning pads.
>>
>> If session leads want for some reason to swap their slots, it must
>> be requested in this ML thread. If for some reason a session lead
>> cannot lead his/her session, it must be announced in this ML thread.
>>
>> Fuel sessions are:
>> ===
>> Fishbowls:
>> ===
>> Wed:
>> 15:30-16:10
>> 16:30-17:10
>> 17:20-18:00
>>
>> ===
>> Workrooms:
>> ===
>> Wed:
>> 9:00-9:40
>> 9:50-10:30
>> 11:00-11:40
>> 11:50-12:30
>> 13:50-14:30
>> 14:40-15:20
>> Thu:
>> 9:00-9:40
>> 9:50-10:30
>> 11:00-11:40
>> 11:50-12:30
>> 13:30-14:10
>>
>> ===
>> Meetup:
>> ===
>> Fri:
>> 9:00-12:30
>> 14:00-17:30
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/attachments/20160331/d59d38b7/attachment.pdf
>> [2] https://wiki.openstack.org/wiki/Design_Summit
>> [3] https://etherpad.openstack.org/p/fuel-newton-summit-planning
>> [4] https://wiki.openstack.org/wiki/Design_Summit/Planning
>>
>> Thanks.
>>
>> Vladimir Kozhukalov
>>
>
>


Re: [openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-04-14 Thread Oleg Gelbukh
The thread I'm referring to in the prev message is:
http://lists.openstack.org/pipermail/openstack-infra/2014-January/000624.html

--
Best regards,
Oleg Gelbukh
Mirantis Inc.

On Thu, Apr 14, 2016 at 12:56 PM, Oleg Gelbukh 
wrote:

> Hi,
>
> I'm sorry for replying to this old thread, but I would really like to see
> this moving.
>
> There's a 'pre-release' pipeline in Zuul which serves exactly that
> purpose: handle pre-release tags (beta versions). However, per this thread,
> it is not recommended due to possible issues with pip being unable to
> differentiate pre-release versions from main releases.
>
> Another option here is to publish minor versions of the package, i.e.
> start with 9.0.0 early, and then increase to 9.0.1 etc once the development
> progresses.
>
> --
> Best regards,
> Oleg Gelbukh
> Mirantis Inc.
>
> On Thu, Jan 21, 2016 at 11:52 AM, Yuriy Taraday 
> wrote:
>
>> By the way, it would be very helpful for testing external tools if we had
>> 7.0.1 release on PyPI as well. It seems python-fuelclient somehow ended up
>> with a "stable/7.0.1" branch instead of "7.0.1" tag.
>>
>> On Wed, Jan 20, 2016 at 2:49 PM Roman Prykhodchenko 
>> wrote:
>>
>>> Releasing a beta version sounds like a good plan but does OpenStack
>>> Infra actually support this?
>>>
>>> > 20 січ. 2016 р. о 12:05 Oleg Gelbukh 
>>> написав(ла):
>>> >
>>> > Hi,
>>> >
>>> > Currently we're experiencing issues with Python dependencies of our
>>> package (fuel-octane), specifically between fuelclient's dependencies and
>>> keystoneclient dependencies.
>>> >
>>> > New keystoneclient is required to work with the new version of Nailgun
>>> due to introduction of SSL in the latter. On the other hand, fuelclient is
>>> released along with the main release of Fuel, and the latest version
>>> available from PyPI is 7.0.0, and it has very old dependencies (based on
>>> packages available in centos6/python26).
>>> >
>>> > The solution I'd like to propose is to release a beta version of
>>> fuelclient (8.0.0b1) with updated requirements ASAP. With the --pre flag to
>>> pip/tox, this will allow running unit tests against the proper set of
>>> requirements. On the other hand, it will not break users consuming the
>>> latest stable (7.0.0) version with old requirements from PyPI.
>>> >
>>> > Please, share your thoughts and considerations. If no objections, I
>>> will create a corresponding bug/blueprint against fuelclient to be fixed in
>>> the current release cycle.
>>> >
>>> > --
>>> > Best regards,
>>> > Oleg Gelbukh
>>> > Mirantis
>>> >
>>> >


Re: [openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-04-14 Thread Oleg Gelbukh
Hi,

I'm sorry for replying to this old thread, but I would really like to see
this moving.

There's a 'pre-release' pipeline in Zuul which serves exactly that purpose:
handle pre-release tags (beta versions). However, per this thread, it is not
recommended due to possible issues with pip being unable to differentiate
pre-release versions from main releases.

Another option here is to publish minor versions of the package, i.e. start
with 9.0.0 early, and then increase to 9.0.1 etc once the development
progresses.

--
Best regards,
Oleg Gelbukh
Mirantis Inc.
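The whole discussion hinges on one behavior: under PEP 440-style ordering, a pre-release such as 9.0.0b1 sorts *before* the final 9.0.0, and pip skips pre-releases unless --pre (or an explicit pin) is given. A toy illustration of that ordering (this is not pip's real parser, just a minimal sketch):

```python
# Toy sketch of PEP 440-style pre-release ordering (NOT pip's parser).
# Finals get a ('z', 0) marker so they sort after any a/b/rc pre-release
# of the same release number ('a' < 'b' < 'rc' < 'z' alphabetically).
def parse(version):
    """Split e.g. '9.0.0b1' into ((9, 0, 0), ('b', 1))."""
    release, pre = version, ("z", 0)
    for marker in ("a", "b", "rc"):
        head, sep, tail = version.partition(marker)
        if sep and tail.isdigit():
            release, pre = head, (marker, int(tail))
            break
    nums = tuple(int(p) for p in release.rstrip(".").split("."))
    return nums, pre

def newest(versions, pre=False):
    """Pick the newest version; skip pre-releases unless pre=True,
    mimicking pip's default vs. `pip install --pre` behavior."""
    pool = [v for v in versions if pre or parse(v)[1][0] == "z"]
    return max(pool, key=parse)
```

So with only 7.0.0 and 9.0.0b1 on PyPI, `newest(...)` picks 7.0.0 by default and 9.0.0b1 only with `pre=True`, which is exactly why publishing 8.0.0b1 would not break users on the latest stable release.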

On Thu, Jan 21, 2016 at 11:52 AM, Yuriy Taraday  wrote:

> By the way, it would be very helpful for testing external tools if we had
> 7.0.1 release on PyPI as well. It seems python-fuelclient somehow ended up
> with a "stable/7.0.1" branch instead of "7.0.1" tag.
>
> On Wed, Jan 20, 2016 at 2:49 PM Roman Prykhodchenko  wrote:
>
>> Releasing a beta version sounds like a good plan but does OpenStack Infra
>> actually support this?
>>
>> > 20 січ. 2016 р. о 12:05 Oleg Gelbukh 
>> написав(ла):
>> >
>> > Hi,
>> >
>> > Currently we're experiencing issues with Python dependencies of our
>> package (fuel-octane), specifically between fuelclient's dependencies and
>> keystoneclient dependencies.
>> >
>> > New keystoneclient is required to work with the new version of Nailgun
>> due to introduction of SSL in the latter. On the other hand, fuelclient is
>> released along with the main release of Fuel, and the latest version
>> available from PyPI is 7.0.0, and it has very old dependencies (based on
>> packages available in centos6/python26).
>> >
>> > The solution I'd like to propose is to release a beta version of
>> fuelclient (8.0.0b1) with updated requirements ASAP. With the --pre flag to
>> pip/tox, this will allow running unit tests against the proper set of
>> requirements. On the other hand, it will not break users consuming the
>> latest stable (7.0.0) version with old requirements from PyPI.
>> >
>> > Please, share your thoughts and considerations. If no objections, I
>> will create a corresponding bug/blueprint against fuelclient to be fixed in
>> the current release cycle.
>> >
>> > --
>> > Best regards,
>> > Oleg Gelbukh
>> > Mirantis
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][library] Update of astute.yaml fixtures and noop tests

2016-04-07 Thread Aleksandr Didenko
Alex, we can do this (and I hope we'll do) after we fix
https://bugs.launchpad.net/fuel/+bug/1567367

Regards,
Alex

On Thu, Apr 7, 2016 at 5:04 PM, Alex Schultz  wrote:

>
> On Thu, Apr 7, 2016 at 7:41 AM, Aleksandr Didenko 
> wrote:
>
>> Hi,
>>
>> thanks to Dima, we now have ROLE annotations in noop tests [0]. I've
>> updated all the noop rspec tests that we currently have and added
>> appropriate role annotation [1]. So after this patch is merged, we no
>> longer need to put any new fixtures into dozens of rspec files in order to
>> enable it.
>> Please make sure to update ROLE annotations if you introduce new roles
>> (deployment groups) or change task-to-roles assignments in *tasks.yaml
>> files. Core reviewers, please don't forget to check this as well ;)
>>
>>
> Is there a reason we can't leverage the existing definitions in the
> tasks.yaml files for this? It seems like requiring people to provide this
> information might lead to cases where it gets out of sync. Shouldn't we
> use the task files as the source of truth for the roles?
>
> -Alex
>
>
>> Regards,
>> Alex
>>
>> [0] https://review.openstack.org/300649
>> [1] https://review.openstack.org/302313
>>
>> On Tue, Apr 5, 2016 at 12:11 PM, Aleksandr Didenko > > wrote:
>>
>>> Hi folks,
>>>
>>> we've merged all the changes related to fixtures update [0] and bugfix
>>> to unblock noop tests [1]. So if you see -1 from fuel_noop_tests [2] in
>>> tests not related to your patch, then please rebase.
>>>
>>> Regards,
>>> Alex
>>>
>>> [0] https://review.openstack.org/#/q/topic:update-fixtures-to-9.0
>>> [1] https://review.openstack.org/301107
>>> [2] https://ci.fuel-infra.org/job/fuellib_noop_tests/
>>>
>>> On Fri, Apr 1, 2016 at 7:16 PM, Vladimir Kuklin 
>>> wrote:
>>>
 Hi Alex

 +1 to your proposal - this is long-awaited change.

 On Fri, Apr 1, 2016 at 6:01 PM, Aleksandr Didenko <
 adide...@mirantis.com> wrote:

> One more thing about spec to fixture mapping [0]. What if instead of:
>
> # RUN: (hiera1) (facts1)
>
> we'll use
>
> # RUN: (roles_array1) (facts1)
>
> ?
>
> We don't need to duplicate complicated task graph calculations to
> understand which task to execute, because we don't care about tasks
> ordering and dependencies in noop tests. All we need is to map rspec task
> tests to astute.yaml fixtures. And it could be done via roles.
>
> Regards,
> Alex
>
> [0]
> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/usage.rst#spec-file-annotations
>
>
> On Fri, Apr 1, 2016 at 4:05 PM, Aleksandr Didenko <
> adide...@mirantis.com> wrote:
>
>> Hi.
>>
>>   As you may know, we're still using some very old astute.yaml
>> fixtures (v6.1) in our 'master' (v9.0) noop rspec tests [0]. Besides 
>> that,
>> we have problems with fixture-to-rspec mapping [1]. So we've started to
>> work on those problems [2].
>>
>>   So please be aware of upcoming changes in noop rspec fixtures and
>> tests. If you see, that some important fixtures are missing (thus not
>> covered by tests) please let me know in this email thread or via
>> IRC/email/slack.
>>
>>   Also, we should stop updating astute.yaml fixtures manually and
>> start using some kind of automation approach instead [3][4]. I propose to
>> use [5] script until we find a better solution. So if you want to add 
>> some
>> new astute.yaml fixture for noop tests, please propose a patch to this
>> script instead of uploading yaml file.
>>
>> Currently the following is missing in the new set of fixtures for
>> fuel-9.0:
>> - generate_vms ('vms_conf' array in astute.yaml - I'm not sure how to
>> properly enable it via nailgun, any help is much appreciated)
>> - selective ssl fixtures - since configuration data is not serialized
>> from nailgun, I think that we should move this into 'hiera/override' 
>> along
>> with implementation of new hiera overrides tests workflow [6]
>> - vmware related fixtures
>>
>> Please feel free to share your ideas/comments on this topic.
>>
>> Thanks,
>> Alex
>>
>> [0] https://bugs.launchpad.net/fuel/+bug/1535339
>> [1]
>> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/usage.rst#spec-file-annotations
>> [2] https://review.openstack.org/#/q/topic:update-fixtures-to-9.0
>> [3]
>> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/fixtures.rst
>> [4]
>> https://blueprints.launchpad.net/fuel/+spec/deployment-dryrun-fixtures-generator
>> [5]
>> https://github.com/openstack/fuel-noop-fixtures/blob/master/utils/generate_yamls.sh
>> [6] https://bugs.launchpad.net/fuel/+bug/1564919
>>
>
>
>
> 

Re: [openstack-dev] [Fuel][library] Update of astute.yaml fixtures and noop tests

2016-04-07 Thread Alex Schultz
On Thu, Apr 7, 2016 at 7:41 AM, Aleksandr Didenko 
wrote:

> Hi,
>
> thanks to Dima, we now have ROLE annotations in noop tests [0]. I've
> updated all the noop rspec tests that we currently have and added
> appropriate role annotation [1]. So after this patch is merged, we no
> longer need to put any new fixtures into dozens of rspec files in order to
> enable it.
> Please make sure to update ROLE annotations if you introduce new roles
> (deployment groups) or change task-to-roles assignments in *tasks.yaml
> files. Core reviewers, please don't forget to check this as well ;)
>
>
Is there a reason we can't leverage the existing definitions in the
tasks.yaml files for this? It seems like requiring people to provide this
information might lead to cases where it gets out of sync. Shouldn't we
use the task files as the source of truth for the roles?

-Alex


> Regards,
> Alex
>
> [0] https://review.openstack.org/300649
> [1] https://review.openstack.org/302313
>
> On Tue, Apr 5, 2016 at 12:11 PM, Aleksandr Didenko 
> wrote:
>
>> Hi folks,
>>
>> we've merged all the changes related to fixtures update [0] and bugfix to
>> unblock noop tests [1]. So if you see -1 from fuel_noop_tests [2] in tests
>> not related to your patch, then please rebase.
>>
>> Regards,
>> Alex
>>
>> [0] https://review.openstack.org/#/q/topic:update-fixtures-to-9.0
>> [1] https://review.openstack.org/301107
>> [2] https://ci.fuel-infra.org/job/fuellib_noop_tests/
>>
>> On Fri, Apr 1, 2016 at 7:16 PM, Vladimir Kuklin 
>> wrote:
>>
>>> Hi Alex
>>>
>>> +1 to your proposal - this is long-awaited change.
>>>
>>> On Fri, Apr 1, 2016 at 6:01 PM, Aleksandr Didenko >> > wrote:
>>>
 One more thing about spec to fixture mapping [0]. What if instead of:

 # RUN: (hiera1) (facts1)

 we'll use

 # RUN: (roles_array1) (facts1)

 ?

 We don't need to duplicate complicated task graph calculations to
 understand which task to execute, because we don't care about tasks
 ordering and dependencies in noop tests. All we need is to map rspec task
 tests to astute.yaml fixtures. And it could be done via roles.

 Regards,
 Alex

 [0]
 https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/usage.rst#spec-file-annotations


 On Fri, Apr 1, 2016 at 4:05 PM, Aleksandr Didenko <
 adide...@mirantis.com> wrote:

> Hi.
>
>   As you may know, we're still using some very old astute.yaml
> fixtures (v6.1) in our 'master' (v9.0) noop rspec tests [0]. Besides that,
> we have problems with fixture-to-rspec mapping [1]. So we've started to
> work on those problems [2].
>
>   So please be aware of upcoming changes in noop rspec fixtures and
> tests. If you see, that some important fixtures are missing (thus not
> covered by tests) please let me know in this email thread or via
> IRC/email/slack.
>
>   Also, we should stop updating astute.yaml fixtures manually and
> start using some kind of automation approach instead [3][4]. I propose to
> use [5] script until we find a better solution. So if you want to add some
> new astute.yaml fixture for noop tests, please propose a patch to this
> script instead of uploading yaml file.
>
> Currently the following is missing in the new set of fixtures for
> fuel-9.0:
> - generate_vms ('vms_conf' array in astute.yaml - I'm not sure how to
> properly enable it via nailgun, any help is much appreciated)
> - selective ssl fixtures - since configuration data is not serialized
> from nailgun, I think that we should move this into 'hiera/override' along
> with implementation of new hiera overrides tests workflow [6]
> - vmware related fixtures
>
> Please feel free to share your ideas/comments on this topic.
>
> Thanks,
> Alex
>
> [0] https://bugs.launchpad.net/fuel/+bug/1535339
> [1]
> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/usage.rst#spec-file-annotations
> [2] https://review.openstack.org/#/q/topic:update-fixtures-to-9.0
> [3]
> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/fixtures.rst
> [4]
> https://blueprints.launchpad.net/fuel/+spec/deployment-dryrun-fixtures-generator
> [5]
> https://github.com/openstack/fuel-noop-fixtures/blob/master/utils/generate_yamls.sh
> [6] https://bugs.launchpad.net/fuel/+bug/1564919
>





>>>
>>>
>>> --
>>> Yours Faithfully,
>>> Vladimir Kuklin,
>>> Fuel Library Tech Lead,
>>> 

Re: [openstack-dev] [Fuel][library] Update of astute.yaml fixtures and noop tests

2016-04-07 Thread Aleksandr Didenko
Hi,

thanks to Dima, we now have ROLE annotations in noop tests [0]. I've
updated all the noop rspec tests that we currently have and added
appropriate role annotation [1]. So after this patch is merged, we no
longer need to put any new fixtures into dozens of rspec files in order to
enable it.
Please make sure to update ROLE annotations if you introduce new roles
(deployment groups) or change task-to-roles assignments in *tasks.yaml
files. Core reviewers, please don't forget to check this as well ;)

Regards,
Alex

[0] https://review.openstack.org/300649
[1] https://review.openstack.org/302313
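The sync concern raised elsewhere in this thread (manual ROLE annotations drifting away from tasks.yaml) could be caught mechanically. A minimal sketch of such a checker follows; the exact annotation syntax (`# ROLE: controller, compute`) is an assumption here, and the real format is documented in fuel-noop-fixtures:

```python
# Hypothetical drift checker: collect ROLE annotations from an rspec
# file's text and flag any role that no longer appears among the roles
# declared in the *tasks.yaml files. Annotation syntax is assumed.
import re

ROLE_RE = re.compile(r"#\s*ROLE:\s*(.+)")

def spec_roles(spec_text):
    """Extract the set of roles annotated in one spec file's text."""
    roles = set()
    for match in ROLE_RE.finditer(spec_text):
        roles.update(r.strip() for r in match.group(1).split(","))
    return roles

def drifted(spec_text, task_roles):
    """Return annotated roles that are absent from the tasks.yaml roles."""
    return spec_roles(spec_text) - set(task_roles)
```

Run over every `*_spec.rb` in CI, a non-empty `drifted()` result would fail the job and keep annotations honest without making tasks.yaml itself the runtime source.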

On Tue, Apr 5, 2016 at 12:11 PM, Aleksandr Didenko 
wrote:

> Hi folks,
>
> we've merged all the changes related to fixtures update [0] and bugfix to
> unblock noop tests [1]. So if you see -1 from fuel_noop_tests [2] in tests
> not related to your patch, then please rebase.
>
> Regards,
> Alex
>
> [0] https://review.openstack.org/#/q/topic:update-fixtures-to-9.0
> [1] https://review.openstack.org/301107
> [2] https://ci.fuel-infra.org/job/fuellib_noop_tests/
>
> On Fri, Apr 1, 2016 at 7:16 PM, Vladimir Kuklin 
> wrote:
>
>> Hi Alex
>>
>> +1 to your proposal - this is long-awaited change.
>>
>> On Fri, Apr 1, 2016 at 6:01 PM, Aleksandr Didenko 
>> wrote:
>>
>>> One more thing about spec to fixture mapping [0]. What if instead of:
>>>
>>> # RUN: (hiera1) (facts1)
>>>
>>> we'll use
>>>
>>> # RUN: (roles_array1) (facts1)
>>>
>>> ?
>>>
>>> We don't need to duplicate complicated task graph calculations to
>>> understand which task to execute, because we don't care about tasks
>>> ordering and dependencies in noop tests. All we need is to map rspec task
>>> tests to astute.yaml fixtures. And it could be done via roles.
>>>
>>> Regards,
>>> Alex
>>>
>>> [0]
>>> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/usage.rst#spec-file-annotations
>>>
>>>
>>> On Fri, Apr 1, 2016 at 4:05 PM, Aleksandr Didenko >> > wrote:
>>>
 Hi.

   As you may know, we're still using some very old astute.yaml fixtures
 (v6.1) in our 'master' (v9.0) noop rspec tests [0]. Besides that, we have
 problems with fixture-to-rspec mapping [1]. So we've started to work on
 those problems [2].

   So please be aware of upcoming changes in noop rspec fixtures and
 tests. If you see that some important fixtures are missing (thus not
 covered by tests) please let me know in this email thread or via
 IRC/email/slack.

   Also, we should stop updating astute.yaml fixtures manually and start
 using some kind of automation approach instead [3][4]. I propose to use the
 [5] script until we find a better solution. So if you want to add a new
 astute.yaml fixture for noop tests, please propose a patch to this script
 instead of uploading a yaml file.

 Currently the following is missing in the new set of fixtures for
 fuel-9.0:
 - generate_vms ('vms_conf' array in astute.yaml - I'm not sure how to
 properly enable it via nailgun, any help is much appreciated)
 - selective ssl fixtures - since configuration data is not serialized
 from nailgun, I think that we should move this into 'hiera/override' along
 with implementation of new hiera overrides tests workflow [6]
 - vmware related fixtures

 Please feel free to share your ideas/comments on this topic.

 Thanks,
 Alex

 [0] https://bugs.launchpad.net/fuel/+bug/1535339
 [1]
 https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/usage.rst#spec-file-annotations
 [2] https://review.openstack.org/#/q/topic:update-fixtures-to-9.0
 [3]
 https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/fixtures.rst
 [4]
 https://blueprints.launchpad.net/fuel/+spec/deployment-dryrun-fixtures-generator
 [5]
 https://github.com/openstack/fuel-noop-fixtures/blob/master/utils/generate_yamls.sh
 [6] https://bugs.launchpad.net/fuel/+bug/1564919

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Yours Faithfully,
>> Vladimir Kuklin,
>> Fuel Library Tech Lead,
>> Mirantis, Inc.
>> +7 (495) 640-49-04
>> +7 (926) 702-39-68
>> Skype kuklinvv
>> 35bk3, Vorontsovskaya Str.
>> Moscow, Russia,
>> www.mirantis.com 
>> www.mirantis.ru
>> vkuk...@mirantis.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>

Re: [openstack-dev] [fuel] Component Leads Elections

2016-04-07 Thread Vladimir Kozhukalov
Dear colleagues,

Looks like we have consensus (lazy, but still consensus)
on this topic: we don't need the CL role exposed to the Fuel
project. I have prepared a change [1] for our team structure
policy.

My suggestion is to make Fuel an aggregator
of independent components. Component teams could have their
formal or informal leads (i.e. a component PTL) if needed,
but that is irrelevant to Fuel as a whole.

As Fuel features usually require coordinated changes
in multiple components, we need all Fuel specs to be reviewed
by engineers from different backgrounds.

The "Avengers" approach (described above) has been rejected
by the OpenStack Infra team, but we can use the more traditional
core group approach. I.e., the Fuel-specs core team is responsible
for reviewing and merging specs, and in the proposed patch [1]
it is explicitly written down that each spec must be
approved by at least the Puppet, UI, and REST SMEs.
It is also the responsibility of the Fuel-specs core group
to involve other SMEs if needed.

[1] https://review.openstack.org/#/c/301194/



Vladimir Kozhukalov

On Thu, Mar 31, 2016 at 6:47 PM, Serg Melikyan 
wrote:

> Hi fuelers,
>
> only a few hours are left until the self-nomination period closes, but so
> far we have neither consensus on how to proceed further nor candidates.
>
> I've extended the self-nomination period for another week (until April 7,
> 23:59 UTC) and expect to have a decision about how we are going to proceed
> further if no one nominates himself, or candidates for each of the three
> projects.
>
> I propose to start with defining the steps that we are going to take if no
> one nominates himself by April 7, and move forward with a separate
> discussion regarding governance.
>
> P.S. I strongly believe that declaring the Component Leads role obsolete
> requires agreement among all members of the Fuel team, which may take quite
> a lot of time. I think we should propose a change request to the existing
> governance spec [0], and have a decision by the end of the Newton cycle.
>
> References:
> [0]
> https://specs.openstack.org/openstack/fuel-specs/policy/team-structure.html
>
> On Thu, Mar 31, 2016 at 3:22 AM, Evgeniy L  wrote:
>
>> Hi,
>>
>> I'm not sure if this is the right place to continue this discussion, but
>> if there are doubts that such a role is needed, we should not wait another
>> half a year to drop it.
>>
>> Also, I'm not sure a single engineer (or two engineers) can handle the
>> majority of upcoming patches + specs + meetings around features. Sergii and
>> Igor put a lot of effort into making it work, but does it really scale?
>>
>> I think it would be better to offload more responsibilities to the core
>> groups, and if the core team (of a specific project) wants to see a formal
>> or informal leader, let them decide.
>>
>> I would be really interested to see feedback from current component leads.
>>
>> Thanks,
>>
>>
>> On Wed, Mar 30, 2016 at 2:20 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Dmitry,
>>>
>>> "No need to rush" does not mean we should postpone
>>> the team structure changes until Ocata. IMO, the CL role
>>> (when it is exposed to Fuel) contradicts our
>>> modularization activities. Fuel should be an aggregator
>>> of components. What if we decide to use Ironic or
>>> Neutron as Fuel components? Should we also choose an
>>> Ironic CL? No! Ironic is an independent
>>> project with its own PTL.
>>>
>>> I agree with Mike that we could remove this CL
>>> role in a month if we have consensus. But does it
>>> make any sense to choose CLs now and then
>>> immediately remove the role? Probably, it is better
>>> to make a decision right now. I'd really like to
>>> see opinions of our current CLs and other people
>>> here in this ML thread.
>>>
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Tue, Mar 29, 2016 at 11:21 PM, Dmitry Borodaenko <
>>> dborodae...@mirantis.com> wrote:
>>>
 On Tue, Mar 29, 2016 at 03:19:27PM +0300, Vladimir Kozhukalov wrote:
 > > I think this call is too late to change a structure for now. I suggest
 > > that we always respect the policy we've accepted, and follow it.
 > >
 > > If Component Leads role is under a question, then I'd continue the
 > > discussion, hear opinion of current component leads, and give this a time
 > > to be discussed. I'd have nothing against removing this role in a month
 > > from now if we reach a consensus on this topic - no need to wait for the
 > > cycle end.
 >
 > Sure, there is no need to rush. I'd also like to see current CL opinions.

 Considering that, while there's an ongoing discussion on how to change
 Fuel team structure for Ocata, there's also an apparent consensus that
 we still want to have component leads for Newton, I'd like to call once
 again for volunteers to self-nominate for component leads of
 fuel-library, fuel-web, and fuel-ui. We've got 2 days left until
 the nomination period is over, and no 

Re: [openstack-dev] [Fuel] Merge Freeze for Mitaka branching

2016-04-06 Thread Aleksandra Fedorova
Hi, everyone,

we were delayed by an npm issue [0] in the gate, but we have now
successfully merged all version bumps [1] and have a stable master,
thanks to Sergey Kulanov, who had it all fully tested in advance.

Merge Freeze is lifted.

Please note:

* To merge a change into the Mitaka release, you need to merge it to the
master branch first and then cherry-pick it to the stable/mitaka branch.

* Fuel CI deployment tests are being adjusted to new mirrors schema
[2] so currently all master deployment tests are queued. We need about
1-2 hours to finish this work. We'll send a separate e-mail regarding
Fuel CI readiness once we are done.
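For contributors new to the backport flow, the master-first rule above can be sketched locally in a throwaway repository. The repository, file, and commit message below are invented, and real changes of course go through Gerrit review rather than local cherry-picks:

```shell
# Sketch of the "master first, then stable/mitaka" backport flow.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo

echo base > file.txt
git add file.txt
git commit -qm "Initial commit"
git branch stable/mitaka              # release branch forked from master

echo fix >> file.txt
git commit -qam "Fix deployment bug"  # the change lands on master first...
sha=$(git rev-parse HEAD)

git checkout -q stable/mitaka
git cherry-pick -x "$sha"             # ...and is then cherry-picked to stable
git log --oneline -1
```

The `-x` flag records the original commit id in the backport's message, which makes it easy to trace a stable fix back to its master counterpart.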


[0] https://storyboard.openstack.org/#!/story/2000541
[1] https://review.openstack.org/#/q/topic:9.0-scf
[2] https://review.openstack.org/#/c/301018/

-- 
Aleksandra Fedorova
Fuel CI Team Lead
bookwar at #fuel-infra

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] branching Mitaka April 6, 2016

2016-04-05 Thread Igor Belikov
Hi Sergey,

According to Matthew Mosesohn, the plan is to delay branching of the
detach-* plugins.
The only plugin scheduled for branching tomorrow seems to be
fuel-plugin-murano.
--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com

> On 04 Apr 2016, at 15:11, Sergii Golovatiuk  wrote:
> 
> What about plugins? 
> 
> For instance: fuel-plugin-detach-keystone
> 
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
> 
> On Mon, Apr 4, 2016 at 1:44 PM, Igor Belikov wrote:
> Hi,
> 
> Fuel SCF will be taking place on April 6th, this means that we’re going to 
> create stable/mitaka branches for a number of Fuel repos.
> 
> PLEASE, take a look at the following list and respond if you think your 
> project should be included or excluded from the list:
> * fuel-agent
> * fuel-astute
> * fuel-library
> * fuel-main
> * fuel-menu
> * fuel-mirror
> * fuel-nailgun-agent
> * fuel-noop-fixtures
> * fuel-octane
> * fuel-ostf
> * fuel-plugin-murano
> * fuel-qa
> * fuel-ui
> * fuel-upgrade
> * fuel-virtualbox
> * fuel-web
> * network-checker
> * python-fuelclient
> * shotgun
> 
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][library] Update of astute.yaml fixtures and noop tests

2016-04-05 Thread Aleksandr Didenko
Hi folks,

we've merged all the changes related to the fixtures update [0] and a bugfix
to unblock the noop tests [1]. So if you see a -1 from fuel_noop_tests [2] on
tests not related to your patch, please rebase.

Regards,
Alex

[0] https://review.openstack.org/#/q/topic:update-fixtures-to-9.0
[1] https://review.openstack.org/301107
[2] https://ci.fuel-infra.org/job/fuellib_noop_tests/

On Fri, Apr 1, 2016 at 7:16 PM, Vladimir Kuklin 
wrote:

> Hi Alex
>
> +1 to your proposal - this is a long-awaited change.
>
> On Fri, Apr 1, 2016 at 6:01 PM, Aleksandr Didenko 
> wrote:
>
>> One more thing about spec to fixture mapping [0]. What if instead of:
>>
>> # RUN: (hiera1) (facts1)
>>
>> we'll use
>>
>> # RUN: (roles_array1) (facts1)
>>
>> ?
>>
>> We don't need to duplicate complicated task graph calculations to
>> understand which task to execute, because we don't care about tasks
>> ordering and dependencies in noop tests. All we need is to map rspec task
>> tests to astute.yaml fixtures. And it could be done via roles.
>>
>> Regards,
>> Alex
>>
>> [0]
>> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/usage.rst#spec-file-annotations
>>
>>
>> On Fri, Apr 1, 2016 at 4:05 PM, Aleksandr Didenko 
>> wrote:
>>
>>> Hi.
>>>
>>>   As you may know, we're still using some very old astute.yaml fixtures
>>> (v6.1) in our 'master' (v9.0) noop rspec tests [0]. Besides that, we have
>>> problems with fixture-to-rspec mapping [1]. So we've started to work on
>>> those problems [2].
>>>
>>>   So please be aware of upcoming changes in noop rspec fixtures and
>>> tests. If you see that some important fixtures are missing (thus not
>>> covered by tests) please let me know in this email thread or via
>>> IRC/email/slack.
>>>
>>>   Also, we should stop updating astute.yaml fixtures manually and start
>>> using some kind of automation approach instead [3][4]. I propose to use the
>>> [5] script until we find a better solution. So if you want to add a new
>>> astute.yaml fixture for noop tests, please propose a patch to this script
>>> instead of uploading a yaml file.
>>>
>>> Currently the following is missing in the new set of fixtures for
>>> fuel-9.0:
>>> - generate_vms ('vms_conf' array in astute.yaml - I'm not sure how to
>>> properly enable it via nailgun, any help is much appreciated)
>>> - selective ssl fixtures - since configuration data is not serialized
>>> from nailgun, I think that we should move this into 'hiera/override' along
>>> with implementation of new hiera overrides tests workflow [6]
>>> - vmware related fixtures
>>>
>>> Please feel free to share your ideas/comments on this topic.
>>>
>>> Thanks,
>>> Alex
>>>
>>> [0] https://bugs.launchpad.net/fuel/+bug/1535339
>>> [1]
>>> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/usage.rst#spec-file-annotations
>>> [2] https://review.openstack.org/#/q/topic:update-fixtures-to-9.0
>>> [3]
>>> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/fixtures.rst
>>> [4]
>>> https://blueprints.launchpad.net/fuel/+spec/deployment-dryrun-fixtures-generator
>>> [5]
>>> https://github.com/openstack/fuel-noop-fixtures/blob/master/utils/generate_yamls.sh
>>> [6] https://bugs.launchpad.net/fuel/+bug/1564919
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Bareon][Ironic] The future of integration module

2016-04-05 Thread Oleksandr Berezovskyi
Hello,

At the beginning of the work, we took the fuel-agent driver from the Ironic
team and customized it.
Here are the main features created during development for Cray (all
of them are now part of bareon-ironic):

   1. deploy-config could be stored in multiple places (image meta,
   instance meta and node meta) and top-level attributes are being merged
   according to priorities in ironic.conf;
   2. default deploy-config support;
   3. support for both two-image (kernel+ramdisk) and three-image
   (kernel+ramdisk+squashfs) agent boot;
   4. rsync deployment support (insecure and secure);
   5. kernel parameters for tenant image and pxe boot could be appended
   with info from ironic.conf;
   6. pulling of agent log in case of unsuccessful deployment;
   7. on-fail script support (list of actions, which are being executed in
   case of unsuccessful deployment);
   8. actions support (set of actions being executed):
  1. actions at deployment time (after deployment);
  2. actions during node lifetime via vendor-passthru interface;
   9. compatibility check between agent version and driver version;
   10. deployment timeout mechanism;
   11. deployment termination mechanism (requires patches to nova and
   ironic);
   12. multi-boot feature (multiple OSes could be deployed to implement
   quick switch).

In case of any questions feel free to ask.

Best regards,
Oleksandr

On Mon, Mar 21, 2016 at 1:33 PM, Evgeniy L  wrote:

> Hi,
>
> I would like to bring up a discussion on Bareon [0] and Ironic integration
> and plans for the future.
>
> But first let me provide background information on the topic. Bareon is a
> partitioning/provisioning system [1] based on Fuel-agent [2]; it is
> currently in active development and will be used in Fuel 10.0 instead
> of Fuel-agent (as the partitioning/provisioning system).
>
> There is an integration module for Bareon and Ironic [3]; based on this
> module, the Cray team implemented another version [4], which is going to be
> merged into a separate repository.
>
> Ideally there should be a single module used for Bareon and
> Ironic integration.
> In order to do that, the differences have to be identified, and based on
> that, decisions can be made (on deprecation and/or backports).
>
> I would like to ask the Cray and Ironic (Ironic -
> Bareon/Fuel-agent maintainers) teams for help identifying the differences
> and planning future integration.
>
> Thanks,
>
> [0] https://wiki.openstack.org/wiki/Bareon
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082397.html
> [2] https://github.com/openstack/fuel-agent
> [3]
> https://github.com/openstack/bareon/tree/master/contrib/ironic/ironic-fa-deploy
> [4] https://review.openstack.org/#/c/286550/
>
>


-- 
Best regards,

Oleksandr Berezovskyi
Software Engineer, Mirantis, Inc.

38, Lenina ave. Kharkov, Ukraine
www.mirantis.com

cell: +380938745251
oberezovs...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Unrelated changes in patches

2016-04-04 Thread Dmitry Borodaenko
On Mon, Apr 04, 2016 at 04:05:28PM +0300, Matthew Mosesohn wrote:
> I've seen several cases where core reviewers bully contributors into
> refactoring a particular piece of logic because it contains common
> lines relating to some non-ideal code, even if the change doesn't
> relate to this logic.
> In general, I'm ok with formatting issues, but changing how a piece of
> existing code works is over the line. It should be handled as a
> separate bug.

It's a judgement call, not a clear either-or. Core reviewers are people
who know better than others when particular code needs refactoring, and
they are more motivated than others to get it refactored, but if they
end up being the only ones ever doing refactoring, they end up
overwhelmed and the code rots.

So I think it's ok for core reviewers to encourage (although definitely
not to bully) other contributors to include well-isolated refactorings
with functional changes. The deciding factor shouldn't be whether the
changes are at all related to the bug in question, because this can and
will be taken ad absurdum and will encourage irresponsible patches that
quickly fix bugs by multiplying technical debt.

The deciding factor should be how much risk and how much additional
burden on reviewers would the requested refactoring add to the commit.
If it makes it easier to understand the affected code and doesn't have
functional impact outside of scope of the review, it's worth including
in the commit. If it has non-trivial functional impact, it can't really
be called a refactoring anyway, and in that case it does need a separate
bug or blueprint.

> But yes, in general, if someone complains about something unrelated to
> your patch, he or she should just file a bug with what is required.
> 
> -Matthew
> 
> 
> On Mon, Apr 4, 2016 at 3:46 PM, Dmitry Guryanov  
> wrote:
> > Hello, colleagues!
> >
> > It's often not so easy to decide whether you should include some unrelated
> > changes in your patch, like fixing spaces, renaming variables or something
> > else that doesn't change logic. On the one hand you see something's wrong
> > with the code and you'd like to fix it; on the other hand reviewers can vote
> > -1 and you'll have to fix your patch and upload it again, which is very
> > annoying. You can also create a separate review for such changes, but it
> > will require additional effort from you and reviewers.
> >
> > If you are a reviewer and you've noted unrelated changes, you may hesitate
> > over whether to ask the author to remove them and upload a new version of
> > the patch. Also, such extra changes may sometimes confuse you.
> >
> > So I suggest creating separate patches for unrelated changes if they add
> > new chunks to the patch. And I'd like to ask authors to clearly state in
> > the subject of the commit message that the patch just fixes formatting. And
> > reviewers shouldn't check such patches too severely, so that they'll get
> > into the repo as soon as possible.
> >
> > What do you think?
> >
> >
> > --
> > Dmitry Guryanov
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Unrelated changes in patches

2016-04-04 Thread Igor Kalnitsky
Dmitry Guryanov wrote:
> It's often not so easy to decide whether you should include some unrelated
> changes in your patch, like fixing spaces, renaming variables or
> something else that doesn't change logic.

I'd say it depends. If, for example, a variable name is used inside one
function - it's ok to rename it within a patch. On the other hand, if
the variable is used across the code and renaming it requires changes in
a few places - I'd prefer not to do that within the patch. Any
unrelated change complicates review (if we're talking about thorough
review).

Things get worse when a patch author tries to implement two business
changes in one patch. In that case, it's really hard to distinguish
the two changes from each other in order to understand what's
going on.

So generally I'd prefer to see all unrelated changes in separate
patches. It's not necessary to create a bug for them; it's ok to
submit them with a detailed commit message explaining why this should be done.
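The suggestion above (unrelated changes as their own patches, each explained in its commit message) can be sketched as two independent commits. File names and messages are invented for illustration; with Gerrit, each commit becomes its own review:

```shell
# Sketch: commit an unrelated cleanup separately from the functional change.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo

printf 'def foo():pass\n' > style.py
printf 'LIMIT = 1\n'      > logic.py
git add .
git commit -qm "Initial commit"

printf 'def foo():\n    pass\n' > style.py  # cosmetic cleanup only
printf 'LIMIT = 2\n'            > logic.py  # the actual functional change

git add style.py
git commit -qm "Fix formatting in style.py (no functional change)"
git add logic.py
git commit -qm "Raise processing limit to 2"
git log --oneline
```

Each commit now carries one self-explanatory change, so a reviewer can approve the cosmetic patch quickly and spend real attention only on the functional one.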

Dmitry Guryanov wrote:
> On the one hand you see something's wrong with the code and you'd like
> to fix it; on the other hand reviewers can vote -1 and you'll have
> to fix your patch and upload it again, which is very annoying.

You can fix it in a first patch, and make the *business* changes in a second one.

Dmitry Guryanov wrote:
> You can also create a separate review for such changes, but it will
> require additional effort from you and reviewers.

As a reviewer I can say: I spend more time trying to figure out
what's going on in a patch that changes two (or even more) unrelated
things than I'd spend reviewing those changes in independent
patches.

On Mon, Apr 4, 2016 at 3:46 PM, Dmitry Guryanov  wrote:
> Hello, colleagues!
>
> It's often not so easy to decide whether you should include some unrelated
> changes in your patch, like fixing spaces, renaming variables or something
> else that doesn't change logic. On the one hand you see something's wrong
> with the code and you'd like to fix it; on the other hand reviewers can vote
> -1 and you'll have to fix your patch and upload it again, which is very
> annoying. You can also create a separate review for such changes, but it
> will require additional effort from you and reviewers.
>
> If you are a reviewer and you've noted unrelated changes, you may hesitate
> over whether to ask the author to remove them and upload a new version of
> the patch. Also, such extra changes may sometimes confuse you.
>
> So I suggest creating separate patches for unrelated changes if they add
> new chunks to the patch. And I'd like to ask authors to clearly state in
> the subject of the commit message that the patch just fixes formatting. And
> reviewers shouldn't check such patches too severely, so that they'll get
> into the repo as soon as possible.
>
> What do you think?
>
>
> --
> Dmitry Guryanov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Unrelated changes in patches

2016-04-04 Thread Jason Rist
On 04/04/2016 07:05 AM, Matthew Mosesohn wrote:
> Hi Dmitry,
>
> I've seen several cases where core reviewers bully contributors into
> refactoring a particular piece of logic because it contains common
> lines relating to some non-ideal code, even if the change doesn't
> relate to this logic.
> In general, I'm ok with formatting issues, but changing how a piece of
> existing code works is over the line. It should be handled as a
> separate bug.
>
> But yes, in general, if someone complains about something unrelated to
> your patch, he or she should just file a bug with what is required.
>
> -Matthew
>
>
> On Mon, Apr 4, 2016 at 3:46 PM, Dmitry Guryanov  
> wrote:
> > Hello, colleagues!
> >
> > It's often not so easy to decide whether you should include some unrelated
> > changes in your patch, like fixing spaces, renaming variables or something
> > else that doesn't change logic. On the one hand you see something's wrong
> > with the code and you'd like to fix it; on the other hand reviewers can vote
> > -1 and you'll have to fix your patch and upload it again, which is very
> > annoying. You can also create a separate review for such changes, but it
> > will require additional effort from you and reviewers.
> >
> > If you are a reviewer and you've noted unrelated changes, you may hesitate
> > over whether to ask the author to remove them and upload a new version of
> > the patch. Also, such extra changes may sometimes confuse you.
> >
> > So I suggest creating separate patches for unrelated changes if they add
> > new chunks to the patch. And I'd like to ask authors to clearly state in
> > the subject of the commit message that the patch just fixes formatting. And
> > reviewers shouldn't check such patches too severely, so that they'll get
> > into the repo as soon as possible.
> >
> > What do you think?
> >
> >
> > --
> > Dmitry Guryanov
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I agree with Matthew, but huge +1 to separate patch/bug for 
formatting/whitespace issues.

-- 
Jason E. Rist
Senior Software Engineer
OpenStack User Interfaces
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/twitter: knowncitizen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Unrelated changes in patches

2016-04-04 Thread Matthew Mosesohn
Hi Dmitry,

I've seen several cases where core reviewers bully contributors into
refactoring a particular piece of logic because it contains common
lines relating to some non-ideal code, even if the change doesn't
relate to this logic.
In general, I'm ok with formatting issues, but changing how a piece of
existing code works is over the line. It should be handled as a
separate bug.

But yes, in general, if someone complains about something unrelated to
your patch, he or she should just file a bug with what is required.

-Matthew


On Mon, Apr 4, 2016 at 3:46 PM, Dmitry Guryanov  wrote:
> Hello, colleagues!
>
> It's often not so easy to decide whether you should include some unrelated
> changes in your patch, like fixing spaces, renaming variables or something
> else that doesn't change logic. On the one hand you see something's wrong
> with the code and you'd like to fix it; on the other hand reviewers can vote
> -1 and you'll have to fix your patch and upload it again, which is very
> annoying. You can also create a separate review for such changes, but it
> will require additional effort from you and reviewers.
>
> If you are a reviewer and you've noted unrelated changes, you may hesitate
> over whether to ask the author to remove them and upload a new version of
> the patch. Also, such extra changes may sometimes confuse you.
>
> So I suggest creating separate patches for unrelated changes if they add
> new chunks to the patch. And I'd like to ask authors to clearly state in
> the subject of the commit message that the patch just fixes formatting. And
> reviewers shouldn't check such patches too severely, so that they'll get
> into the repo as soon as possible.
>
> What do you think?
>
>
> --
> Dmitry Guryanov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] branching Mitaka April 6, 2016

2016-04-04 Thread Sergii Golovatiuk
What about plugins?

For instance: fuel-plugin-detach-keystone

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Apr 4, 2016 at 1:44 PM, Igor Belikov  wrote:

> Hi,
>
> Fuel SCF will be taking place on April 6th, this means that we’re going to
> create stable/mitaka branches for a number of Fuel repos.
>
> PLEASE, take a look at the following list and respond if you think your
> project should be included or excluded from the list:
> * fuel-agent
> * fuel-astute
> * fuel-library
> * fuel-main
> * fuel-menu
> * fuel-mirror
> * fuel-nailgun-agent
> * fuel-noop-fixtures
> * fuel-octane
> * fuel-ostf
> * fuel-plugin-murano
> * fuel-qa
> * fuel-ui
> * fuel-upgrade
> * fuel-virtualbox
> * fuel-web
> * network-checker
> * python-fuelclient
> * shotgun
>
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] branching Mitaka April 6, 2016

2016-04-04 Thread Sergey Kulanov
Hi,

Igor thank you for update.

Folks, I also kindly ask you to review the list of patch-sets regarding SCF [1].

We've already built a custom ISO (with [1] applied), which passed the BVT tests.

[1]. https://review.openstack.org/#/q/topic:9.0-scf,n,z

2016-04-04 14:44 GMT+03:00 Igor Belikov :

> Hi,
>
> Fuel SCF will be taking place on April 6th, this means that we’re going to
> create stable/mitaka branches for a number of Fuel repos.
>
> PLEASE, take a look at the following list and respond if you think your
> project should be included or excluded from the list:
> * fuel-agent
> * fuel-astute
> * fuel-library
> * fuel-main
> * fuel-menu
> * fuel-mirror
> * fuel-nailgun-agent
> * fuel-noop-fixtures
> * fuel-octane
> * fuel-ostf
> * fuel-plugin-murano
> * fuel-qa
> * fuel-ui
> * fuel-upgrade
> * fuel-virtualbox
> * fuel-web
> * network-checker
> * python-fuelclient
> * shotgun
>
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com
>
>
>



-- 
Sergey
DevOps Engineer
IRC: SergK
Skype: Sergey_kul


Re: [openstack-dev] [Fuel][library] Update of astute.yaml fixtures and noop tests

2016-04-01 Thread Vladimir Kuklin
Hi Alex,

+1 to your proposal - this is a long-awaited change.

On Fri, Apr 1, 2016 at 6:01 PM, Aleksandr Didenko 
wrote:

> One more thing about spec to fixture mapping [0]. What if instead of:
>
> # RUN: (hiera1) (facts1)
>
> we'll use
>
> # RUN: (roles_array1) (facts1)
>
> ?
>
> We don't need to duplicate complicated task graph calculations to
> understand which task to execute, because we don't care about task
> ordering and dependencies in noop tests. All we need is to map rspec task
> tests to astute.yaml fixtures. And that can be done via roles.
>
> Regards,
> Alex
>
> [0]
> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/usage.rst#spec-file-annotations
>
>
> On Fri, Apr 1, 2016 at 4:05 PM, Aleksandr Didenko 
> wrote:
>
>> Hi.
>>
>>   As you may know, we're still using some very old astute.yaml fixtures
>> (v6.1) in our 'master' (v9.0) noop rspec tests [0]. Besides that, we have
>> problems with fixture-to-rspec mapping [1]. So we've started to work on
>> those problems [2].
>>
>>   So please be aware of upcoming changes in noop rspec fixtures and
>> tests. If you see that some important fixtures are missing (and thus not
>> covered by tests), please let me know in this email thread or via
>> IRC/email/slack.
>>
>>   Also, we should stop updating astute.yaml fixtures manually and start
>> using some kind of automation approach instead [3][4]. I propose to use the
>> [5] script until we find a better solution. So if you want to add some new
>> astute.yaml fixture for noop tests, please propose a patch to this script
>> instead of uploading a yaml file.
>>
>> Currently the following is missing in the new set of fixtures for
>> fuel-9.0:
>> - generate_vms ('vms_conf' array in astute.yaml - I'm not sure how to
>> properly enable it via nailgun, any help is much appreciated)
>> - selective ssl fixtures - since configuration data is not serialized
>> from nailgun, I think that we should move this into 'hiera/override' along
>> with implementation of new hiera overrides tests workflow [6]
>> - vmware related fixtures
>>
>> Please feel free to share your ideas/comments on this topic.
>>
>> Thanks,
>> Alex
>>
>> [0] https://bugs.launchpad.net/fuel/+bug/1535339
>> [1]
>> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/usage.rst#spec-file-annotations
>> [2] https://review.openstack.org/#/q/topic:update-fixtures-to-9.0
>> [3]
>> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/fixtures.rst
>> [4]
>> https://blueprints.launchpad.net/fuel/+spec/deployment-dryrun-fixtures-generator
>> [5]
>> https://github.com/openstack/fuel-noop-fixtures/blob/master/utils/generate_yamls.sh
>> [6] https://bugs.launchpad.net/fuel/+bug/1564919
>>
>
>
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com


Re: [openstack-dev] [Fuel][library] Update of astute.yaml fixtures and noop tests

2016-04-01 Thread Aleksandr Didenko
One more thing about spec to fixture mapping [0]. What if instead of:

# RUN: (hiera1) (facts1)

we'll use

# RUN: (roles_array1) (facts1)

?

We don't need to duplicate complicated task graph calculations to
understand which task to execute, because we don't care about task
ordering and dependencies in noop tests. All we need is to map rspec task
tests to astute.yaml fixtures. And that can be done via roles.

Regards,
Alex

[0]
https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/usage.rst#spec-file-annotations
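For illustration, the roles-based mapping proposed above could be prototyped with a small helper like this. This is only a sketch under assumed shapes: the `# RUN: (roles) (facts)` annotation format comes from [0], but the function names and the fixture representation (a mapping of fixture name to its parsed roles) are hypothetical, not the actual fuel-noop-fixtures implementation:

```python
import re

# Matches spec annotations like: # RUN: (controller, compute) (facts1)
RUN_RE = re.compile(r'#\s*RUN:\s*\((?P<roles>[^)]*)\)\s*\((?P<facts>[^)]*)\)')

def parse_run_annotations(spec_text):
    """Extract (roles, facts) pairs from '# RUN:' lines of an rspec file."""
    pairs = []
    for line in spec_text.splitlines():
        m = RUN_RE.search(line)
        if m:
            roles = [r.strip() for r in m.group('roles').split(',') if r.strip()]
            pairs.append((roles, m.group('facts').strip()))
    return pairs

def fixtures_for_roles(roles, fixtures):
    """Map a roles list to astute.yaml fixtures (fixtures: name -> parsed dict).

    A fixture matches when its node roles cover all the requested roles,
    so no task-graph calculation is needed to pick fixtures for a spec.
    """
    wanted = set(roles)
    return [name for name, data in fixtures.items()
            if wanted.issubset(set(data.get('roles', [])))]
```

The point of the sketch is that the lookup stays a pure set operation over roles, with no knowledge of task ordering or dependencies.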


On Fri, Apr 1, 2016 at 4:05 PM, Aleksandr Didenko 
wrote:

> Hi.
>
>   As you may know, we're still using some very old astute.yaml fixtures
> (v6.1) in our 'master' (v9.0) noop rspec tests [0]. Besides that, we have
> problems with fixture-to-rspec mapping [1]. So we've started to work on
> those problems [2].
>
>   So please be aware of upcoming changes in noop rspec fixtures and tests.
> If you see that some important fixtures are missing (and thus not covered by
> tests), please let me know in this email thread or via IRC/email/slack.
>
>   Also, we should stop updating astute.yaml fixtures manually and start
> using some kind of automation approach instead [3][4]. I propose to use the
> [5] script until we find a better solution. So if you want to add some new
> astute.yaml fixture for noop tests, please propose a patch to this script
> instead of uploading a yaml file.
>
> Currently the following is missing in the new set of fixtures for fuel-9.0:
> - generate_vms ('vms_conf' array in astute.yaml - I'm not sure how to
> properly enable it via nailgun, any help is much appreciated)
> - selective ssl fixtures - since configuration data is not serialized from
> nailgun, I think that we should move this into 'hiera/override' along with
> implementation of new hiera overrides tests workflow [6]
> - vmware related fixtures
>
> Please feel free to share your ideas/comments on this topic.
>
> Thanks,
> Alex
>
> [0] https://bugs.launchpad.net/fuel/+bug/1535339
> [1]
> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/usage.rst#spec-file-annotations
> [2] https://review.openstack.org/#/q/topic:update-fixtures-to-9.0
> [3]
> https://github.com/openstack/fuel-noop-fixtures/blob/master/doc/fixtures.rst
> [4]
> https://blueprints.launchpad.net/fuel/+spec/deployment-dryrun-fixtures-generator
> [5]
> https://github.com/openstack/fuel-noop-fixtures/blob/master/utils/generate_yamls.sh
> [6] https://bugs.launchpad.net/fuel/+bug/1564919
>


Re: [openstack-dev] [fuel][ConfigDB] Separating node and cluster serialized data

2016-04-01 Thread Oleg Gelbukh
Bogdan,

I mostly agree with you on this. The only data that might originate from a
node are discovery-related parameters, like CPU/disk/NIC architecture and
such.

However, at the moment the deployment data is partially generated at every
node (i.e. globals.yaml, override/plugins/* and some other files) and is
not exposed in any way externally. But since this data is required to
integrate with 3rd-party configuration management tools, we created an
interim solution to make it available 'as is'.

This situation should change in the next few months, and then nodes shall
be moved to a purely consumer role in the deployment data pipeline.

--
Best regards,
Oleg Gelbukh

On Fri, Apr 1, 2016 at 1:37 PM, Bogdan Dobrelya 
wrote:

> On 04/01/2016 10:41 AM, Oleg Gelbukh wrote:
> > Andrew,
> >
> > This is an excellent idea. It is apparently more efficient and
> > error-proof to make the split not by the resulted data but at the time
> > it is actually generated. We will play with this idea a little bit, and
> > will come up with design proposal shortly.
> >
> > Meanwhile, please be informed that we already started testing the
> > solution based on the node-level data exposed via ConfigDB API extension
> > for Nailgun [1] [2]. I will keep you updated on our progress in that
> area.
>
> I strongly believe that nodes must only consume data, not provide it.
> And the data must be collected from its sources, i.e. the Nailgun API
> extensions, as Andrew described.
>
> >
> > [1] Specification for Nailgun API for serialized facts
> > 
> > [2] Spec for upload of deployment configuration to ConfigDB API
> > 
> >
> > --
> > Best regards,
> > Oleg Gelbukh
> >
> > On Thu, Mar 31, 2016 at 11:19 PM, Andrew Woodward  > > wrote:
> >
> > One of the problems we've faced with trying to plug-in ConfigDB is
> > trying to separate the cluster attributes from the node attributes
> > in the serialized output (ie astute.yaml)
> >
> > I started talking with Alex S about how we could separate them after
> > astute.yaml is prepared; trying to work out which was which, we came
> > back uncertain that the results would be accurate.
> >
> > So I figured I'd go back to the source and see if there was a way to
> > know which keys belonged where. It turns out that we could solve the
> > problem in a simpler and more precise way than cutting them back
> > apart later.
> >
> > Looking over the deployment_serializers.py [1] the serialized data
> > follows a simple work flow
> >
> > iterate over every node in cluster
> >   if node is customized:
> > serialized_data = node.replaced_deployment_data
> >   else:
> > serialized_data = dict_merge(
> >   serialize_node(node),
> >   get_common_attrs(cluster))
> >
> > With this in mind, we can simply construct an extension to
> > expose these as APIs so that we can consume them as a task in the
> > deployment graph.
> >
> > Cluster:
> > We can simply expose
> > DeploymentMultinodeSerializer().get_common_attrs(cluster)
> >
> > This would then be plumbed to the cluster level in ConfigDB
> >
> > Node:
> > if a Node has customized data, then we can return that at the node
> > level, this continues to work at the same as native since it most
> > likely has Cluster merged into it.
> >
> > otherwise we can return the serialized node with whichever of the
> > first 'role' the node has
> >
> > We would expose DeploymentMultinodeSerializer().serialize_node(node,
> > objects.Node.all_roles(node)[0])
> >
> > for our usage, we don't need to worry about the normal node role
> > combination as the data only influences 'role' and 'fail_if_error'
> > attributes, both are not consumed in the library.
> >
> >
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_serializers.py#L93-L121
> > --
> >
> > --
> >
> > Andrew Woodward
> >
> > Mirantis
> >
> > Fuel Community Ambassador
> >
> > Ceph Community
> >
> >
> >
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc 

Re: [openstack-dev] [fuel][ConfigDB] Separating node and cluster serialized data

2016-04-01 Thread Bogdan Dobrelya
On 04/01/2016 10:41 AM, Oleg Gelbukh wrote:
> Andrew,
> 
> This is an excellent idea. It is apparently more efficient and
> error-proof to make the split not by the resulted data but at the time
> it is actually generated. We will play with this idea a little bit, and
> will come up with design proposal shortly.
> 
> Meanwhile, please be informed that we already started testing the
> solution based on the node-level data exposed via ConfigDB API extension
> for Nailgun [1] [2]. I will keep you updated on our progress in that area.

I strongly believe that nodes must only consume data, not provide it.
And the data must be collected from its sources, i.e. the Nailgun API
extensions, as Andrew described.

> 
> [1] Specification for Nailgun API for serialized facts
> 
> [2] Spec for upload of deployment configuration to ConfigDB API
> 
> 
> --
> Best regards,
> Oleg Gelbukh
> 
> On Thu, Mar 31, 2016 at 11:19 PM, Andrew Woodward  > wrote:
> 
> One of the problems we've faced with trying to plug-in ConfigDB is
> trying to separate the cluster attributes from the node attributes
> in the serialized output (ie astute.yaml)
> 
> I started talking with Alex S about how we could separate them after
> astute.yaml is prepared; trying to work out which was which, we came
> back uncertain that the results would be accurate.
> 
> So I figured I'd go back to the source and see if there was a way to
> know which keys belonged where. It turns out that we could solve the
> problem in a simpler and more precise way than cutting them back
> apart later.
> 
> Looking over the deployment_serializers.py [1] the serialized data
> follows a simple work flow
> 
> iterate over every node in cluster
>   if node is customized:
> serialized_data = node.replaced_deployment_data
>   else:
> serialized_data = dict_merge(
>   serialize_node(node),
>   get_common_attrs(cluster))
> 
> With this in mind, we can simply construct an extension to
> expose these as APIs so that we can consume them as a task in the
> deployment graph.
> 
> Cluster:
> We can simply expose
> DeploymentMultinodeSerializer().get_common_attrs(cluster)
> 
> This would then be plumbed to the cluster level in ConfigDB
> 
> Node:
> if a Node has customized data, then we can return that at the node
> level, this continues to work at the same as native since it most
> likely has Cluster merged into it.
> 
> otherwise we can return the serialized node with whichever of the
> first 'role' the node has
> 
> We would expose DeploymentMultinodeSerializer().serialize_node(node,
> objects.Node.all_roles(node)[0])
> 
> for our usage, we don't need to worry about the normal node role
> combination as the data only influences 'role' and 'fail_if_error'
> attributes, both are not consumed in the library.
> 
> 
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_serializers.py#L93-L121
> -- 
> 
> --
> 
> Andrew Woodward
> 
> Mirantis
> 
> Fuel Community Ambassador
> 
> Ceph Community
> 
> 
> 
> 
> 
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [fuel][ConfigDB] Separating node and cluster serialized data

2016-04-01 Thread Oleg Gelbukh
Andrew,

This is an excellent idea. It is apparently more efficient and error-proof
to make the split not by the resulted data but at the time it is actually
generated. We will play with this idea a little bit, and will come up with
design proposal shortly.

Meanwhile, please be informed that we already started testing the solution
based on the node-level data exposed via ConfigDB API extension for Nailgun
[1] [2]. I will keep you updated on our progress in that area.

[1] Specification for Nailgun API for serialized facts

[2] Spec for upload of deployment configuration to ConfigDB API


--
Best regards,
Oleg Gelbukh

On Thu, Mar 31, 2016 at 11:19 PM, Andrew Woodward  wrote:

> One of the problems we've faced with trying to plug in ConfigDB is trying
> to separate the cluster attributes from the node attributes in the
> serialized output (i.e. astute.yaml).
>
> I started talking with Alex S about how we could separate them after
> astute.yaml is prepared; trying to work out which was which, we came back
> uncertain that the results would be accurate.
>
> So I figured I'd go back to the source and see if there was a way to know
> which keys belonged where. It turns out that we could solve the problem in
> a simpler and more precise way than cutting them back apart later.
>
> Looking over deployment_serializers.py [1], the serialized data follows
> a simple workflow:
>
> iterate over every node in cluster
>   if node is customized:
> serialized_data = node.replaced_deployment_data
>   else:
> serialized_data = dict_merge(
>   serialize_node(node),
>   get_common_attrs(cluster))
>
> With this in mind, we can simply construct an extension to expose
> these as APIs so that we can consume them as a task in the deployment
> graph.
>
> Cluster:
> We can simply expose
> DeploymentMultinodeSerializer().get_common_attrs(cluster)
>
> This would then be plumbed to the cluster level in ConfigDB
>
> Node:
> if a Node has customized data, then we can return that at the node level;
> this continues to work the same as native, since it most likely has the
> Cluster data merged into it.
>
> otherwise we can return the serialized node with the first 'role' the
> node has
>
> We would expose DeploymentMultinodeSerializer().serialize_node(node,
> objects.Node.all_roles(node)[0])
>
> for our usage, we don't need to worry about the normal node role
> combination, as the data only influences the 'role' and 'fail_if_error'
> attributes; neither is consumed in the library.
>
>
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_serializers.py#L93-L121
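For what it's worth, the workflow quoted above can be expressed in a few lines of plain Python. This is only an illustrative sketch: the dict shapes, helper names, and merge precedence (the second argument wins on conflicts here) are assumptions, not the actual Nailgun implementation in deployment_serializers.py:

```python
def dict_merge(base, override):
    """Recursively merge two dicts; values from 'override' win on conflict."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(result.get(key), dict) and isinstance(value, dict):
            result[key] = dict_merge(result[key], value)
        else:
            result[key] = value
    return result

def serialized_data_for(node, common_attrs, serialize_node):
    """Mirror the serializer workflow: a customized node keeps its pinned
    data as-is; otherwise node-level data is merged with the cluster-level
    attributes (in this sketch, cluster attrs win on key conflicts)."""
    if node.get('replaced_deployment_data'):
        return node['replaced_deployment_data']
    return dict_merge(serialize_node(node), common_attrs)
```

The split follows directly: `common_attrs` is what would be plumbed to the cluster level in ConfigDB, and `serialize_node(node)` is the node-level part.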
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
>
>


Re: [openstack-dev] [fuel] Component Leads Elections

2016-03-31 Thread Serg Melikyan
Hi fuelers,

only a few hours are left until the self-nomination period closes, but so
far we have neither consensus regarding how to proceed further nor any
candidates.

I've extended the self-nomination period by another week (until April 7,
23:59 UTC) and expect that by then we will either have candidates for each
of the three projects or a decision about how we are going to proceed
further if no one nominates themselves.

I propose to start by defining the steps that we are going to take if no
one nominates themselves by April 7, and to move forward with a separate
discussion regarding governance.

P.S. I strongly believe that declaring the Component Leads role obsolete
requires agreement among all members of the Fuel team, which may take quite
a lot of time. I think we should propose a change request to the existing
governance spec [0], and have a decision by the end of the Newton cycle.

References:
[0]
https://specs.openstack.org/openstack/fuel-specs/policy/team-structure.html

On Thu, Mar 31, 2016 at 3:22 AM, Evgeniy L  wrote:

> Hi,
>
> I'm not sure if this is the right place to continue this discussion, but if
> there are doubts that such a role is needed, we should not wait for another
> half a year to drop it.
>
> Also, I'm not sure if a single engineer (or two engineers) can handle the
> majority of upcoming patches + specs + meetings around features. Sergii and
> Igor put a lot of effort into making it work, but does it really scale?
>
> I think it would be better to offload more responsibilities to the core
> groups, and if the core team (of a specific project) wants to see a formal
> or informal leader, let them decide.
>
> I would be really interested to see feedback from current component leads.
>
> Thanks,
>
>
> On Wed, Mar 30, 2016 at 2:20 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dmitry,
>>
>> "No need to rush" does not mean we should postpone
>> team structure changes until Ocata. IMO, the CL role
>> (when it is exposed to Fuel) contradicts our
>> modularization activities. Fuel should be an aggregator
>> of components. What if we decide to use Ironic or
>> Neutron as Fuel components? Should we also choose an
>> Ironic CL? No! Ironic is an independent
>> project with its own PTL.
>>
>> I agree with Mike that we could remove this CL
>> role in a month if we have consensus. But does it
>> make any sense to choose CLs now and then
>> immediately remove this role? Probably, it is better
>> to make a decision right now. I'd really like to
>> see the opinions of our current CLs and other people
>> here in this ML thread.
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Tue, Mar 29, 2016 at 11:21 PM, Dmitry Borodaenko <
>> dborodae...@mirantis.com> wrote:
>>
>>> On Tue, Mar 29, 2016 at 03:19:27PM +0300, Vladimir Kozhukalov wrote:
>>> > > I think this call is too late to change a structure for now. I
>>> suggest
>>> > > that we always respect the policy we've accepted, and follow it.
>>> > >
>>> > > If Component Leads role is under a question, then I'd continue the
>>> > > discussion, hear opinion of current component leads, and give this a
>>> time
>>> > > to be discussed. I'd have nothing against removing this role in a
>>> month
>>> > > from now if we reach a consensus on this topic - no need to wait for
>>> the
>>> > > cycle end.
>>> >
>>> > Sure, there is no need to rush. I'd also like to see current CL
>>> opinions.
>>>
>>> Considering that, while there's an ongoing discussion on how to change
>>> Fuel team structure for Ocata, there's also an apparent consensus that
>>> we still want to have component leads for Newton, I'd like to call once
>>> again for volunteers to self-nominate for component leads of
>>> fuel-library, fuel-web, and fuel-ui. We've got 2 days left until
>>> nomination period is over, and no volunteer so far :(
>>>
>>> --
>>> Dmitry Borodaenko
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>


-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2016-03-31 Thread Bogdan Dobrelya
It is time for an update!
The previous idea with the committed state and automatic cross-repo
merge hooks in Zuul seems too complex to implement. So the "CI gate for
blah blah" magically becomes a manual helper tool for
reviewers/developers; see the docs update [0], [1].

You may start using it right now, as described in the docs. Hopefully,
it will help visualize data changes for complex patches better.

[0] https://review.openstack.org/#/c/299912/
[1] http://goo.gl/Pj3lNf

> On 01.12.2015 11:28, Aleksandr Didenko wrote:
>> Hi,
>> 
>>> pregenerated catalogs for the Noop tests to become the very first
>>> committed state in the data regression process has to be put in the
>>> *separate repo*
>> 
>> +1 to that, we can put this new repo into .fixtures.yml
>> 
>>> note, we could as well move the tests/noop/astute.yaml/ there
>> 
>> +1 here too, astute.yaml files are basically configuration fixtures, we
>> can put them into .fixtures.yml as well
> 
> I found a better (and easier for patch authors) way to use the data
> regression checks. The originally suggested workflow was:
> 
> 1.
> "The check should be done for every modular component (aka deployment
> task). Data generated in the noop catalog run for all classes and
> defines of a given deployment task should be verified against its
> "acknowledged" (committed) state."
> 
> This part remains the same with the only comment that the astute.yaml
> fixtures of deployment cases should be fetched from the
> fuel-noop-fixtures repo. And the committed state for generated catalogs
> should be
> stored there as well.
> 
> 2.
> "And fail the test gate, if changes has been found, like new parameter
> with a defined value, removed a parameter, changed a parameter's value."
> 
> This should be changed as following:
> - the data checks gate should be just a non-voting helper for reviewers
> and patch authors. Its only task would be to show induced data
> changes in a clear and fast view to help accept/update/reject a patch
> on review.
> - the data checks gate job should fetch the committed data state from
> the fuel-noop-fixtures repo and run regressions check with the patch
> under review checked out on fuel-library repo.
> - the Noop tests gate should be changed to fetch the astute.yaml
> fixtures from the fuel-noop-fixtures repo in order to run noop tests as
> usual.
> 
> 3.
> "In order to remove a regression, a patch author will have to add (and
> reviewers should acknowledge) detected changes in the committed state of
> the deployment data. This may be done manually, with a tool like [3] or
> by a pre-commit hook, or even at the CI side!"
> 
> Instead, patch authors would need to do nothing additional. Once accepted
> with wf+1, the patch on review should be merged with a pre-commit Zuul
> hook (is it possible?). The hook should just regenerate catalogs with
> the changes introduced by the patch and update the committed state of
> data in the fuel-noop-fixtures repo. After that, the patch may be safely
> merged to the fuel-library and everything will be up to date with the
> committed data state.
> 
> 4.
> "The regression check should show the diff between committed state and a
> new state proposed in a patch. Changed state should be *reviewed* and
> accepted with a patch, to became a committed one. So the deployment data
> will evolve with *only* approved changes. And those changes would be
> very easy to be discovered for each patch under review process!"
> 
> So this part would work even better now, with no additional actions
> required from either side of the review process.
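As a rough illustration of what the reviewer-facing helper could show, one can diff the committed catalog state against the catalog compiled with the patch under review. This sketch assumes the catalogs are available as plain-text dumps and uses a stock unified diff; the actual tool referenced in [0] and [1] may work differently:

```python
import difflib

def catalog_diff(committed, proposed, name='catalog'):
    """Return a unified diff between the committed catalog state and the
    catalog compiled with the patch under review ('' means no data changes)."""
    return ''.join(difflib.unified_diff(
        committed.splitlines(keepends=True),
        proposed.splitlines(keepends=True),
        fromfile='committed/' + name,
        tofile='proposed/' + name))
```

An empty result means the patch induced no catalog (i.e. deployment data) changes, which is exactly the signal a non-voting gate would surface for reviewers.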
> 
>> 
>> Regards,
>> Alex
>> 
>> 
>> On Mon, Nov 30, 2015 at 1:03 PM, Bogdan Dobrelya > > wrote:
>> 
>> On 20.11.2015 17:41, Bogdan Dobrelya wrote:
>> >> Hi,
>> >>
>> >> let me try to rephrase this a bit and Bogdan will correct me if I'm wrong
>> >> or missing something.
>> >>
>> >> We have a set of top-scope manifests (called Fuel puppet tasks) that we
>> >> use for OpenStack deployment. We execute those tasks with "puppet apply".
>> >> Each task is supposed to bring the target system into some desired state,
>> >> so puppet compiles a catalog and applies it. So basically, puppet catalog
>> >> = desired system state.
>> >>
>> >> So we can compile* catalogs for all top-scope manifests in the master
>> >> branch and store those compiled* catalogs in the fuel-library repo. Then
>> >> for each proposed patch CI will compare the new catalogs with the stored
>> >> ones and print out the difference, if any. This will pretty much show
>> >> what is going to be changed in the system configuration by the proposed
>> >> patch.
>> >>
>> >> We were discussing such checks several times before, iirc, but we did
>> >> not have the right tools to implement such a thing. Well, now we do :)
>> >> I think it could be quite useful even in non-voting mode.

Re: [openstack-dev] [FUEL] Timeout of deployment is exceeded & Fuel does not autodetect my hardware.

2016-03-31 Thread Samer Machara
Hi Sergii,
Here is the bug: https://bugs.launchpad.net/fuel/+bug/1564312
with the diagnostic snapshot.


- Original Message -
Sergii Golovatiuk sgolovatiuk at mirantis.com 
Thu Mar 31 09:09:39 UTC 2016 


Hi Samer,


On Thu, Mar 31, 2016 at 10:18 AM, Samer Machara < samer.machara at 
telecom-sudparis.eu > wrote:

> Bonjour, Hello.
>
> I'm trying to deploy the basic 3-node architecture to learn OpenStack:
> 1 controller and 2 compute nodes. After several hours of deployment, I got
> this error: 'Timeout of deployment is exceeded.' So I tried to redeploy
> it, but without success.
> I have installed Fuel 7.0 with the launch_8GB.sh script, I have 130GB of
> HDD free, and my download speed is 6.41 Mbit/s. It should work well.

To be able to help, I am asking you to share more details about your setup.
The easiest way is to generate a 'Diagnostic Snapshot' [1] and publish it on
some file hosting. The developers will then be able to analyse the logs.

However, if that is a problem, I can give some advice on how to start
debugging yourself.
1. Start by looking at the orchestrator log to see what task failed and
why. Just look through
 /var/log/docker-logs/astute/astute.log
on the master node.

2. If it is a puppet task, check the puppet log of the failed node to see
what exactly happened:
/var/log/docker-logs/remote/node-XXX.domain.tld/puppet-apply.log

That will give you more clarity about what has happened.

[1] 
https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Here_is_how_you_file_a_bug
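As a starting point for step 1, even a trivial scan of astute.log for failure-looking lines can narrow things down before reading the whole file. A minimal sketch; the error patterns below are guesses, not an exhaustive list of what Astute or puppet actually log:

```python
import re

# Words that typically mark a failed task or a deployment timeout (assumed set)
SUSPECT_RE = re.compile(r'\b(error|failed|failure|timeout)\b', re.IGNORECASE)

def find_suspect_lines(log_text):
    """Return (line_number, line) pairs that look like failures in a log."""
    return [(i, line.strip())
            for i, line in enumerate(log_text.splitlines(), start=1)
            if SUSPECT_RE.search(line)]
```

Running it over `/var/log/docker-logs/astute/astute.log` (or the per-node puppet-apply.log) points you at the lines worth reading in context.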
 

- Original Message -

From: "Samer Machara"  
To: "OpenStack Development Mailing List"  
Sent: Thursday, March 31, 2016 10:55:43 AM 
Subject: Re: [FUEL] Timeout of deployment is exceeded & Fuel does not 
autodetect my hardware. 

Hi, here is a Diagnostic Snapshot. 

- Original Message -

From: "Samer Machara"  
To: "OpenStack Development Mailing List"  
Sent: Thursday, March 31, 2016 10:18:59 AM 
Subject: [FUEL] Timeout of deployment is exceeded & Fuel does not autodetect my 
hardware. 

Bonjour, Hello. 

I'm trying to deploy the basic 3-node architecture to learn OpenStack: 1
controller and 2 compute nodes. After several hours of deployment, I got this
error: 'Timeout of deployment is exceeded.' So I tried to redeploy it, but
without success.
I have installed Fuel 7.0 with the launch_8GB.sh script, I have 130GB of HDD
free, and my download speed is 6.41 Mbit/s. It should work well.

To see what is happening, I tried to deploy one node at a time, starting with
the controller node. However, I still hit the same problem: 'Timeout of
deployment is exceeded.'

Another thing: I removed a node by mistake and now Fuel does not autodetect it.
I even tried to add another node by cloning a VM and changing its MAC, but Fuel
does not detect it either. All VMs are booting from PXE.
Is there a command to trigger the autodetection?

Here is the astute.log 

Please help me to see what is going on 
Thanks in advance 
Samer. 






Re: [openstack-dev] [FUEL] Timeout of deployment is exceeded & Fuel does not autodetect my hardware.

2016-03-31 Thread Sergii Golovatiuk
Hi,

According to the logs, I see that the manifest
/etc/puppet/modules/osnailyfacter/modular/ceilometer/controller.pp
exceeded its timeout. However, at the same time I don't see any puppet logs
from the primary controllers :\


Analysing other tasks, such as database.pp: it ran for 10
minutes when it usually takes 1 minute. So I assume that the machine hosting
your Fuel lab is not powerful enough :\



--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Thu, Mar 31, 2016 at 10:55 AM, Samer Machara <
samer.mach...@telecom-sudparis.eu> wrote:

> Hi, here is a Diagnostic Snapshot.
>
> --
> *From: *"Samer Machara" 
> *To: *"OpenStack Development Mailing List" <
> openstack-dev@lists.openstack.org>
> *Sent: *Thursday, March 31, 2016 10:18:59 AM
> *Subject: *[FUEL] Timeout of deployment is exceeded & Fuel does not
> autodetect my hardware.
>
>
> Bonjour, Hello.
>
> I'm trying to deploy the basic 3-node architecture to learn OpenStack:
> 1 controller and 2 compute nodes. After several hours of deployment, I got
> the error 'Timeout of deployment is exceeded.' So I tried to redeploy it,
> but without success.
> I installed Fuel 7.0 with the launch_8GB.sh script; I have 130GB of free
> HDD space and my download speed is 6.41 Mbit/s, so it should work well.
>
> To see what is happening, I tried to deploy one node at a time, starting
> with the controller node. However, I still hit the same problem:
> 'Timeout of deployment is exceeded.'
>
> Another thing: I removed a node by mistake and now Fuel does not autodetect
> it. I even tried to add another node by cloning a VM and changing its MAC,
> but Fuel does not detect it either. All VMs boot from PXE.
> Is there a command to trigger the autodetection?
>
> Here is the astute.log
>
>
>  Please help me to see what is going on
> Thanks in advance
>Samer.
>
>
>
>


Re: [openstack-dev] [Fuel] [Shotgun] Decoupling Shotgun from Fuel

2016-03-31 Thread Evgeniy L
Hi,

The problems I see with the current Shotgun are:
1. Lack of parallelism, so it's not going to fetch data fast enough from
medium/big clouds.
2. There should be an easy way to run it manually (it's possible, but there
is no ready-to-use config); that would be really helpful in case
Nailgun/Astute/MCollective are down.

As far as I know, the 1st is partly covered by Ansible, but the problem is
that it executes a single task in parallel, so there is a probability that a
lagging node will slow down fetching from the entire environment.
Also, we would have to build a tool around Ansible to generate playbooks.
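To illustrate point 1, per-node parallelism can be as simple as backgrounding one collection job per node, so a lagging node only delays its own report. A toy sketch — `run_on_node` is a hypothetical placeholder for the real ssh/scp call, not actual Shotgun code:

```shell
# Toy sketch of per-node parallel collection. run_on_node stands in for
# the real remote call and just fabricates a report here.
run_on_node() {
    # a real tool would do something like: ssh "$1" 'tar cz /var/log'
    echo "report from $1"
}

collect_all() {
    outdir=$1; shift
    for node in "$@"; do
        # each node runs in its own background job
        run_on_node "$node" > "$outdir/$node.txt" &
    done
    wait   # join all per-node jobs before packing the snapshot
}
```

With this shape, total collection time is bounded by the slowest node rather than the sum of all nodes.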

Thanks,

On Wed, Mar 30, 2016 at 5:18 PM, Tomasz 'Zen' Napierala <
tnapier...@mirantis.com> wrote:

> Hi,
>
> Do we have any requirements for the new tool? Do we know what we don't
> like about the current implementation, what should be avoided, etc.? Before
> that we can only speculate.
> From my ops experience, shotgun-like tools will not work conveniently on
> medium to big environments. Even on a medium env the amount of logs is just
> too huge to handle with such a simple tool. In such environments a better
> pattern is to use a dedicated log collection / analysis tool, just like
> StackLight.
> On the other hand, I'm not sure if Ansible is the right tool for that. It
> has some features (like the 'fetch' module), but in general it's a
> configuration management tool, and I'm not sure how it would act under
> such heavy load.
>
> Regards,
>
> > On 30 Mar 2016, at 15:20, Vladimir Kozhukalov 
> wrote:
> >
> > ​Igor,
> >
> > I could not agree more. Wherever possible we should
> > use existing mature solutions. Ansible is a really
> > convenient and well-known solution; let's try to
> > use it.
> >
> > Yet another thing should be taken into account.
> > One of Shotgun's features is the diagnostic report
> > that can then be attached to bugs to identify
> > the content of the env. This report can also be
> > used to reproduce an env and then fight a bug.
> > I'd like us to keep this kind of report.
> > Is it possible to implement such a feature
> > using Ansible? If yes, then let's switch to Ansible
> > as soon as possible.
> >
> > ​
> >
> > Vladimir Kozhukalov
> >
> > On Wed, Mar 30, 2016 at 3:31 PM, Igor Kalnitsky 
> wrote:
> > Neil Jerram wrote:
> > > But isn't Ansible also over-complicated for just running commands over
> SSH?
> >
> > It may be not so "simple" to ignore that. Ansible has a lot of modules
> > which might be very helpful. For instance, Shotgun makes a database
> > dump and there're Ansible modules with the same functionality [1].
> >
> > Don't think I advocate Ansible as a replacement. My point is, let's
> > think about reusing ready solutions. :)
> >
> > - igor
> >
> >
> > [1]: http://docs.ansible.com/ansible/list_of_database_modules.html
> >
> > On Wed, Mar 30, 2016 at 1:14 PM, Neil Jerram 
> wrote:
> > >
> > > FWIW, as a naive bystander:
> > >
> > > On 30/03/16 11:06, Igor Kalnitsky wrote:
> > >> Hey Fuelers,
> > >>
> > >> I know that you probably wouldn't like to hear that, but in my opinion
> > >> Fuel has to stop using Shotgun. It's nothing more than a command runner
> > >> over SSH. Besides, it has well-known issues such as retrieving remote
> > >> directories with broken symlinks inside.
> > >
> > > It makes sense to me that a command runner over SSH might not need to
> be
> > > a whole Fuel-specific component.
> > >
> > >> So I propose to find a modern alternative and reuse it. If we stop
> > >> supporting Shotgun, we can spend extra time to focus on more important
> > >> things.
> > >>
> > >> As an example, we can consider using Ansible. It should not be tricky
> > >> to generate an Ansible playbook instead of generating a Shotgun one.
> > >> Ansible is a well-known tool for devops and cloud operators, and we
> > >> will only benefit if we provide the possibility to extend diagnostic
> > >> recipes in a way that is usual for them. What do you think?
> > >
> > > But isn't Ansible also over-complicated for just running commands over
> SSH?
> > >
> > > Neil
> > >
> > >
> > >
> >
> >
> >
> >
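For concreteness, the Ansible-based collection idea discussed in this thread could start from something as small as the playbook below. This is a hypothetical sketch — the `fuel_nodes` host group, file paths, and snapshot layout are made up for illustration, not an agreed Fuel design:

```yaml
# Hypothetical sketch: collect diagnostics with stock Ansible modules.
# The "fuel_nodes" group and the paths are illustrative only.
- hosts: fuel_nodes
  gather_facts: false
  tasks:
    - name: Fetch the puppet apply log from each node
      fetch:
        src: /var/log/puppet/puppet-apply.log
        dest: snapshot/          # files land under snapshot/<hostname>/...

    - name: Dump all databases (cf. the database modules mentioned above)
      mysql_db:
        state: dump
        name: all
        target: /tmp/all-databases.sql
```

Generating such a playbook from the environment description would replace the corresponding Shotgun config generation step.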

Re: [openstack-dev] [fuel] Component Leads Elections

2016-03-31 Thread Evgeniy L
Hi,

I'm not sure if this is the right place to continue this discussion, but if
there are doubts that such a role is needed, we should not wait another
half a year to drop it.

Also, I'm not sure a single engineer (or two engineers) can handle the
majority of upcoming patches + specs + meetings around features. Sergii and
Igor put a lot of effort into making it work, but does it really scale?

I think it would be better to offload more responsibilities to the core
groups, and if the core team (of a specific project) wants a formal or
informal leader, let them decide.

I would be really interested to see feedback from the current component leads.

Thanks,


On Wed, Mar 30, 2016 at 2:20 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dmitry,
>
> "No need to rush" does not mean we should postpone
> team structure changes until Ocata. IMO, the CL role
> (as it is exposed in Fuel) contradicts our
> modularization activities. Fuel should be an aggregator
> of components. What if we decide to use Ironic or
> Neutron as Fuel components? Should we then also choose an
> Ironic CL? No! Ironic is an independent
> project with its own PTL.
>
> I agree with Mike that we could remove this CL
> role in a month if we have consensus. But does it
> make any sense to choose CLs now and then
> immediately remove the role? Probably it is better
> to make the decision right now. I'd really like to
> see the opinions of our current CLs and other people
> here in this ML thread.
>
>
>
> Vladimir Kozhukalov
>
> On Tue, Mar 29, 2016 at 11:21 PM, Dmitry Borodaenko <
> dborodae...@mirantis.com> wrote:
>
>> On Tue, Mar 29, 2016 at 03:19:27PM +0300, Vladimir Kozhukalov wrote:
>> > > I think this call is too late to change a structure for now. I suggest
>> > > that we always respect the policy we've accepted, and follow it.
>> > >
>> > > If Component Leads role is under a question, then I'd continue the
>> > > discussion, hear opinion of current component leads, and give this a
>> time
>> > > to be discussed. I'd have nothing against removing this role in a
>> month
>> > > from now if we reach a consensus on this topic - no need to wait for
>> the
>> > > cycle end.
>> >
>> > Sure, there is no need to rush. I'd also like to see current CL
>> opinions.
>>
>> Considering that, while there's an ongoing discussion on how to change
>> Fuel team structure for Ocata, there's also an apparent consensus that
>> we still want to have component leads for Newton, I'd like to call once
>> again for volunteers to self-nominate for component leads of
>> fuel-library, fuel-web, and fuel-ui. We've got 2 days left until the
>> nomination period is over, and no volunteers so far :(
>>
>> --
>> Dmitry Borodaenko
>>
>
>
>
>


Re: [openstack-dev] [Fuel] New version of fuel-devops (2.9.20)

2016-03-31 Thread Roman Prykhodchenko
I've just tried to look up fuel-devops on PyPI and found nothing. Let's set
up everything to let OpenStack CI release this package to PyPI and start
publishing releases there.

Also, it's better to use openstack-announce to announce new releases.

- romcheg



> On 31 Mar 2016, at 11:52, Dennis Dmitriev wrote:
> 
> Hi All,
> 
> We are going to update the 'fuel-devops' framework on our product CI to
> the version 2.9.20.
> 
> Changes since 2.9.17:
> 
> * Fixes:
> 
> - Fixes related to time synchronization issues: [1], [2], [10]
> - Fix for 'dos.py create' CLI command [4]
> - Use 0644 access mode for libvirt volumes created by fuel-devops [8]
> - Do not raise an exception while removing a libvirt object, if the
> object is missing in libvirt. Just skip it. [9]
> - Add timeout parameter to the tcp_ping method [7]
> 
> * Features:
> 
> - Allow passing custom SSH credentials for slave nodes via the environment
> variables ENV_SLAVE_LOGIN and ENV_SLAVE_PASSWORD [3]
> - Get unique bridge/interface prefixes for each environment on the
> host, using a 3-digit hash calculated from the ENV_NAME and DATABASES
> objects as a salt for the names of libvirt bridges and qemu network
> interfaces [5]
> - Extend Interface and Network models with methods for blocking traffic
> (for failover/destructive tests) [6]
> - Emulate multipath disk devices for libvirt VMs. Environment variable
> SLAVE_MULTIPATH_DISKS_COUNT is a multiplier for 'system' and 'cinder'
> volume names for slave nodes. 'multipath_count' key can be used for
> volumes in YAML template for more flexible configuration [11]
> - Enable NUMA for libvirt VMs [12]. To use NUMA, the following
> environment variables should be specified:
>  $ export NUMA_NODES=2  # amount of NUMA nodes on each VM (including
> Fuel master)
>  $ export DRIVER_ENABLE_ACPI=true  # required for NUMA
>  $ export IFACE_0=ens3  # can be required for fuel-qa system tests
> because of enabled ACPI
>  ...
>  $ export IFACE_5=ens8
> 
> 
> All changes since 2.9.17: [13]
> 
> [1] https://review.openstack.org/#/c/272200/
> [2] https://review.openstack.org/#/c/277900/
> [3] https://review.openstack.org/#/c/281262/
> [4] https://review.openstack.org/#/c/282007/
> [5] https://review.openstack.org/#/c/282732/
> [6] https://review.openstack.org/#/c/275134/
> [7] https://review.openstack.org/#/c/284422/
> [8] https://review.openstack.org/#/c/285241/
> [9] https://review.openstack.org/#/c/294143/
> [10] https://review.openstack.org/#/c/294002/
> [11] https://review.openstack.org/#/c/286804/
> [12] https://review.openstack.org/#/c/292352/
> 
> [13] https://github.com/openstack/fuel-devops/compare/2.9.17...2.9.20
> 
> --
> Regards,
> Dennis Dmitriev
> QA Engineer,
> Mirantis Inc. http://www.mirantis.com
> e-mail/jabber: ddmitr...@mirantis.com
> 
> 





Re: [openstack-dev] [Fuel] [RDO] Volunteers needed

2016-03-31 Thread Vladimir Kozhukalov
Aleksandra,

You are right, we need to split this task into several
smaller work items. To start, I have created a BP [1].
Let's discuss this at the Fuel IRC meeting [2] today; I have
put this topic on the agenda.

And it is a great idea to put this task on the list
for interns. I'll certainly do that.

[1] https://blueprints.launchpad.net/fuel/+spec/deploy-rdo-using-fuel
[2] https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda


Vladimir Kozhukalov

On Wed, Mar 30, 2016 at 8:41 PM, Aleksandra Fedorova  wrote:

> Hi, Vladimir,
>
> this is a great feature, which could make Fuel truly universal. But I
> think it is hard to fully commit to the whole thing at once,
> especially for new contributors.
>
> Let's start by splitting it into some observable chunks of work, in
> the form of a wiki page, blueprint or spec. That way it will become
> visible where to start and how much effort it would take.
>
> Also, do you think it fits into Internship ideas [1] ?
>
> [1] https://wiki.openstack.org/wiki/Internship_ideas
>
>
> On Tue, Mar 29, 2016 at 3:48 PM, Vladimir Kozhukalov
>  wrote:
> > Dear all,
> >
> > Fuel currently supports deployment of OpenStack using DEB packages
> > (particularly Ubuntu, and Debian in the near future). We also used to
> > deploy OpenStack on CentOS, but at some point we switched our focus to
> > Ubuntu. It is not so hard to implement deployment of RDO using Fuel.
> > Volunteers are welcome. You can contact the Fuel team here on the
> > [openstack-dev] mailing list or in the #fuel IRC channel. It would be
> > nice to see more people from different backgrounds contributing to Fuel.
> >
> >
> > Vladimir Kozhukalov
> >
> >
> >
>
>
>
> --
> Aleksandra Fedorova
> CI Team Lead
> bookwar
>
>


Re: [openstack-dev] [FUEL] Timeout of deployment is exceeded & Fuel does not autodetect my hardware.

2016-03-31 Thread Sergii Golovatiuk
Hi Samer,


On Thu, Mar 31, 2016 at 10:18 AM, Samer Machara <
samer.mach...@telecom-sudparis.eu> wrote:

> Bonjour, Hello.
>
> I'm trying to deploy the basic 3-node architecture to learn OpenStack:
> 1 controller and 2 compute nodes. After several hours of deployment, I got
> the error 'Timeout of deployment is exceeded.' So I tried to redeploy it,
> but without success.
> I installed Fuel 7.0 with the launch_8GB.sh script; I have 130GB of free
> HDD space and my download speed is 6.41 Mbit/s, so it should work well.
>

To be able to help, I'd ask you to share more details about your setup. The
easiest way is to generate a 'Diagnostic Snapshot' [1] and publish it on
some file hosting; the developers will then be able to analyse the logs.

However, if that is a problem, I can give you some advice on how to start
debugging yourself.
1. Start by looking at the orchestrator log to see which task failed and
why. Just look through
 /var/log/docker-logs/astute/astute.log
on the master node.

2. If it is a puppet task, check the puppet log of the failed node to see
what exactly happened:
/var/log/docker-logs/remote/node-XXX.domain.tld/puppet-apply.log

That should give you more clarity about what has happened.

[1]
https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Here_is_how_you_file_a_bug


>

> To see what is happening, I tried to deploy one node at a time, starting
> with the controller node. However, I still hit the same problem:
> 'Timeout of deployment is exceeded.'
>
> Another thing: I removed a node by mistake and now Fuel does not autodetect
> it. I even tried to add another node by cloning a VM and changing its MAC,
> but Fuel does not detect it either. All VMs boot from PXE.
> Is there a command to trigger the autodetection?
>
> Here is the astute.log
>
>
>  Please help me to see what is going on
> Thanks in advance
>Samer.
>
>
>
>
>


Re: [openstack-dev] [Fuel] [RDO] Volunteers needed

2016-03-30 Thread Aleksandra Fedorova
Hi, Vladimir,

this is a great feature, which could make Fuel truly universal. But I
think it is hard to fully commit to the whole thing at once,
especially for new contributors.

Let's start by splitting it into some observable chunks of work, in
the form of a wiki page, blueprint or spec. That way it will become
visible where to start and how much effort it would take.

Also, do you think it fits into Internship ideas [1] ?

[1] https://wiki.openstack.org/wiki/Internship_ideas


On Tue, Mar 29, 2016 at 3:48 PM, Vladimir Kozhukalov
 wrote:
> Dear all,
>
> Fuel currently supports deployment of OpenStack using DEB packages
> (particularly Ubuntu, and Debian in the near future). We also used to
> deploy OpenStack on CentOS, but at some point we switched our focus to
> Ubuntu. It is not so hard to implement deployment of RDO using Fuel.
> Volunteers are welcome. You can contact the Fuel team here on the
> [openstack-dev] mailing list or in the #fuel IRC channel. It would be
> nice to see more people from different backgrounds contributing to Fuel.
>
>
> Vladimir Kozhukalov
>
>



-- 
Aleksandra Fedorova
CI Team Lead
bookwar



Re: [openstack-dev] [Fuel] Extra red tape for filing bugs

2016-03-30 Thread Roman Prykhodchenko
We also often use the bug tracker as a TODO tracker, and this template does
not work for TODOs at all. I understand that it's not technically mandatory
to follow it, but if the Fuel Bug Checker is going to spam on every single
TODO, our inboxes will overflow.

> On 30 Mar 2016, at 17:37, Roman Prykhodchenko wrote:
> 
> Guys,
> 
> I'm not trying to be a foreteller, but with a bug template this huge and
> complicated, people will either not follow it or track bugs somewhere else.
> Perhaps we should make it simpler?
> 
> Detailed bug description:
> 
> Steps to reproduce:
> 
> Expected results:
> 
> Actual result:
> 
> Reproducibility:
> 
> Workaround:
> 
> Impact:
> 
> Description of the environment:
> Operation system: 
> Versions of components: 
> Reference architecture: 
> Network model: 
> Related projects installed: 
> Additional information:
> 
> 
> 
> - romcheg




