Re: [openstack-dev] [Ceilometer]:Duplicate messages with Ceilometer Kafka Publisher.

2016-08-11 Thread Simon Pasquier
Hi
Which version of Kafka do you use?
BR
Simon

On Thu, Aug 11, 2016 at 10:13 AM, Raghunath D  wrote:

> Hi ,
>
>   We are injecting events into our custom plugin in Ceilometer.
>   The Ceilometer pipeline.yaml is configured to publish these events over
> Kafka and UDP, and we consume these samples using Kafka and UDP clients.
>
> *  KAFKA publisher:*
> *  ---*
>   When the events are sent continuously, we can see duplicate messages
> received in the Kafka client.
>   From the log it seems the Ceilometer Kafka publisher fails to send
> messages, yet these messages are still received by the Kafka server. So when
> the publisher resends the failed messages, we see duplicate messages
> received in the Kafka client.
>   Please find the attached log for reference.
>   Is this a known issue?
>   Is there any workaround for this issue?
>
>  * UDP publisher:*
> No duplicate message issue is seen here; it works as expected.
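
For reference, the dual-publisher setup described above would look roughly like the following pipeline.yaml sink. This is a hedged sketch: the broker address, topic name, and UDP endpoint are placeholders, not values from the original report.

```yaml
# Hypothetical event sink publishing the same samples over Kafka and UDP;
# hosts, ports and topic name are illustrative placeholders.
sinks:
    - name: event_sink
      transformers:
      publishers:
          - kafka://192.168.0.10:9092?topic=ceilometer
          - udp://192.168.0.20:4952
```

If the Kafka publisher retries sends that in fact reached the broker, duplicates on the consumer side are expected unless the consumer deduplicates.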
>
>
>
> With Best Regards
> Raghunath Dudyala
> Tata Consultancy Services Limited
> Mailto: raghunat...@tcs.com
> Website: http://www.tcs.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [Fuel] [Plugins] Netconfig tasks changes

2016-05-25 Thread Simon Pasquier
Hi Adam,
Maybe you want to look into network templates [1]? Although the
documentation is a bit sparse, it allows you to define flexible network
mappings.
BR,
Simon
[1]
https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates
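
For readers unfamiliar with the feature: a network template declares additional networks and maps network roles onto their bridges. A minimal, hypothetical fragment (network, bridge, and role names are illustrative) could look like this:

```yaml
# Hypothetical network template fragment: declare a 'monitoring' network
# backed by bridge br-mon and map a network role onto that bridge.
network_assignments:
    monitoring:
        ep: br-mon
network_scheme:
    custom:
        roles:
            influxdb_vip: br-mon
```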

On Wed, May 25, 2016 at 10:26 AM, Adam Heczko  wrote:

> Thanks Alex, I will experiment with it once again, although AFAIR it doesn't
> solve the thing I'd like to do.
> I'll come back to you in case of any questions.
>
>
> On Wed, May 25, 2016 at 10:00 AM, Aleksandr Didenko  > wrote:
>
>> Hey Adam,
>>
>> in Fuel we have the following option (checkbox) on Network Setting tab:
>>
>> Assign public network to all nodes
>> When disabled, public network will be assigned to controllers only
>>
>> So if you uncheck it (by default it's unchecked) then public network and
>> 'br-ex' will exist on controllers only. Other nodes won't even have
>> "Public" network on node interface configuration UI.
>>
>> Regards,
>> Alex
>>
>> On Wed, May 25, 2016 at 9:43 AM, Adam Heczko 
>> wrote:
>>
>>> Hello Alex,
>>> I have a question about the proposed changes.
>>> Is it possible to introduce new vlan and associated bridge only for
>>> controllers?
>>> I think about DMZ use case and possibility to expose public IPs/VIP and
>>> API endpoints on controllers on a completely separate L2 network (segment
>>> vlan/bridge) not present on any other nodes than controllers.
>>> Thanks.
>>>
>>> On Wed, May 25, 2016 at 9:28 AM, Aleksandr Didenko <
>>> adide...@mirantis.com> wrote:
>>>
 Hi folks,

 we had to revert those changes [0] since it's impossible to properly
 handle two different netconfig tasks for multi-role nodes. So everything
 stays as it was before - we have a single task 'netconfig' to configure
 the network for all roles and you don't need to change anything in your
 plugins. Sorry for the inconvenience.

 Our current plan for fixing network idempotency is to keep one task but
 change the 'cross-depends' parameter to a yaql_exp. This will allow us to use a
 single 'netconfig' task for all roles but at the same time we'll be able to
 properly order it: netconfig on non-controllers will be executed only
 after the 'virtual_ips' task.

 Regards,
 Alex

 [0] https://review.openstack.org/#/c/320530/


 On Thu, May 19, 2016 at 2:36 PM, Aleksandr Didenko <
 adide...@mirantis.com> wrote:

> Hi all,
>
> please be aware that now we have two netconfig tasks (in Fuel 9.0+):
>
> - netconfig-controller - executed on controllers only
> - netconfig - executed on all other nodes
>
> The Puppet manifest is the same, but the tasks are different. We had to do
> this [0] in order to fix network idempotency issues [1].
>
> So if you have 'netconfig' requirements in your plugin's tasks, please
> make sure to add 'netconfig-controller' as well, to work properly on
> controllers.
>
> Regards,
> Alex
>
> [0] https://bugs.launchpad.net/fuel/+bug/1541309
> [1]
> https://review.openstack.org/#/q/I229957b60c85ed94c2d0ba829642dd6e465e9eca,n,z
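
To illustrate the guidance quoted above (which was later reverted, as noted earlier in this thread): a plugin task that previously depended on 'netconfig' alone would have needed to list both tasks. The task id and manifest paths below are hypothetical.

```yaml
# Hypothetical plugin task depending on both netconfig variants.
- id: my-plugin-network-task
  type: puppet
  version: 2.0.0
  requires: [netconfig, netconfig-controller]
  parameters:
    puppet_manifest: puppet/manifests/network.pp
    puppet_modules: puppet/modules
    timeout: 600
```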
>





>>>
>>>
>>> --
>>> Adam Heczko
>>> Security Engineer @ Mirantis Inc.
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Adam Heczko
> Security Engineer @ Mirantis Inc.
>
>
>


[openstack-dev] [Fuel][Plugins] custom roles need to configure the default gateway

2016-05-20 Thread Simon Pasquier
Hello,
This is a heads-up for the plugin developers because we found this issue
[1] with the StackLight plugins. If your plugin targets MOS 8 and provides
custom roles, you probably want to call the 'configure_default_route' task
otherwise the nodes will use the Fuel node as the default gateway instead
of the virtual router on the management network.
I did a quick test and found out that for example, the detach-database and
detach-rabbitmq plugins are affected by this bug.
Note that AFAICT it applies only if you want to support MOS 8 (and before).
See [2] for the details.
BR,
Simon
[1] https://bugs.launchpad.net/lma-toolchain/+bug/1583994
[2] https://bugs.launchpad.net/fuel/+bug/1541309
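
A hedged sketch of what the fix could look like for an affected plugin: re-declare the core 'configure_default_route' task for the plugin's custom role so its nodes pick up the virtual router as the default gateway. The group name and ordering below are illustrative assumptions; check the bug reports above for the actual changes.

```yaml
# Hypothetical deployment_tasks.yaml entry: run the core
# configure_default_route task on a plugin-defined role as well.
# 'my-custom-role' is a placeholder; manifest details are omitted.
- id: configure_default_route
  type: puppet
  groups: [my-custom-role]
  requires: [netconfig]
  required_for: [deploy_end]
```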


Re: [openstack-dev] [osops-tools-monitoring][monitoring-for-openstack] Code duplication

2016-05-20 Thread Simon Pasquier
Hello,
You can find the rationale in the review [1] importing monitoring-for-openstack into osops-tools-monitoring.
Basically it was asked by the operators community to avoid the sprawl of
repositories.
BR,
Simon
[1] https://review.openstack.org/#/c/248352/

On Fri, May 20, 2016 at 11:08 AM, Martin Magr  wrote:

> Greetings guys,
>
>   there is a duplication of code within openstack/osops-tools-monitoring
> and openstack/monitoring-for-openstack projects.
>
> It seems that m-o-f became part of o-t-m, but the former project wasn't
> deleted. I was just wondering if there is a reason for the duplication (or
> fork, considering the projects have different core groups maintaining each)?
>
> I'm assuming that m-o-f is just a leftover, so can you guys tell me what
> was the reason to create one project to rule them all (e.g.
> openstack/osops-tools-monitoring) instead of keeping the small projects
> separate?
>
> Thanks in advance for answer,
> Martin
>
> --
> Martin Mágr
> Senior Software Engineer
> Red Hat Czech
>


Re: [openstack-dev] [Fuel][Plugins] Tasks ordering between plugins

2016-05-18 Thread Simon Pasquier
Hi Matthew,

Thanks for the reply.

On Tue, May 17, 2016 at 5:33 PM, Matthew Mosesohn <mmoses...@mirantis.com>
wrote:

> Hi Simon,
>
> For 8.0 and earlier, I would deploy ElasticSearch before deploy_end
> and LMA collector after post_deploy_start
>
>
Unfortunately this isn't possible because the final bits of the
Elasticsearch configuration need to happen only once all the ES nodes have
joined the cluster.
And I didn't find a way (with MOS 8) to run this task during the deployment
phase after both the primary ES and ES groups have been executed.


> For Mitaka and Newton releases, the task graph now skips dependencies
> that are not found for the role being processed. Now this "requires"
> dependency will work that previously errored.
>

Good to know!

Simon


>
> Best Regards,
> Matthew Mosesohn
>
> On Tue, May 17, 2016 at 6:27 PM, Simon Pasquier <spasqu...@mirantis.com>
> wrote:
> > I'm resurrecting this thread because I didn't manage to find a satisfying
> > solution to deal with this issue.
> >
> > First let me provide more context on the use case. The
> Elasticsearch/Kibana
> > and LMA collector plugins need to synchronize their deployment. Without
> too
> > many details, here is the workflow when both plugins are deployed:
> > 1. [Deployment] Install the Elasticsearch/Kibana primary node.
> > 2. [Deployment] Install the other Elasticsearch/Kibana nodes.
> > 3. [Post-Deployment] Configure the Elasticsearch cluster.
> > 4. [Post-Deployment] Install and configure the LMA collector.
> >
> > Task #4 should happen after #3 so we've specified the dependency in
> > deployment_tasks.yaml [0] but when the Elasticsearch/Kibana plugin isn't
> > deployed in the same environment (which is a valid case), it fails [1]
> with:
> >
> > Tasks 'elasticsearch-kibana-configuration, influxdb-configuration' can't
> be
> > in requires|required_for|groups|tasks for [lma-backends] because they
> don't
> > exist in the graph
> >
> > To work around this restriction, we're using 'upload_nodes_info' as an
> anchor
> > task [2][3] since it is always present in the graph but this isn't really
> > elegant. Any suggestion to improve this?
> >
> > BR,
> > Simon
> >
> > [0]
> >
> https://github.com/openstack/fuel-plugin-lma-collector/blob/fd9337b43b6bdae6012f421e22847a1b0307ead0/deployment_tasks.yaml#L123-L139
> > [1] https://bugs.launchpad.net/lma-toolchain/+bug/1573087
> > [2]
> >
> https://github.com/openstack/fuel-plugin-lma-collector/blob/56ef5c42f4cd719958c4c2ac3fded1b08fe2b90f/deployment_tasks.yaml#L25-L37
> > [3]
> >
> https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/4c5736dadf457b693c30e20d1a2679165ae1155a/deployment_tasks.yaml#L156-L173
> >
> > On Fri, Jan 29, 2016 at 4:27 PM, Igor Kalnitsky <ikalnit...@mirantis.com
> >
> > wrote:
> >>
> >> Hey folks,
> >>
> >> Simon P. wrote:
> >> > 1. Run task X for plugin A (if installed).
> >> > 2. Run task Y for plugin B (if installed).
> >> > 3. Run task Z for plugin A (if installed).
> >>
> >> Simon, could you please explain why you need this in the first place? I
> >> can imagine this case only if your two plugins are kinda dependent on
> >> each other. In this case, it's better to do what was said by Andrew W.
> >> - set 'Task Y' to require 'Task X' and that requirement will be
> >> satisfied anyway (even if Task X doesn't exist in the graph).
> >>
> >>
> >> Alex S. wrote:
> >> > Before we get rid of tasks.yaml, can we provide a mechanism that plugin
> >> > devs could leverage to have tasks execute at specific points in the
> >> > deploy process?
> >>
> >> Yeah, I think that may be useful sometime. However, I'd prefer to
> >> avoid anchor usage as much as possible. There's no guarantee that
> >> another plugin didn't make destructive changes earlier that break
> >> you later. Anchors are a good way to resolve possible conflicts, but they
> >> aren't bulletproof.
> >>
> >> - igor
> >>
> >> On Thu, Jan 28, 2016 at 1:31 PM, Bogdan Dobrelya <
> bdobre...@mirantis.com>
> >> wrote:
> >> > On 27.01.2016 14:44, Simon Pasquier wrote:
> >> >> Hi,
> >> >>
> >> >> I see that tasks.yaml is going to be deprecated in the future MOS
> >> >> versions [1]. I've got one question regarding the ordering of tasks
> >> >> between different plugins.
> >> With tasks.yaml, it was possible to coordinate the execution of tasks
> >> between plugins without prior knowledge of which plugins were installed [2].

Re: [openstack-dev] [Fuel][Plugins] Tasks ordering between plugins

2016-05-17 Thread Simon Pasquier
I'm resurrecting this thread because I didn't manage to find a satisfying
solution to deal with this issue.

First let me provide more context on the use case. The Elasticsearch/Kibana
and LMA collector plugins need to synchronize their deployment. Without too
many details, here is the workflow when both plugins are deployed:
1. [Deployment] Install the Elasticsearch/Kibana primary node.
2. [Deployment] Install the other Elasticsearch/Kibana nodes.
3. [Post-Deployment] Configure the Elasticsearch cluster.
4. [Post-Deployment] Install and configure the LMA collector.

Task #4 should happen after #3 so we've specified the dependency in
deployment_tasks.yaml [0] but when the Elasticsearch/Kibana plugin isn't
deployed in the same environment (which is a valid case), it fails [1] with:

Tasks 'elasticsearch-kibana-configuration, influxdb-configuration' can't be
in requires|required_for|groups|tasks for [lma-backends] because they don't
exist in the graph

To work around this restriction, we're using 'upload_nodes_info' as an
anchor task [2][3] since it is always present in the graph but this isn't
really elegant. Any suggestion to improve this?

BR,
Simon

[0]
https://github.com/openstack/fuel-plugin-lma-collector/blob/fd9337b43b6bdae6012f421e22847a1b0307ead0/deployment_tasks.yaml#L123-L139
[1] https://bugs.launchpad.net/lma-toolchain/+bug/1573087
[2]
https://github.com/openstack/fuel-plugin-lma-collector/blob/56ef5c42f4cd719958c4c2ac3fded1b08fe2b90f/deployment_tasks.yaml#L25-L37
[3]
https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/4c5736dadf457b693c30e20d1a2679165ae1155a/deployment_tasks.yaml#L156-L173
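
Concretely, the 'upload_nodes_info' workaround orders both plugins against a task that is always present in the graph. Simplified from [2] and [3], with most task fields trimmed:

```yaml
# Plugin elasticsearch-kibana: finish cluster configuration before the anchor.
- id: elasticsearch-kibana-configuration
  type: puppet
  required_for: [upload_nodes_info]

# Plugin lma-collector: start its backends task only after the anchor.
- id: lma-backends
  type: puppet
  requires: [upload_nodes_info]
```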

On Fri, Jan 29, 2016 at 4:27 PM, Igor Kalnitsky <ikalnit...@mirantis.com>
wrote:

> Hey folks,
>
> Simon P. wrote:
> > 1. Run task X for plugin A (if installed).
> > 2. Run task Y for plugin B (if installed).
> > 3. Run task Z for plugin A (if installed).
>
> Simon, could you please explain why you need this in the first place? I
> can imagine this case only if your two plugins are kinda dependent on
> each other. In this case, it's better to do what was said by Andrew W.
> - set 'Task Y' to require 'Task X' and that requirement will be
> satisfied anyway (even if Task X doesn't exist in the graph).
>
>
> Alex S. wrote:
> > Before we get rid of tasks.yaml, can we provide a mechanism that plugin
> > devs could leverage to have tasks execute at specific points in the
> > deploy process?
>
> Yeah, I think that may be useful sometime. However, I'd prefer to
> avoid anchor usage as much as possible. There's no guarantee that
> another plugin didn't make destructive changes earlier that break
> you later. Anchors are a good way to resolve possible conflicts, but they
> aren't bulletproof.
>
> - igor
>
> On Thu, Jan 28, 2016 at 1:31 PM, Bogdan Dobrelya <bdobre...@mirantis.com>
> wrote:
> > On 27.01.2016 14:44, Simon Pasquier wrote:
> >> Hi,
> >>
> >> I see that tasks.yaml is going to be deprecated in the future MOS
> >> versions [1]. I've got one question regarding the ordering of tasks
> >> between different plugins.
> >> With tasks.yaml, it was possible to coordinate the execution of tasks
> >> between plugins without prior knowledge of which plugins were installed
> [2].
> >> For example, lets say we have 2 plugins: A and B. The plugins may or may
> >> not be installed in the same environment and the tasks execution should
> be:
> >> 1. Run task X for plugin A (if installed).
> >> 2. Run task Y for plugin B (if installed).
> >> 3. Run task Z for plugin A (if installed).
> >>
> >> Right now, we can set task priorities like:
> >>
> >> # tasks.yaml for plugin A
> >> - role: ['*']
> >>   stage: post_deployment/1000
> >>   type: puppet
> >>   parameters:
> >> puppet_manifest: puppet/manifests/task_X.pp
> >> puppet_modules: puppet/modules
> >>
> >> - role: ['*']
> >>   stage: post_deployment/3000
> >>   type: puppet
> >>   parameters:
> >> puppet_manifest: puppet/manifests/task_Z.pp
> >> puppet_modules: puppet/modules
> >>
> >> # tasks.yaml for plugin B
> >> - role: ['*']
> >>   stage: post_deployment/2000
> >>   type: puppet
> >>   parameters:
> >> puppet_manifest: puppet/manifests/task_Y.pp
> >> puppet_modules: puppet/modules
> >>
> >> How would it be handled without tasks.yaml?
> >
> > I created a kinda related bug [0] and submitted a patch [1] to MOS docs
> > [2] to kill some entropy on the topic of tasks schema roles versus
> > groups and using wildcards for basic and custom roles
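
Given Matthew's point quoted earlier in this digest (on Mitaka and newer the task graph skips dependencies that are not found), the tasks.yaml priorities above can be expressed as explicit graph edges in deployment_tasks.yaml, even when the other plugin may not be installed. A hypothetical sketch for plugin B's task Y:

```yaml
# Hypothetical deployment_tasks.yaml for plugin B: order task Y after
# plugin A's task X and before its task Z. If plugin A is absent, the
# unknown edges are skipped on Mitaka and newer (per the quoted reply).
- id: task-y
  type: puppet
  version: 2.0.0
  role: '*'
  requires: [task-x]
  required_for: [task-z]
  parameters:
    puppet_manifest: puppet/manifests/task_Y.pp
    puppet_modules: puppet/modules
```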

Re: [openstack-dev] [fuel][plugins][lma] Leveraging OpenStack logstash grok filters in StackLight?

2016-05-17 Thread Simon Pasquier
The short answer is no. StackLight is based on Heka for log processing and
parsing. Heka itself uses Lua Parsing Expression Grammars [1].
For now the patterns are maintained in the LMA collector repository [2] but
it's on our to-do list to have it available in a dedicated repo.
One advantage of having Lua-based parsing is that it's fairly easy to unit
test the patterns.
BR,
Simon

[1] http://www.inf.puc-rio.br/~roberto/lpeg/lpeg.html
[2]
https://github.com/openstack/fuel-plugin-lma-collector/blob/master/deployment_scripts/puppet/modules/lma_collector/files/plugins/common/patterns.lua

On Tue, May 17, 2016 at 2:23 PM, Bogdan Dobrelya 
wrote:

> Hi.
> Are there plans to align the StackLight (LMA plugin) [0] with that
> recently announced source of Logstash filters [1]? I found no fast info
> if the plugin supports Logstash input log shippers, so I'm just asking
> as well.
>
> Writing grok filters is... hard. I had a sad experience [2] with that
> some time ago, and it's not something I'd like to repeat or maintain on my
> own, so writing those is definitely something that should be done
> collaboratively :)
>
> [0] https://launchpad.net/lma-toolchain
> [1] https://github.com/openstack-infra/logstash-filters
> [2] https://goo.gl/bG6EwX
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
>


Re: [openstack-dev] [fuel] switch to upstream haproxy module

2016-05-12 Thread Simon Pasquier
On Thu, May 12, 2016 at 6:13 PM, Alex Schultz <aschu...@mirantis.com> wrote:

>
>
> On Thu, May 12, 2016 at 10:00 AM, Simon Pasquier <spasqu...@mirantis.com>
> wrote:
>
>> First of all, I'm +1 on this. But as Matt says, it needs to take care of
>> the plugins.
>> A few examples I know of are the Zabbix plugin [1] and the LMA collector
>> plugin [2] that modify the HAProxy configuration of the controller nodes.
>> How could they work with your patch?
>>
>
> So you are leveraging the haproxy on the controller for this
> configuration? I thought I had asked in irc about this and was under the
> impression that you're using your own haproxy configuration on a different
> host(s).  I'll have to figure out an alternative to support plugin haproxy
> configurations as with that patch it would just ignore those configurations.
>

For other plugins, we use dedicated HAProxy nodes but not for these 2 (at
least).
I admit that it wasn't a very good idea but at that time, it was "oh
perfect, /etc/haproxy/conf.d is there, let's use it!". We'll try to think
about a solution on our end too.

Simon


>
> Thanks,
> -Alex
>
>
>> Simon
>>
>> [1]
>> https://github.com/openstack/fuel-plugin-external-zabbix/blob/2.5.0/deployment_scripts/puppet/modules/plugin_zabbix/manifests/ha/haproxy.pp#L16
>> [2]
>> https://github.com/openstack/fuel-plugin-lma-collector/blob/master/deployment_scripts/puppet/manifests/aggregator.pp#L60-L81
>>
>> On Thu, May 12, 2016 at 4:42 PM, Alex Schultz <aschu...@mirantis.com>
>> wrote:
>>
>>>
>>>
>>> On Thu, May 12, 2016 at 8:39 AM, Matthew Mosesohn <
>>> mmoses...@mirantis.com> wrote:
>>>
>>>> Hi Alex,
>>>>
>>>> Collapsing our haproxy tasks makes it a bit trickier for plugin
>>>> developers. We would still be able to control it via hiera, but it
>>>> means more effort for a plugin developer to run haproxy for a given
>>>> set of services, but explicitly exclude all those it doesn't intend to
>>>> run on a custom role. Maybe you can think of some intermediate step
>>>> that wouldn't add a burden to a plugin developer that would want to
>>>> just proxy keystone and mysql, but not nova/neutron/glance/cinder?
>>>>
>>>>
>>> So none of the existing logic has changed around the enabling/disabling
>>> of those tasks within hiera.  The logic remains the same as I'm just
>>> including the osnailyfacter::openstack_haproxy::openstack_haproxy_*
>>> classes[0] within the haproxy task.  The only difference is that the task
>>> logic no longer would control if something was included like sahara.
>>>
>>> -Alex
>>>
>>> [0]
>>> https://review.openstack.org/#/c/307538/9/deployment/puppet/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp
>>>
>>>
>>>> On Thu, May 12, 2016 at 5:34 PM, Alex Schultz <aschu...@mirantis.com>
>>>> wrote:
>>>> > Hey Fuelers,
>>>> >
>>>> > We have been using our own fork of the haproxy module within
>>>> fuel-library
>>>> > for some time. This also includes relying on a MOS specific version of
>>>> > haproxy that carries the conf.d hack.  Unfortunately this has meant
>>>> that
>>>> > we've needed to leverage the MOS version of this package when
>>>> deploying with
>>>> > UCA.  As far as I can tell, there is no actual need to continue to do
>>>> this
>>>> > anymore. I have been working on switching to the upstream haproxy
>>>> module[0]
>>>> > so we can drop this custom haproxy package and leverage the upstream
>>>> haproxy
>>>> > module.
>>>> >
>>>> > In order to properly switch to the upstream haproxy module, we need to
>>>> > collapse the haproxy tasks into a single task. With the migration to
>>>> > leveraging classes for task functionality, this is pretty straight
>>>> forward.
>>>> > In my review I have left the old tasks still in place to make sure to
>>>> not
>>>> > break any previous dependencies, but the old tasks no longer do
>>>> anything.
>>>> > The next step after this initial merge would be to cleanup the
>>>> haproxy code
>>>> > and extract it from the old openstack module.
>>>> >
>>>> > Please be aware that if you were relying on the conf.d method of
>>>> > injecting configurations for haproxy, this will break you. Please
>>>> > speak up now so we can figure out an alternative solution.

Re: [openstack-dev] [fuel] switch to upstream haproxy module

2016-05-12 Thread Simon Pasquier
First of all, I'm +1 on this. But as Matt says, it needs to take care of
the plugins.
A few examples I know of are the Zabbix plugin [1] and the LMA collector
plugin [2] that modify the HAProxy configuration of the controller nodes.
How could they work with your patch?
Simon

[1]
https://github.com/openstack/fuel-plugin-external-zabbix/blob/2.5.0/deployment_scripts/puppet/modules/plugin_zabbix/manifests/ha/haproxy.pp#L16
[2]
https://github.com/openstack/fuel-plugin-lma-collector/blob/master/deployment_scripts/puppet/manifests/aggregator.pp#L60-L81

On Thu, May 12, 2016 at 4:42 PM, Alex Schultz  wrote:

>
>
> On Thu, May 12, 2016 at 8:39 AM, Matthew Mosesohn 
> wrote:
>
>> Hi Alex,
>>
>> Collapsing our haproxy tasks makes it a bit trickier for plugin
>> developers. We would still be able to control it via hiera, but it
>> means more effort for a plugin developer to run haproxy for a given
>> set of services, but explicitly exclude all those it doesn't intend to
>> run on a custom role. Maybe you can think of some intermediate step
>> that wouldn't add a burden to a plugin developer that would want to
>> just proxy keystone and mysql, but not nova/neutron/glance/cinder?
>>
>>
> So none of the existing logic has changed around the enabling/disabling of
> those tasks within hiera.  The logic remains the same as I'm just including
> the osnailyfacter::openstack_haproxy::openstack_haproxy_* classes[0] within
> the haproxy task.  The only difference is that the task logic no longer
> would control if something was included like sahara.
>
> -Alex
>
> [0]
> https://review.openstack.org/#/c/307538/9/deployment/puppet/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp
>
>
>> On Thu, May 12, 2016 at 5:34 PM, Alex Schultz 
>> wrote:
>> > Hey Fuelers,
>> >
>> > We have been using our own fork of the haproxy module within
>> fuel-library
>> > for some time. This also includes relying on a MOS specific version of
>> > haproxy that carries the conf.d hack.  Unfortunately this has meant that
>> > we've needed to leverage the MOS version of this package when deploying
>> with
>> > UCA.  As far as I can tell, there is no actual need to continue to do
>> this
>> > anymore. I have been working on switching to the upstream haproxy
>> module[0]
>> > so we can drop this custom haproxy package and leverage the upstream
>> haproxy
>> > module.
>> >
>> > In order to properly switch to the upstream haproxy module, we need to
>> > collapse the haproxy tasks into a single task. With the migration to
>> > leveraging classes for task functionality, this is pretty straight
>> forward.
>> > In my review I have left the old tasks still in place to make sure to
>> not
>> > break any previous dependencies, but the old tasks no longer do
>> anything.
>> > The next step after this initial merge would be to cleanup the haproxy
>> code
>> > and extract it from the old openstack module.
>> >
>> > Please be aware that if you were relying on the conf.d method of
>> injecting
>> > configurations for haproxy, this will break you. Please speak up now so
>> we
>> > can figure out an alternative solution.
>> >
>> > Thanks,
>> > -Alex
>> >
>> >
>> > [0] https://review.openstack.org/#/c/307538/
>> >
>> >
>> >
>>
>>
>
>
>
>


Re: [openstack-dev] [Fuel][plugins] Changing role regex from '*' to ['/.*/'] breaks MOS compatibility

2016-04-22 Thread Simon Pasquier
Thanks Ilya! We're testing and will report back on Monday.
Simon

On Fri, Apr 22, 2016 at 4:53 PM, Ilya Kutukov  wrote:

> Hello!
>
> I think your problem is related to the:
> https://bugs.launchpad.net/fuel/+bug/1570846
>
> Fix to stable/mitaka was commited 20/04/2016
> https://review.openstack.org/#/c/307658/
>
> Could you please try to apply this patch and reply whether it helps or not.
>
> On Fri, Apr 22, 2016 at 5:40 PM, Guillaume Thouvenin 
> wrote:
>
>> Hello,
>>
>> deployment_tasks.yaml for the fuel-plugin-lma-collector plugin has this
>> task definition:
>>
>> - id: lma-aggregator
>>   type: puppet
>>   version: 2.0.0
>>   requires: [lma-base]
>>   required_for: [post_deployment_end]
>>   role: '*'
>>   parameters:
>> puppet_manifest: puppet/manifests/aggregator.pp
>> puppet_modules: puppet/modules:/etc/puppet/modules
>> timeout: 600
>>
>> It works well with MOS 8. Unfortunately it doesn't work anymore with MOS
>> 9: the task doesn't appear in the deployment graph. The regression seems to
>> be introduced by the computable-task-fields-yaql feature [1].
>>
>> We could use "roles: ['/.*/']" instead of "role: '*' " but then the task
>> is skipped when using MOS 8. We also tried to declare both "roles" and
>> "role" but again this doesn't work.
>>
>> How can we ensure that the same version of the plugin can be deployed on
>> both versions of MOS? Obviously maintaining one Git branch per MOS release
>> is not an option.
>>
>> [1] https://review.openstack.org/#/c/296414/
>>
>> Regards,
>> Guillaume
>>
>>
>>
>>
>
>
>


Re: [openstack-dev] [Fuel][plugins] VIP addresses and network templates

2016-04-20 Thread Simon Pasquier
Many thanks Alexey! That's exactly the information I needed.
Simon

On Wed, Apr 20, 2016 at 1:19 PM, Aleksey Kasatkin <akasat...@mirantis.com>
wrote:

> Hi Simon,
>
> When network template is in use, network roles to endpoints mapping is
> specified in section "roles" (in the template). So, "default_mapping"
> from network role description is overridden in the network template.
> E.g.:
>
> network_assignments:
>     monitoring:
>         ep: br-mon
>     ...
>
> network_scheme:
>     custom:
>         roles:
>             influxdb_vip: br-mon
>         ...
>     ...
>
>
> I hope, this helps.
>
> Regards,
>
>
>
> Aleksey Kasatkin
>
>
> On Wed, Apr 20, 2016 at 12:16 PM, Simon Pasquier <spasqu...@mirantis.com>
> wrote:
>
>> Hi,
>> I've got a question regarding network templates and VIP. Some of our
>> users want to run the StackLight services (eg Elasticsearch/Kibana and
>> InfluxDB/Grafana servers) on a dedicated network (lets call it
>> 'monitoring'). People use network templates [0] to provision this
>> additional network but how can Nailgun allocate the VIP address(es) from
>> this 'monitoring' network knowing that today the plugins specify the
>> 'management' network [1][2]?
>> Thanks for your help,
>> Simon
>> [0]
>> https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates
>> [1]
>> https://github.com/openstack/fuel-plugin-influxdb-grafana/blob/8976c4869ea5ec464e5d19b387c1a7309bed33f4/network_roles.yaml#L4
>> [2]
>> https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/25b79aff9a79d106fc74b33535952d28b0093afb/network_roles.yaml#L2
>>
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [Fuel][plugins] VIP addresses and network templates

2016-04-20 Thread Simon Pasquier
Hi,
I've got a question regarding network templates and VIP. Some of our users
want to run the StackLight services (e.g. Elasticsearch/Kibana and
InfluxDB/Grafana servers) on a dedicated network (let's call it
'monitoring'). People use network templates [0] to provision this
additional network but how can Nailgun allocate the VIP address(es) from
this 'monitoring' network knowing that today the plugins specify the
'management' network [1][2]?
Thanks for your help,
Simon
[0]
https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates
[1]
https://github.com/openstack/fuel-plugin-influxdb-grafana/blob/8976c4869ea5ec464e5d19b387c1a7309bed33f4/network_roles.yaml#L4
[2]
https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/25b79aff9a79d106fc74b33535952d28b0093afb/network_roles.yaml#L2


Re: [openstack-dev] [Fuel] Compatibility of fuel plugins and fuel versions

2016-03-11 Thread Simon Pasquier
Thanks for kicking off the discussion!

On Thu, Mar 10, 2016 at 8:30 AM, Mike Scherbakov 
wrote:

> Hi folks,
> in order to make a decision whether we need to support example plugins,
> and if we actually need them [1], I'd suggest discussing more common things
> about plugins.
>
> My thoughts:
> 1) It is not good that our plugins created for Fuel 8 won't even
> install on Fuel 9. By default, we should assume that plugin will work at
> newer version of Fuel. However, for proper user experience, I suggest to
> create meta-field "validated_against", where plugin dev would provide
> versions of Fuel this plugin has been tested with. Let's say, it was tested
> against 7.0, 8.0. If user installs plugin in Fuel 9, I'd suggest to show a
> warning saying about risks and the fact that the plugin has not been tested
> against 9. We should not restrict installation against 9, though.
>

From a plugin developer's standpoint, this point doesn't worry me too much.
It's fairly easy to hack the metadata.yaml file for supporting a newer
release of Fuel and I suspect that some users already do this.
And I think that it is good that plugin developers explicitly advertise
which Fuel versions the plugin supports.
That being said, I get the need to have something more automatic for CI and
QA purposes. What about having some kind of flag/option (in the Nailgun
API?) that would allow the installation of a plugin even if it is marked as
not compatible with the current release?
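
As a sketch, the proposed field could look like this in a plugin's metadata.yaml (the validated_against key is hypothetical — it does not exist today; fuel_version is the existing compatibility list):

```yaml
# metadata.yaml (hypothetical extension)
name: fuel-plugin-example
version: 1.0.0
# Existing field: releases the plugin declares compatibility with
fuel_version: ['7.0', '8.0', '9.0']
# Proposed field: releases the plugin was actually tested against;
# installing on any other listed release would only trigger a warning
validated_against: ['7.0', '8.0']
```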



>
> 2) We need to keep backward compatibility of the pluggable interface for a
> few releases, so that a plugin developer can use the pluggable interface of
> version x, which was supported in Fuel 6.1. If we still support it, it would
> mean (see next point) compatibility of this plugin with 6.1, 7.0, 8.0, 9.0. If
> we want to deprecate a pluggable interface version, we should announce it and
> basically follow the standard deprecation process.
>

+1 and more.
From my past experience, this is a major issue that complicates the plugin
maintenance. I understand that it is sometimes necessary to make breaking
changes but at least it should be advertised in advance and to a wide
audience. Not all plugin developers monitor the Fuel reviews to track these
changes...


>
> 3) Plugin's ability to work against multiple releases of Fuel
> (multi-release support). If the if..else clauses to support multiple releases
> are fairly minimal, let's say less than 10% of LOC, I'd suggest to
> have this supported. Just because it will be easier for plugin devs to
> support their plugin code (no code duplication, single repo for multiple
> releases).
>

From my experience (and assuming that framework compatibility isn't
broken), this is usually what happens. You need a few if clauses to deal
with the differences between releases N and N+1 but this is manageable.


>
> Thoughts?
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088211.html
> --
> Mike Scherbakov
> #mihgen
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [Fuel] Removing logs from Fuel Web UI and Nailgun

2016-03-11 Thread Simon Pasquier
Hello Roman,

On Fri, Mar 11, 2016 at 9:57 AM, Roman Prykhodchenko  wrote:

> Fuelers,
>
> I remember we’ve discussed this topic in the corridors before, but I’d like
> to bring that discussion to a more official format.
>
> Let me state a few reasons to do this:
>
> - Log management code in Nailgun is overcomplicated
> - Working with logs on large-scale deployments is barely possible given the
> current representation
> - Due to overcomplexity and ineffectiveness of the code we always get
> recurring bugs like [1]. That eats tons of time to resolve.
> - There are much better specialized tools, say Logstash [2], that can deal
> with logs much more effectively.
>
>
> There may be more reasons but I think even the ones already mentioned are
> enough to think about the following proposal:
>
> - Remove Logs tab from Fuel Web UI
> - Remove logs support from Nailgun
> - Create a mechanism that allows configuring different log management
> software, say Logstash, Loggly, etc
> - Choose a default software to install and provide a plugin for it out of
> the box
>

This is what the LMA/StackLight plugins [1][2] are meant for. No need to
develop anything new.

And I'm +1 with the removal of log management from Fuel. As you said, it
can't scale...

[1] http://fuel-plugin-lma-collector.readthedocs.org/en/latest/
[2] http://fuel-plugin-elasticsearch-kibana.readthedocs.org/en/latest/



>
>
> References
> 1.  https://bugs.launchpad.net/fuel/+bug/1553170
> 2. https://www.elastic.co/products/logstash
>
>
> - romcheg
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [fuel][plugins] Should we maintain example plugins?

2016-03-07 Thread Simon Pasquier
Yet another example [1] of why a dummy/example plugin should be integrated
in the Fuel CI process: the current version of Fuel has been broken for
(almost) all plugins for at least a week and no one noticed.
Regards,
Simon

[1] https://bugs.launchpad.net/fuel/+bug/1554095

On Mon, Mar 7, 2016 at 3:16 PM, Simon Pasquier <spasqu...@mirantis.com>
wrote:

> What about maintaining a dummy plugin (eg running only one or two very
> simple tasks) as a standalone project for the purpose of QA?
> IMO it would make more sense than having those example plugins in the
> fuel-plugins project...
> Regards,
> Simon
>
> On Mon, Mar 7, 2016 at 2:49 PM, Igor Kalnitsky <ikalnit...@mirantis.com>
> wrote:
>
>> > and really lowering barriers for people who just begin create plugins.
>>
>> Nonsense. First, people usually create them via running `fpb --create
>> plugin-name` that generates plugin boilerplate. And that boilerplate
>> won't contain those changes.
>>
>> Second, if people ain't smart enough to change a few lines in
>> `metadata.yaml` of generated boilerplate to make it work with the latest
>> Fuel, maybe it's better for them not to develop plugins at all?
>>
>> On Fri, Mar 4, 2016 at 2:24 PM, Stanislaw Bogatkin
>> <sbogat...@mirantis.com> wrote:
>> > +1 to maintain example plugins. It is easy enough and really lowers
>> > barriers for people who are just beginning to create plugins.
>> >
>> > On Fri, Mar 4, 2016 at 2:08 PM, Matthew Mosesohn <
>> mmoses...@mirantis.com>
>> > wrote:
>> >>
>> >> Igor,
>> >>
>> >> It seems you are proposing an IKEA approach to plugins. Take Fuel's
>> >> example plugin, add in the current Fuel release, and then build it. We
>> >> maintained these plugins in the past, but now it would be a manual step
>> >> to test it out on the current release.
>> >>
>> >> What would be a more ideal situation that meets the needs of users and
>> >> QA? Right now we have failed tests until we can decide on a solution
>> >> that works for everybody.
>> >>
>> >> On Fri, Mar 4, 2016 at 1:26 PM, Igor Kalnitsky <
>> ikalnit...@mirantis.com>
>> >> wrote:
>> >> > No, this is a wrong road to go.
>> >> >
>> >> > What if in Fuel 10 we drop v1 plugins support? What should we do?
>> >> > Remove v1 example from source tree? That doesn't seem good to me.
>> >> >
>> >> > Example plugins are only examples. The list of supported releases
>> must
>> >> > be maintained on system test side, and system tests must inject that
>> >> > information into plugin's metadata.yaml and test it.
>> >> >
>> >> > Again, I don't say we shouldn't test plugins. I say, tests should be
>> >> > responsible for preparing plugins. I can say even more: tests should
>> >> > not rely on what is produced by plugins, since it's something that
>> >> > could be changed and tests start failing.
>> >> >
>> >> > On Thu, Mar 3, 2016 at 7:54 PM, Swann Croiset <scroi...@mirantis.com
>> >
>> >> > wrote:
>> >> >> IMHO it is important to keep plugin examples and keep testing them,
>> >> >> very
>> >> >> valuable for plugin developers.
>> >> >>
>> >> >> For example, I've encountered [0] the case where "plugin as role"
>> >> >> feature
>> >> >> wasn't easily testable with fuel-qa because it was not compliant with
>> >> >> the latest plugin data structure,
>> >> >> and more recently we've spotted a regression [1] with
>> "vip-reservation"
>> >> >> feature introduced by a change in nailgun.
>> >> >> These kinds of issues are time-consuming for plugin developers and
>> >> >> can/must
>> >> >> be avoided by testing them.
>> >> >>
>> >> >> I don't even understand why the question is raised while fuel
>> plugins
>> >> >> are
>> >> >> supposed to be supported and more and more used [3], even by murano
>> [4]
>> >> >> ...
>> >> >>
>> >> >> [0] https://bugs.launchpad.net/fuel/+bug/1543962
>> >> >> [1] https://bugs.launchpad.net/fuel/+bug/1551320
>> >> >> [3]
>> >> >>
>> >> >>
>> http://lists.opens

Re: [openstack-dev] [fuel][plugins] Should we maintain example plugins?

2016-03-07 Thread Simon Pasquier
What about maintaining a dummy plugin (eg running only one or two very
simple tasks) as a standalone project for the purpose of QA?
IMO it would make more sense than having those example plugins in the
fuel-plugins project...
Regards,
Simon

On Mon, Mar 7, 2016 at 2:49 PM, Igor Kalnitsky 
wrote:

> > and really lowering barriers for people who just begin create plugins.
>
> Nonsense. First, people usually create them via running `fpb --create
> plugin-name` that generates plugin boilerplate. And that boilerplate
> won't contain those changes.
>
> Second, if people ain't smart enough to change a few lines in
> `metadata.yaml` of generated boilerplate to make it work with the latest
> Fuel, maybe it's better for them not to develop plugins at all?
>
> On Fri, Mar 4, 2016 at 2:24 PM, Stanislaw Bogatkin
>  wrote:
> > +1 to maintain example plugins. It is easy enough and really lowers
> > barriers for people who are just beginning to create plugins.
> >
> > On Fri, Mar 4, 2016 at 2:08 PM, Matthew Mosesohn  >
> > wrote:
> >>
> >> Igor,
> >>
> >> It seems you are proposing an IKEA approach to plugins. Take Fuel's
> >> example plugin, add in the current Fuel release, and then build it. We
> >> maintained these plugins in the past, but now it would be a manual step
> >> to test it out on the current release.
> >>
> >> What would be a more ideal situation that meets the needs of users and
> >> QA? Right now we have failed tests until we can decide on a solution
> >> that works for everybody.
> >>
> >> On Fri, Mar 4, 2016 at 1:26 PM, Igor Kalnitsky  >
> >> wrote:
> >> > No, this is a wrong road to go.
> >> >
> >> > What if in Fuel 10 we drop v1 plugins support? What should we do?
> >> > Remove v1 example from source tree? That doesn't seem good to me.
> >> >
> >> > Example plugins are only examples. The list of supported releases must
> >> > be maintained on system test side, and system tests must inject that
> >> > information into plugin's metadata.yaml and test it.
> >> >
> >> > Again, I don't say we shouldn't test plugins. I say, tests should be
> >> > responsible for preparing plugins. I can say even more: tests should
> >> > not rely on what is produced by plugins, since it's something that
> >> > could be changed and tests start failing.
> >> >
> >> > On Thu, Mar 3, 2016 at 7:54 PM, Swann Croiset 
> >> > wrote:
> >> >> IMHO it is important to keep plugin examples and keep testing them,
> >> >> very
> >> >> valuable for plugin developers.
> >> >>
> >> >> For example, I've encountered [0] the case where "plugin as role"
> >> >> feature
> >> >> wasn't easily testable with fuel-qa because it was not compliant with
> >> >> the latest plugin data structure,
> >> >> and more recently we've spotted a regression [1] with
> "vip-reservation"
> >> >> feature introduced by a change in nailgun.
> >> >> These kinds of issues are time-consuming for plugin developers and
> >> >> can/must
> >> >> be avoided by testing them.
> >> >>
> >> >> I don't even understand why the question is raised while fuel plugins
> >> >> are
> >> >> supposed to be supported and more and more used [3], even by murano
> [4]
> >> >> ...
> >> >>
> >> >> [0] https://bugs.launchpad.net/fuel/+bug/1543962
> >> >> [1] https://bugs.launchpad.net/fuel/+bug/1551320
> >> >> [3]
> >> >>
> >> >>
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/085636.html
> >> >> [4] https://review.openstack.org/#/c/286310/
> >> >>
> >> >> On Thu, Mar 3, 2016 at 3:19 PM, Matthew Mosesohn
> >> >> 
> >> >> wrote:
> >> >>>
> >> >>> Hi Fuelers,
> >> >>>
> >> >>> I would like to bring to your attention a dilemma we have here. It
> seems
> >> >>> that there is a dispute as to whether we should maintain the
> releases
> >> >>> list for example plugins[0]. In this case, this is for adding
> version
> >> >>> 9.0 to the list.
> >> >>>
> >> >>> Right now, we run a swarm test that tries to install the example
> >> >>> plugin and do a deployment, but it's failing only for this reason. I
> >> >>> should add that this is the only automated daily test that will
> verify
> >> >>> that our plugin framework actually works. During the Mitaka
> >> >>> development cycle, we already had an extended period where plugins
> >> >>> were broken[1]. Removing this test (or leaving it permanently red,
> >> >>> which is effectively the same), would raise the risk to any member
> of
> >> >>> the Fuel community who depends on plugins actually working.
> >> >>>
> >> >>> The other impact of abandoning maintenance of example plugins is
> that
> >> >>> it means that a given interested Fuel Plugin developer would not be
> >> >>> able to easily get started with plugin development. It might not be
> >> >>> inherently obvious to add the current Fuel release to the
> >> >>> metadata.yaml file and it would likely discourage such a user. In
> this
> >> >>> case, I would propose that we 

Re: [openstack-dev] [fuel][plugins][lma]

2016-02-29 Thread Simon Pasquier
Hello Shubham,
In the coming version of LMA (0.9), we've already ensured that the
Puppet modules configuring the different components of the LMA toolchain
can be used without Fuel and we're in the process of documenting all of
them as well as how they can be used outside of Fuel. Eventually it should
be feasible to deploy LMA in a MOS environment without using the plugin
framework, although of course the experience will be less smooth than with
the plugins and the Fuel UI. This should be completed in the next few
days/weeks and I'd be very happy to get your feedback on it.
I've got a few questions regarding your deployment:
- Do you want to deploy LMA on top of an environment already deployed by
Fuel or do you want to deploy both OpenStack & LMA without Fuel?
- Which version of LMA and OpenStack do you use?
Thanks in advance,
Simon

On Mon, Feb 29, 2016 at 8:25 AM, Shubham Keyal 
wrote:

> hey all,
>
> Can I use this plugin (LMA) without installing Fuel? I mean, can I install
> it independently? If yes, then how?
>
> Shubham Keyal
> Software Engineer - I
> *M*: +91 9003529711,
> 2nd FLOOR, WEST WING,
> SALARPURIA SUPREME, MARATHAHALLI, BENGALURU
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [Fuel][Plugins] Multi release packages

2016-02-11 Thread Simon Pasquier
Hi,

On Thu, Feb 11, 2016 at 11:46 AM, Igor Kalnitsky 
wrote:

> Hey folks,
>
> The original idea is to provide a way to build plugin that are
> compatible with few releases. It makes sense to me, cause it looks
> awful if you need to maintain different branches for different Fuel
> releases and there's no difference in the sources. In that case, each
> bugfix to deployment scripts requires:
>
> * backport bugfix to other branches (N backports)
> * build new packages for supported releases (N builds)
> * release new packages (N releases)
>
> It's somehow.. annoying.
>

A big +1 on Igor's remark. I've already expressed it in another thread but
it should be expected that plugin developers want to support 2 consecutive
versions of Fuel for a given version of their plugin.
That being said, I've never had issues to do it with the current plugin
framework. Except when Fuel breaks the backward compatibility but it's
another story...

Simon


>
> However, I'm starting to agree that having an all-in-one RPM when deployment
> scripts are different, tasks are different, roles/volumes are
> different, probably isn't a good idea. It basically means that your
> sources are completely different, and that means you have different
> implementations of the same plugin. In that case, in order to avoid
> mess in source tree, it'd be better to separate such implementations
> on VCS level.
>
> But I'd like to hear more opinion from plugin developers.
>
> - Igor
>
> On Thu, Feb 11, 2016 at 9:16 AM, Bulat Gaifullin
>  wrote:
> > I agree with Stas, one rpm - one version.
> >
> > But the plugin builder allows specifying several releases as compatible. The
> > deployment tasks and repositories can be specified per release; at the same
> > time the deployment graph is one for all releases.
> > Currently it looks like a half-implemented feature. Can we drop this
> > feature, or should we finish implementing it?
> >
> >
> > Regards,
> > Bulat Gaifullin
> > Mirantis Inc.
> >
> >
> >
> > On 11 Feb 2016, at 02:41, Andrew Woodward  wrote:
> >
> >
> >
> > On Wed, Feb 10, 2016 at 2:23 PM Dmitry Borodaenko <
> dborodae...@mirantis.com>
> > wrote:
> >>
> >> +1 to Stas, supplanting VCS branches with code duplication is a path to
> >> madness and despair. The dubious benefits of a cross-release backwards
> >> compatible plugin binary are not worth the code and infra technical debt
> >> that such approach would accrue over time.
> >
> >
> > Supporting multiple fuel releases will likely result in madness as
> > discussed, however as we look to support multiple OpenStack releases from
> > the same version of fuel, this methodology becomes much more important.
> >
> >>
> >> On Wed, Feb 10, 2016 at 07:36:30PM +0300, Stanislaw Bogatkin wrote:
> >> > It changes mostly nothing for the case of furious plugin development
> >> > when big parts of the code change from one release to another.
> >> >
> >> > You will have 6 different deployment_tasks directories and 30 slightly
> >> > different files in the root directory of the plugin. Also you forgot about
> >> > the repositories directory (+6 at least), pre_build hooks (also 6) and so
> >> > on.
> >> > It will look like hell after just 3 years of development.
> >> >
> >> > Also I can't imagine how to deal with plugin licensing if you have
> >> > Apache
> >> > for liberty but BSD for mitaka release, for example.
> >> >
> >> > A much easier way to develop a plugin is to keep its source in a VCS
> >> > like Git and just make a branch for every Fuel release. It will give us
> >> > the opportunity not to store a bunch of similar but slightly different
> >> > files in the repo. There is no reason to drag all the different versions
> >> > of code for a specific release.
> >> >
> >> >
> >> > On the other hand there is a pro: your plugin can survive an upgrade
> >> > if it supports the new release, no changes needed here.
> >> >
> >> > On Wed, Feb 10, 2016 at 4:04 PM, Alexey Shtokolov
> >> > 
> >> > wrote:
> >> >
> >> > > Fuelers,
> >> > >
> >> > > We are discussing the idea to extend the multi release packages for
> >> > > plugins.
> >> > >
> >> > > Fuel plugin builder (FPB) can create one rpm-package for all
> supported
> >> > > releases (from metadata.yaml) but we can specify only deployment
> >> > > scripts
> >> > > and repositories per release.
> >> > >
> >> > > Current release definition (in metadata.yaml):
> >> > > - os: ubuntu
> >> > >   version: liberty-8.0
> >> > >   mode: ['ha']
> >> > >   deployment_scripts_path: deployment_scripts/
> >> > >   repository_path: repositories/ubuntu
> >> > >
> >
> >
> > This will result in far too much clutter.
> > For starters we should support nested overrides. For example, the author
> > may have already accounted for the changes between one OpenStack version
> > and another. In this case they should only need to define the releases they
> > 

[openstack-dev] [Fuel][Plugins] question on the is_hotpluggable feature

2016-02-05 Thread Simon Pasquier
Hi,
I'm testing the ability to install Fuel plugins in an environment that is
already deployed.
My starting environment is quite simple: 1 controller + 1 compute. After
the initial deployment, I've installed the 4 LMA plugins:
- LMA collector
- Elasticsearch-Kibana [*]
- InfluxDB-Grafana [*]
- Infrastructure Alerting [*]
[*] adds a new role
Of course, all plugins have "is_hotpluggable: true" in their metadata
definition.
My expectation is that I can add a new node with the new roles and that the
LMA collector tasks are executed for all 3 nodes. So I've added the new
node and clicked the "Deploy changes" button. My re-deployment runs fine but
I notice that the plugins aren't installed on the existing nodes (e.g.
/etc/fuel/plugins/...) so there is no way that the plugin tasks can be
executed on already deployed nodes... Is this a known limitation? Am I
missing something?
Best regards,
Simon


Re: [openstack-dev] [Fuel][Plugins] question on the is_hotpluggable feature

2016-02-05 Thread Simon Pasquier
Thanks Evgeniy.

On Fri, Feb 5, 2016 at 11:07 AM, Evgeniy L <e...@mirantis.com> wrote:

> Hi Simon,
>
> As far as I know it's expected behaviour (at least for the current
> release), and it's expected that the user reruns deployment on the required
> nodes using the fuel CLI, in order to install a plugin on a live environment.
>

Ok. For the record, this means running this command for every node that is
already deployed:
$ fuel node --node-id  --deploy

Any plan to have a nicer experience in future Fuel releases?


> It depends on the specific role, but the "update_required" field may help
> you: it can be added to the role description. Fuel reruns deployment on
> nodes with the roles specified in the list if a new node with the role is
> added to the environment.
>

Nope, it doesn't work for me since it should run for *all* the nodes,
irrespective of their roles. AFAIK update_required doesn't support '*'.


>
> Thanks,
>
> [1]
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L16-L18
>
> On Fri, Feb 5, 2016 at 12:53 PM, Simon Pasquier <spasqu...@mirantis.com>
> wrote:
>
>> Hi,
>> I'm testing the ability to install Fuel plugins in a an environment that
>> is already deployed.
>> My starting environment is quite simple: 1 controller + 1 compute. After
>> the initial deployment, I've installed the 4 LMA plugins:
>> - LMA collector
>> - Elasticsearch-Kibana [*]
>> - InfluxDB-Grafana [*]
>> - Infrastructure Alerting [*]
>> [*] adds a new role
>> Of course, all plugins have "is_hotpluggable: true" in their metadata
>> definition.
>> My expectation is that I can add a new node with the new roles and that
>> the LMA collector tasks are executed for all 3 nodes. So I've added the new
>> node and click the "Deploy changes" button. My re-deployment runs fine but
>> I notice that the plugins aren't installed on the existing nodes (eg
>> /etc/fuel/plugins/...) so there is no way that the plugins tasks can be
>> executed on already deployed nodes... Is this a known limitation? Am I
>> missing something?
>> Best regards,
>> Simon
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [Fuel][Plugins] question on the is_hotpluggable feature

2016-02-05 Thread Simon Pasquier
On Fri, Feb 5, 2016 at 1:54 PM, Igor Kalnitsky <ikalnit...@mirantis.com>
wrote:

> Simon,
>
> > Nope, it doesn't work for me since it should run for *all* the nodes,
> > irrespective of their roles. AFAIK update_required doesn't support '*'.
>
> If your plugin provides a new node role as well as additional tasks
> for other node roles, you may try to workaround that by using
>
>   reexecute_on: [deploy_changes]
>
> task marker. In that case, the task will be executed each time you hit the
> "Deploy Changes" button, so make sure it's an idempotent task.
>

Igor, I don't think that it will solve the issue since the plugin code
isn't copied on the already deployed nodes in the first place. Only 'fuel
node --node-id  --deploy' will do it.
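
For reference, the reexecute_on marker discussed above is set per task in the plugin's deployment tasks file; a hypothetical entry (the task id, manifest paths, and timeout are made up for illustration) could look like:

```yaml
# deployment_tasks.yaml (hypothetical task)
- id: example-collector-configuration
  type: puppet
  role: ['*']
  # Re-run this task every time "Deploy Changes" is hit,
  # so the task must be idempotent
  reexecute_on: [deploy_changes]
  parameters:
    puppet_manifest: puppet/manifests/configure.pp
    puppet_modules: puppet/modules
    timeout: 600
```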


>
> - igor
>
>
> On Fri, Feb 5, 2016 at 1:04 PM, Evgeniy L <e...@mirantis.com> wrote:
> > Simon,
> >
> >>> Any plan to have a nicer experience in future Fuel releases?
> >
> > I haven't heard about any plans on improvements for that, but management
> > team should know better whether it's on roadmap or not.
> >
> > Thanks,
> >
> > On Fri, Feb 5, 2016 at 1:52 PM, Simon Pasquier <spasqu...@mirantis.com>
> > wrote:
> >>
> >> Thanks Evgeniy.
> >>
> >> On Fri, Feb 5, 2016 at 11:07 AM, Evgeniy L <e...@mirantis.com> wrote:
> >>>
> >>> Hi Simon,
> >>>
> >>> As far as I know it's expected behaviour (at least for the current
> >>> release), and it's expected that user reruns deployment on required
> nodes
> >>> using fuel cli, in order to install plugin on a live environment.
> >>
> >>
> >> Ok. For the record, this means running this command for every node that
> is
> >> already deployed:
> >> $ fuel node --node-id  --deploy
> >>
> >> Any plan to have a nicer experience in future Fuel releases?
> >>
> >>>
> >>> It depends on specific role, but "update_required" field may help you,
> it
> >>> can be added to role description, Fuel reruns deployment on nodes with
> >>> roles, which are specified in the list, if new node with the role is
> added
> >>> to the environment.
> >>
> >>
> >> Nope, it doesn't work for me since it should run for *all* the nodes,
> >> irrespective of their roles. AFAIK update_required doesn't support '*'.
> >>
> >>>
> >>>
> >>> Thanks,
> >>>
> >>> [1]
> >>>
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L16-L18
> >>>
> >>> On Fri, Feb 5, 2016 at 12:53 PM, Simon Pasquier <
> spasqu...@mirantis.com>
> >>> wrote:
> >>>>
> >>>> Hi,
> >>>> I'm testing the ability to install Fuel plugins in a an environment
> that
> >>>> is already deployed.
> >>>> My starting environment is quite simple: 1 controller + 1 compute.
> After
> >>>> the initial deployment, I've installed the 4 LMA plugins:
> >>>> - LMA collector
> >>>> - Elasticsearch-Kibana [*]
> >>>> - InfluxDB-Grafana [*]
> >>>> - Infrastructure Alerting [*]
> >>>> [*] adds a new role
> >>>> Of course, all plugins have "is_hotpluggable: true" in their metadata
> >>>> definition.
> >>>> My expectation is that I can add a new node with the new roles and
> that
> >>>> the LMA collector tasks are executed for all 3 nodes. So I've added
> the new
> >>>> node and click the "Deploy changes" button. My re-deployment runs
> fine but I
> >>>> notice that the plugins aren't installed on the existing nodes (eg
> >>>> /etc/fuel/plugins/...) so there is no way that the plugins tasks can
> be
> >>>> executed on already deployed nodes... Is this a known limitation? Am I
> >>>> missing something?
> >>>> Best regards,
> >>>> Simon
> >>>>
> >>>>
> >>>>
> >>>>
> __
> >>>> OpenStack Development Mailing List (not for usage questions)
> >>>> Unsubscribe:
> >>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>>
> >>>
> >>>

Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-02-03 Thread Simon Pasquier
On Tue, Feb 2, 2016 at 5:08 PM, Foley, Emma L <emma.l.fo...@intel.com>
wrote:

> Hi Simon,
>
>
>
> So collectd acts as a statsd server, and the metrics are aggregated and
> dispatched to the collectd daemon.
>
> Collectd’s write plugins then output the stats to wherever we want them to
> go.
>
>
>
> In order to interact with gnocchi using statsd, we require collectd to act
> as a statsd client and dispatch the metrics to gnocchi-statsd service.
>

AFAICT there's no such thing out of the box but it should be fairly
straightforward to implement a StatsD writer using the collectd Python
plugin [1].

Simon

[1] https://collectd.org/documentation/manpages/collectd-python.5.shtml
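
For illustration, a minimal sketch of such a writer (the metric naming scheme, the gauge type, and the StatsD address are assumptions, not a tested plugin):

```python
import socket

STATSD_ADDR = ("127.0.0.1", 8125)  # assumed gnocchi-statsd endpoint


def format_statsd(host, plugin, type_instance, value):
    """Render one value as a StatsD gauge line, e.g. 'node-1.cpu.idle:42|g'."""
    name = ".".join(p for p in (host, plugin, type_instance) if p)
    return "%s:%s|g" % (name, value)


def write_statsd(vl, data=None):
    """collectd write callback: forward each sampled value over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for value in vl.values:
        line = format_statsd(vl.host, vl.plugin, vl.type_instance, value)
        sock.sendto(line.encode("utf-8"), STATSD_ADDR)


try:
    # The collectd module only exists when running inside the collectd daemon
    import collectd
    collectd.register_write(write_statsd)
except ImportError:
    pass
```

Dropped into collectd's Python plugin path and loaded via the Python plugin's `Import` directive, collectd would then invoke `write_statsd` for every dispatched value.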


>
>
> Regards,
>
> Emma
>
>
>
>
>
> *From:* Simon Pasquier [mailto:spasqu...@mirantis.com]
> *Sent:* Monday, February 1, 2016 9:02 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>; Foley, Emma L <emma.l.fo...@intel.com>
> *Subject:* Re: [openstack-dev] [telemetry][ceilometer] New project:
> collectd-ceilometer-plugin
>
>
>
>
>
>
>
> On Fri, Jan 29, 2016 at 6:30 PM, Julien Danjou <jul...@danjou.info> wrote:
>
> On Fri, Jan 29 2016, Foley, Emma L wrote:
>
> > Supporting statsd would require some more investigation, as collectd's
> > statsd plugin supports reading stats from the system, but not writing
> > them.
>
> I'm not sure what that means?
> https://collectd.org/wiki/index.php/Plugin:StatsD seems to indicate it
> can send metrics to a statsd daemon.
>
>
>
> Nope that is the opposite: collectd can act as a statsd server. The man
> page [1] is clearer than the collectd Wiki.
>
> Simon
>
>
> [1]
> https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_statsd
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-02-01 Thread Simon Pasquier
On Fri, Jan 29, 2016 at 6:30 PM, Julien Danjou  wrote:

> On Fri, Jan 29 2016, Foley, Emma L wrote:
>
> > Supporting statsd would require some more investigation, as collectd's
> > statsd plugin supports reading stats from the system, but not writing
> > them.
>
> I'm not sure what that means?
> https://collectd.org/wiki/index.php/Plugin:StatsD seems to indicate it
> can send metrics to a statsd daemon.
>

Nope that is the opposite: collectd can act as a statsd server. The man
page [1] is clearer than the collectd Wiki.

Simon

[1]
https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_statsd


Re: [openstack-dev] [Fuel][Plugins] Tasks ordering between plugins

2016-01-29 Thread Simon Pasquier
On Fri, Jan 29, 2016 at 4:27 PM, Igor Kalnitsky <ikalnit...@mirantis.com>
wrote:

> Hey folks,
>
> Simon P. wrote:
> > 1. Run task X for plugin A (if installed).
> > 2. Run task Y for plugin B (if installed).
> > 3. Run task Z for plugin A (if installed).
>
> Simon, could you please explain do you need this at the first place? I
> can imagine this case only if your two plugins are kinda dependent on
> each other. In this case, it's better to do what was said by Andrew W.
> - set 'Task Y' to require 'Task X' and that requirement will be
> satisfied anyway (even if Task X doesn't exist in the graph).
>

Indeed, I didn't know that it was supported. I had the (false) impression
that required tasks had to exist in the first place.
If this works then it should be ok for me.
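If requirements on absent tasks are indeed tolerated, the ordering could be expressed directly through task requirements. A hedged sketch for plugin B (task ids mirror the tasks.yaml example; the exact schema keys are an assumption on my side):

```yaml
# deployment_tasks.yaml for plugin B (illustrative)
- id: task_y
  type: puppet
  role: ['*']
  requires: [task_x]        # ignored if plugin A (and thus task_x) is absent
  required_for: [task_z]    # likewise ignored if plugin A is not installed
  parameters:
    puppet_manifest: puppet/manifests/task_Y.pp
    puppet_modules: puppet/modules
```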


>
>
> Alex S. wrote:
> > Before we get rid of tasks.yaml can we provide a mechanism for plugin
> > devs could leverage to have tasks executes at specific points in the
> > deploy process.
>
> Yeah, I think that may be useful sometime. However, I'd prefer to
> avoid anchor usage as much as possible. There's no guarantee that
> another plugin didn't make destructive actions earlier that break
> you later. Anchors are a good way to resolve possible conflicts, but they
> aren't bulletproof.
>
> - igor
>
> On Thu, Jan 28, 2016 at 1:31 PM, Bogdan Dobrelya <bdobre...@mirantis.com>
> wrote:
> > On 27.01.2016 14:44, Simon Pasquier wrote:
> >> Hi,
> >>
> >> I see that tasks.yaml is going to be deprecated in the future MOS
> >> versions [1]. I've got one question regarding the ordering of tasks
> >> between different plugins.
> >> With tasks.yaml, it was possible to coordinate the execution of tasks
> >> between plugins without prior knowledge of which plugins were installed
> [2].
> >> For example, let's say we have 2 plugins: A and B. The plugins may or may
> >> not be installed in the same environment and the tasks execution should
> be:
> >> 1. Run task X for plugin A (if installed).
> >> 2. Run task Y for plugin B (if installed).
> >> 3. Run task Z for plugin A (if installed).
> >>
> >> Right now, we can set task priorities like:
> >>
> >> # tasks.yaml for plugin A
> >> - role: ['*']
> >>   stage: post_deployment/1000
> >>   type: puppet
> >>   parameters:
> >> puppet_manifest: puppet/manifests/task_X.pp
> >> puppet_modules: puppet/modules
> >>
> >> - role: ['*']
> >>   stage: post_deployment/3000
> >>   type: puppet
> >>   parameters:
> >> puppet_manifest: puppet/manifests/task_Z.pp
> >> puppet_modules: puppet/modules
> >>
> >> # tasks.yaml for plugin B
> >> - role: ['*']
> >>   stage: post_deployment/2000
> >>   type: puppet
> >>   parameters:
> >> puppet_manifest: puppet/manifests/task_Y.pp
> >> puppet_modules: puppet/modules
> >>
> >> How would it be handled without tasks.yaml?
> >
> > I created a kinda related bug [0] and submitted a patch [1] to MOS docs
> > [2] to kill some entropy on the topic of tasks schema roles versus
> > groups and using wildcards for basic and custom roles from plugins as
> > well. There is also a fancy picture to clarify things a bit. Would be
> > nice to put more details there about custom stages as well!
> >
> > If plugins are not aware of each other, they cannot be strictly ordered
> > like "to be the very last in the deployment" as one and only shall be
> > so. That is why "coordinating the execution of tasks
> > between plugins without prior knowledge of which plugins were installed"
> > looks very confusing for me. Though, maybe wildcards with the "skipped"
> > task type may help to organize things better?
> >
> > Perhaps the Fuel plugins team could answer the question better.
> >
> > [0] https://bugs.launchpad.net/fuel/+bug/1538982
> > [1] https://review.fuel-infra.org/16509
> > [2]
> >
> https://docs.mirantis.com/openstack/fuel/fuel-7.0/reference-architecture.html#task-based-deployment
> >
> >>
> >> Regards,
> >> Simon
> >>
> >> [1] https://review.openstack.org/#/c/271417/
> >> [2]
> https://wiki.openstack.org/wiki/Fuel/Plugins#Plugins_deployment_order
> >>
> >>
> >>

Re: [openstack-dev] [fuel][plugins] Detached components plugin update requirement

2016-01-27 Thread Simon Pasquier
I see no follow-up to Swann's question so let me elaborate why this issue
is important for the LMA plugins.

First I need to explain what our release schedule was for the LMA plugins
during the MOS 7.0 cycle:
- New features were done on the master branch which was only compatible
with MOS 7.0.
- We maintained the stable/0.7 branches of the LMA plugins to remain
compatible with both MOS 6.1 and 7.0. The work was very lightweight like
backporting a few fixes from the master branch (for instance the
metadata.yaml update).

This workflow allows several things for us:
- Ship a point release of the LMA toolchain based on the stable(/0.7)
branch soon after MOS (7.0) is released. This lets users deploy LMA with MOS
7 without waiting for the new LMA version, which is released a few months
after MOS 7.
- Use a well-known version of the LMA toolchain with the MOS version under
development for troubleshooting, performance analysis, longevity testing,
... This one is of great interest for the QA team. If we were to use the
master branch of the LMA plugins, it would dramatically decrease the
stability of the whole.
- Make sure that the LMA toolchain can be deployed with plugins that don't
support the latest MOS version: for instance, we're going to release our
master branch (compatible only with MOS 8) right after MOS GA but other
plugins won't ship a new version before MOS 9 so we need to keep supporting
MOS 7.

Looking at the originating bug description [1], I'm not sure I fully
understand what problem the change is trying to fix and why it's been
backported on stable/8.0. But IMO, the change puts too much burden on
plugin developers. Maintaining several branches of our plugins for every
MOS version is the last thing I want to do.

Regards,
Simon

[1] https://bugs.launchpad.net/fuel/+bug/1508486

On Thu, Jan 21, 2016 at 10:23 AM, Bartlomiej Piotrowski <
bpiotrow...@mirantis.com> wrote:

> Breakage of anything is probably the last thing I intended to achieve with
> that patch. Maybe I misunderstand how tasks dependencies works, let me
> describe *explicit* dependencies I did in tasks.yaml:
>
> hiera requires deploy_start
> hiera is required for setup_repositories
> setup_repositories is required for fuel_pkgs
> setup_repositories requires hiera
> fuel_pkgs requires setup_repositories
> fuel_pkgs is required for globals
>
> Coming from the packaging realm, there is a clear transitive dependency for
> anything that pulls globals task, i.e. if task foo depends on globals, the
> latter pulls fuel_pkgs, which brings setup_repositories in. I'm in favor of
> reverting both patches (master and stable/8.0) if it's going to break
> backwards compatibility, but I really see bigger problem in the way we
> handle task dependencies.
>
> Bartłomiej
>
> On Thu, Jan 21, 2016 at 9:51 AM, Swann Croiset 
> wrote:
>
>> Sergii,
>> I'm also curious, what about plugins which intend to be compatible with
>> both MOS 7 and MOS 8?
>> I've in mind the LMA plugins stable/0.8
>>
>> BR
>>
>> --
>> Swann
>>
>> On Wed, Jan 20, 2016 at 8:34 PM, Sergii Golovatiuk <
>> sgolovat...@mirantis.com> wrote:
>>
>>> Plugin master branch won't be compatible with older versions. Though the
> >>> plugin developer may create a stable branch to have compatibility with older
>>> versions.
>>>
>>>
>>> --
>>> Best regards,
>>> Sergii Golovatiuk,
>>> Skype #golserge
>>> IRC #holser
>>>
>>> On Wed, Jan 20, 2016 at 6:41 PM, Dmitry Mescheryakov <
>>> dmescherya...@mirantis.com> wrote:
>>>
 Sergii,

 I am curious - does it mean that the plugins will stop working with
 older versions of Fuel?

 Thanks,

 Dmitry

 2016-01-20 19:58 GMT+03:00 Sergii Golovatiuk 
 :

> Hi,
>
> Recently I merged the change to master and 8.0 that moves one task
> from Nailgun to Library [1]. Actually, it replaces [2] to allow operator
> more flexibility with repository management.  However, it affects the
> detached components as they will require one more task to add as written 
> at
> [3]. Please adapt your plugin accordingly.
>
> [1]
> https://review.openstack.org/#/q/I1b83e3bfaebecdb8455d5697e320f24fb4941536
> [2]
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L149-L190
> [3] https://review.openstack.org/#/c/270232/1/deployment_tasks.yaml
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>



[openstack-dev] [Fuel][Plugins] Tasks ordering between plugins

2016-01-27 Thread Simon Pasquier
Hi,

I see that tasks.yaml is going to be deprecated in the future MOS versions
[1]. I've got one question regarding the ordering of tasks between
different plugins.
With tasks.yaml, it was possible to coordinate the execution of tasks
between plugins without prior knowledge of which plugins were installed [2].
For example, let's say we have 2 plugins: A and B. The plugins may or may
not be installed in the same environment and the tasks execution should be:
1. Run task X for plugin A (if installed).
2. Run task Y for plugin B (if installed).
3. Run task Z for plugin A (if installed).

Right now, we can set task priorities like:

# tasks.yaml for plugin A
- role: ['*']
  stage: post_deployment/1000
  type: puppet
  parameters:
puppet_manifest: puppet/manifests/task_X.pp
puppet_modules: puppet/modules

- role: ['*']
  stage: post_deployment/3000
  type: puppet
  parameters:
puppet_manifest: puppet/manifests/task_Z.pp
puppet_modules: puppet/modules

# tasks.yaml for plugin B
- role: ['*']
  stage: post_deployment/2000
  type: puppet
  parameters:
puppet_manifest: puppet/manifests/task_Y.pp
puppet_modules: puppet/modules

How would it be handled without tasks.yaml?

Regards,
Simon

[1] https://review.openstack.org/#/c/271417/
[2] https://wiki.openstack.org/wiki/Fuel/Plugins#Plugins_deployment_order


[openstack-dev] [Fuel][Plugins] How do avoid duplicated links in the dashboard?

2016-01-26 Thread Simon Pasquier
Hi all,

In the scope of the LMA plugins, we've played with the new ability to
insert links in the Fuel dashboard. This works fine from the UI standpoint
except that to avoid creating duplicate links we've come up with a solution
that is intricate and brittle IMO.
Basically we have an exec resource that sends a POST request to the Fuel
API and creates a "sentinel" file on the local filesystem if it succeeds
[1]. If the Puppet manifest is re-executed later on, the exec resource
won't be applied again if that file exists.
The problem arises when the node that created the link is re-provisioned or
replaced since it will generate duplicated links eventually.
Has anyone found a better way to manage this?

Regards,
Simon

[1]
https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/b85348aa964964f47dad1b08438e2d803ff20544/deployment_scripts/puppet/manifests/provision_services.pp#L38-L43
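One alternative worth considering is to make the operation idempotent by querying the API for existing links before creating one, instead of relying on a sentinel file. A rough sketch (the `plugin_links` endpoint path and payload are assumptions on my side, not a verified Fuel API):

```python
import json
try:
    from urllib.request import Request, urlopen  # Python 3
except ImportError:
    from urllib2 import Request, urlopen  # Python 2


def needs_link(existing_links, url):
    """Return True if no dashboard entry already points at `url`."""
    return all(link.get("url") != url for link in existing_links)


def ensure_link(api_root, cluster_id, title, url):
    """Create the dashboard link only if it doesn't exist yet."""
    # The endpoint path and payload below are illustrative assumptions.
    endpoint = "%s/clusters/%s/plugin_links" % (api_root, cluster_id)
    existing = json.loads(urlopen(endpoint).read().decode("utf-8"))
    if needs_link(existing, url):
        body = json.dumps({"title": title, "url": url}).encode("utf-8")
        urlopen(Request(endpoint, body, {"Content-Type": "application/json"}))
```

Since the check and the creation both hit the API, re-provisioning a node would no longer produce duplicates (barring a race between two nodes running the task concurrently).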


Re: [openstack-dev] [kolla] Heka v ELK stack logistics

2016-01-15 Thread Simon Pasquier
My 2 cents on RabbitMQ logging...

On Fri, Jan 15, 2016 at 8:39 AM, Michal Rostecki 
wrote:

I'd suggest to check the similar options in RabbitMQ and other
> non-OpenStack components.
>

AFAICT RabbitMQ can't log to syslog anyway. But you have the option to make
RabbitMQ log to stdout [1].
BR,
Simon.
[1] http://www.superpumpup.com/docker-rabbitmq-stdout


Re: [openstack-dev] [kolla] Introduction of Heka in Kolla

2016-01-12 Thread Simon Pasquier
Hello Alicja,

Comments inline.

On Tue, Jan 12, 2016 at 1:19 PM, Kwasniewska, Alicja <
alicja.kwasniew...@intel.com> wrote:

> Unfortunately I do not have any experience in working or testing Heka, so
> it’s hard for me to compare its performance vs Logstash performance.
> However I’ve read that Heka possess a lot advantages over Logstash in this
> scope.
>
>
> But which version of Logstash did you test? One guy from the Logstash
> community said that: *“The next release of logstash (1.2.0 is in beta)
> has a 3.5x improvement in event throughput. For numbers: on my workstation
> at home (6 vcpu on virtualbox, host OS windows, 8 GB ram, host cpu is
> FX-8150) - with logstash 1.1.13, I can process roughly 31,000 events/sec
> parsing apache logs. With logstash 1.2.0.beta1, I can process 102,000
> events/sec.”*
>
>
> You also said that Heka is a unified data processing tool, but do we need this?
> Heka seems to address stream processing needs, while Logstash focuses
> mainly on processing logs. We want to create a central logging service, and
> Logstash was created especially for it and seems to work well for this
> application.
>
>
> One thing that is obvious is the fact that the Logstash is better known,
> more popular and tested. Maybe it has some performance disadvantages, but
> at least we know what we can expect from it. Also, it has more pre-built
> plugins and has a lot examples of usage, while Heka doesn’t have many of
> them yet and is nowhere near the range of plugins and integrations provided
> by Logstash.
>

From my experience, Heka already has a large number of plugins that cover
most of the use cases. But I understand your concerns regarding the
adoption of Heka vs Logstash.


>
>
> In the case of adding plugins, I’ve read that in order to add Go plugins,
> the binary has to be recompiled, what is a little bit frustrating (static
> linking - to wire in new plugins, have to recompile). On the other hand,
> the Lua plugins do not require it, but the question is whether Lua plugins
> are sufficient? Or maybe adding Go plugins is not so bad?
>

For the reason you pointed out, Lua plugins are first-class citizens and
the Heka developers encourage their use over writing custom Go plugins. In
terms of performance, Lua and Go plugins are usually equivalent.


>
>
> You also said that you didn’t test the Heka with Docker, right? But do you
> have any experience in setting up Heka in a Docker container? I saw that with
> Heka 0.8.0 new Docker features were implemented (including Dockerfiles to
> generate Heka Docker containers for both development and deployment), but
> did you test it? If you didn’t, we could not be sure whether there are any
> issues with it.
>

From my experience, Heka runs in Docker without problems.


>
>
> Moreover you will have to write your own Dockerfile for Heka that inherits
> from Kolla base image (as we discussed during last meeting, we would like
> to have our own images), you won’t be able to inherit from
> ianneub/heka:0.10 as specified in the link that you sent
> http://www.ianneubert.com/wp/2015/03/03/how-to-use-heka-docker-and-tutum/.
>

Since the Heka binary embeds all its dependencies, writing the Dockerfile
shouldn't be hard.


>
>
> There are also some issues with DockerInput Module which you want to use.
> For example splitters are not available in DockerInput (
> https://github.com/mozilla-services/heka/issues/1643). I can’t say that
> it will affect us, but we also don’t know which new issues may arise during
> first tests, as none of us has ever tried Heka in and with Docker.
>

Good point. This should be investigated by Eric in his specification.


>
>
> I am not attached to any specific solution, however I'm just not sure whether
> Heka won’t surprise us with something hard to solve, configure, etc.
>

I just wanted to mention that Heka powers the Firefox Telemetry Data
Pipeline [1] which collects and processes a lot of data.

Simon

[1] https://people.mozilla.org/~rmiller/heka-monitorama-2015-06/#/41


>
>
>
> * Alicja Kwaśniewska*
>
>
>
> *From:* Sam Yaple [mailto:sam...@yaple.net]
> *Sent:* Monday, January 11, 2016 11:37 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [kolla] Introduction of Heka in Kolla
>
>
>
> Here is why I am on board with this. As we have discovered, the logging
> with the syslog plugin leaves a lot to be desired. It (to my understanding)
> still can't save tracebacks/stacktraces to the log files for whatever
> reason. stdout/stderr however works perfectly fine. That said the Docker
> log stuff has been a source of pain in the past, but it has gotten better.
> It does have the limitation of being only able to log one output at a time.
> This means, as an example, the neutron-dhcp-agent could send its logs to
> stdout/err but the dnsmasq process that it launch (that also has logs)
> would have to mix its logs in with the neutron logs in stdout/err. Can Heka
> handle this and separate them 

Re: [openstack-dev] [fuel] Using upstream packages & modules

2015-11-10 Thread Simon Pasquier
Hello Alex!

On Mon, Nov 9, 2015 at 9:29 PM, Alex Schultz  wrote:

> Hey folks,
>
> I'm testing[0] out flipping our current method of consuming upstream
> puppet modules from using pinned versions hosted on fuel-infra to be
> able to use the ones directly from upstream (master).  This work is
> primarily to be closer aligned with the other OpenStack projects as
> well as switching the current way we manage Fuel into a downstream of
> the upstream community version. As part of this work we have also been
> working towards improving the upstream modules support different
> package sets.  Specifically running Debian packages on Ubuntu[1][2].
> This work is the start of being able to allow a user of Fuel to be
> able to specify a specific package set and having it be able to work.
> If we can properly split out the puppet modules and package
> dependencies this will make Fuel a more flexible deployment engine as
> I believe we would be better positioned to support multiple versions
> of OpenStack for a given Fuel release.
>
> I'm currently working to get a PoC of Fuel consuming upstream puppet
> modules and the UCA packages together and documenting all of the
> issues so that we can address them.  So far I have been able to deploy
> the upstream modules via a custom ISO using the MOS package set and it
> works locally. Unfortunately the CI seems to be hitting some issues
> that I think might be related to recently merged keystone changes but
> I did not run into the same problem when running a manual deployment.
> As I work through this PoC, I'm also attempting to develop a small
> plugin that could be used to capture the workarounds to the
> deployment process. I've run into a few issues so far as I work to
> switch out the package sets.
>
> For the sake of providing additional visibility into this work, here
> are the issues that I've hit so far.
>
> The first issue I ran across is that currently the MOS repositories
> contain packages for both OpenStack and other system dependencies for
> creating our HA implementation.  This is problematic when we want to
> switch out the OpenStack packages but still want the MOS packages for
> our HA items.  I'm working around this by adding the UCA repository at
> a higher priority for deployments.  As such I've run into an issue
> with the haproxy package that MOS provides vs the upstream Ubuntu
> package.  To get around this, I've pinned the MOS version for now
> until I can circle back around and figure out if the difference is a
> config or functionality issue.
>

I know of at least one difference between the MOS and Ubuntu versions:
HAProxy from MOS has a patch to support the "include" configuration
parameter.
This is unrelated to your mail but I think this fork should die since it
will never be accepted upstream and there are other ways to address the use
case [0].

HTH,
Simon

[0] http://marc.info/?l=haproxy=130817444025140=2


>
> The second issue that I have run across is that we are appending
> read_timeout=60 to our mysql connection strings for our
> configurations. This seems to be unsupported by the libraries and
> OpenStack components provided by the UCA package set.  I'm not sure
> how the priority of python-pymysql vs python-mysqldb is resolved as I
> had both packages installed but it continued to fail on read_timeout
> being in the connection string.  For now I've updated the fuel-library
> code to remove this item from the connection strings and will be
> circling back around to figure out the correct 'fix' for this issue.
>
> I'm hoping to be able to have a working ISO, plugin and a set of
> instructions that can be used to deploy a basic cloud using Fuel by
> the end of this week.
>
> Thanks,
> -Alex
>
> [0] https://review.openstack.org/#/c/240325/
> [1] https://review.openstack.org/#/c/241615/
> [2] https://review.openstack.org/#/c/241741/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Fuel][Plugins] apt segfaults when too many repositories are configured

2015-11-09 Thread Simon Pasquier
FWIW, we tried to reproduce the bug on fresh environments and failed at it
(in other words, the deployment succeeds). We've also noticed that the
vmware-dvs plugin team has encountered the same bug [0]. If they can't
manage to reproduce the issue either, my guess would be that we faced a
transient problem with the remote package repositories.
BR,
Simon
[0] https://bugs.launchpad.net/fuel-plugins/+bug/1514043

On Fri, Nov 6, 2015 at 11:38 AM, Simon Pasquier <spasqu...@mirantis.com>
wrote:

> Hello,
>
> While testing LMA with MOS 7.0, we got apt-get crashing and failing the
> deployment. The details are in the LP bug [0], the TL;DR version is that
> when more repositories are added (hence more packages), there is a risk
> that apt-get commands fail badly when trying to remap memory.
>
> The core issue should be fixed in apt or glibc but in the meantime,
> increasing the APT::Cache-Start value makes the issue go away. This is what
> we're going to do with the LMA plugin but since it's independent of LMA,
> maybe it needs to be addressed at the Fuel level?
>
> BR,
> Simon
>
> [0] https://bugs.launchpad.net/lma-toolchain/+bug/1513539
>


[openstack-dev] [Fuel][Plugins] apt segfaults when too many repositories are configured

2015-11-06 Thread Simon Pasquier
Hello,

While testing LMA with MOS 7.0, we got apt-get crashing and failing the
deployment. The details are in the LP bug [0], the TL;DR version is that
when more repositories are added (hence more packages), there is a risk
that apt-get commands fail badly when trying to remap memory.

The core issue should be fixed in apt or glibc but in the meantime,
increasing the APT::Cache-Start value makes the issue go away. This is what
we're going to do with the LMA plugin but since it's independent of LMA,
maybe it needs to be addressed at the Fuel level?
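For anyone hitting the same crash, the workaround can be shipped as an apt configuration drop-in along these lines (the file name and the exact value are illustrative, not a tuned recommendation):

```
# /etc/apt/apt.conf.d/99-cache-start (file name is illustrative)
APT::Cache-Start "50331648";
```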

BR,
Simon

[0] https://bugs.launchpad.net/lma-toolchain/+bug/1513539


Re: [openstack-dev] [Fuel][QA][Plugins] Move functional tests from fuel-qa to the plugins

2015-10-21 Thread Simon Pasquier
Mike, thanks for the clarification!
I've filed a bug against fuel-qa [0] and submitted a patch [1]. Note that
after a quick look, many Fuel projects have the same issue with the format
of the MAINTAINERS file. Do you think we need one bug per project or do we
piggy-back on the fuel-qa bug?
BR,
Simon
[0] https://bugs.launchpad.net/fuel/+bug/1508449
[1] https://review.openstack.org/#/c/238039/

On Wed, Oct 21, 2015 at 8:11 AM, Mike Scherbakov <mscherba...@mirantis.com>
wrote:

> Nastya,
> according to the template I provided initially [1], the format in fuel-qa is
> invalid. I've requested to support only one format [2].
> File must always have a folder. If you want to cover the whole repo, then
> the right structure would be
>
> maintainers:
> - ./:
>   - name:  ...
>     email: ...
>     IRC:   ...
> e.g. you'd just refer to the current folder, which should be the root of the
> repo by default.
> Simon is making a valid request: if you add his folder in the file, he
> will always be added to the review request by a script, once it's
> implemented. Only in the case when contribution is made to his particular
> area of responsibility.
>
> [1] https://github.com/openstack/fuel-web/blob/master/MAINTAINERS
> [2] https://bugs.launchpad.net/fuel/+bug/1497655
>
> On Tue, Oct 20, 2015 at 11:03 PM Anastasia Urlapova <
> aurlap...@mirantis.com> wrote:
>
>> Simon,
>> structure of fuel-web repo is much more complex than fuel-qa, ~ 50 active
>> contributors work with fuel-web.
>> There is functionality for the different Fuel domains and each domain
>> requires its own expertise, so maintenance is divided by folders.
>> In the case of fuel-qa, the maintainers review the whole repository, so the
>> structure of the file [0] is correct.
>>
>>
>> Nastya.
>> [0] https://github.com/openstack/fuel-qa/blob/master/MAINTAINERS
>>
>> On Wed, Oct 21, 2015 at 2:15 AM, Mike Scherbakov <
>> mscherba...@mirantis.com> wrote:
>>
>>> Simon,
>>> I believe that it's a mistake in fuel-qa. Valid structure is in
>>> fuel-web. Please fix the one in fuel-qa.
>>>
>>> I'm also looking forward for automated adding of people to review
>>> requests based on this file. Here is the task to track it:
>>> https://bugs.launchpad.net/fuel/+bug/1497655
>>>
>>> On Tue, Oct 20, 2015 at 2:10 AM Simon Pasquier <spasqu...@mirantis.com>
>>> wrote:
>>>
>>>> Thanks for the reply, Andrew! I must admit that I haven't read
>>>> thoroughly the specification on the new team structure [1]. IIUC plugin
>>>> developers should be added to the MAINTAINERS file of fuel-qa for the
>>>> directories that concern their plugins. If I take LMA as an example, this
>>>> would be:
>>>> fuelweb_test/tests/plugins/plugin_elasticsearch
>>>> fuelweb_test/tests/plugins/plugin_lma_collector
>>>> fuelweb_test/tests/plugins/plugin_lma_infra_alerting
>>>>
>>>> Is that right?
>>>>
>>>> I can submit a change to fuel-qa for adding the LMA team to the
>>>> MAINTAINERS file but I can't figure out the structure of the YAML data:
>>>> fuel-web/MAINTAINERS [2] is organized as "{directory1: [maintainer1,
>>>> maintainer2, ...], directory2: [...], ...}" while for fuel-qa [3] (and
>>>> other Fuel projects), it's "[maintainer1, maintainer2, ...]".
>>>>
>>>> BR,
>>>> Simon
>>>>
>>>> [1]
>>>> http://specs.fuel-infra.org/fuel-specs-master/policy/team-structure.html
>>>> [2] https://github.com/openstack/fuel-web/blob/master/MAINTAINERS
>>>> [3] https://github.com/openstack/fuel-qa/blob/master/MAINTAINERS
>>>>
>>>>
>>>> On Sat, Oct 17, 2015 at 2:21 AM, Andrew Woodward <xar...@gmail.com>
>>>> wrote:
>>>>
>>>>> We have already discussed this to be a result of describing data
>>>>> driven testing, untill this spec is completed there is little sense to
>>>>> remove all of these since fuel-qa is 100% required to operate this way. In
>>>>> the interim we should just specify the appropriate SME with the 
>>>>> MAINTAINERS
>>>>> file.
>>>>>
>>>>> On Fri, Oct 16, 2015 at 11:34 AM Sergii Golovatiuk <
>>>>> sgolovat...@mirantis.com> wrote:
>>>>>
>>>>>> Tests should be in plugin
>>>>>>
>>>>>> --
>>>>>> Best regards,
>>>>>

Re: [openstack-dev] [Fuel][QA][Plugins] Move functional tests from fuel-qa to the plugins

2015-10-20 Thread Simon Pasquier
Thanks for the reply, Andrew! I must admit that I haven't read thoroughly
the specification on the new team structure [1]. IIUC plugin developers
should be added to the MAINTAINERS file of fuel-qa for the directories that
concern their plugins. If I take LMA as an example, this would be:
fuelweb_test/tests/plugins/plugin_elasticsearch
fuelweb_test/tests/plugins/plugin_lma_collector
fuelweb_test/tests/plugins/plugin_lma_infra_alerting

Is that right?

I can submit a change to fuel-qa for adding the LMA team to the MAINTAINERS
file but I can't figure out the structure of the YAML data:
fuel-web/MAINTAINERS [2] is organized as "{directory1: [maintainer1,
maintainer2, ...], directory2: [...], ...}" while for fuel-qa [3] (and
other Fuel projects), it's "[maintainer1, maintainer2, ...]".
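For clarity, the two shapes I'm seeing look roughly like this (entries are illustrative placeholders):

```yaml
# fuel-web/MAINTAINERS: maintainers keyed by directory
maintainers:
- nailgun/:
  - name:  ...
    email: ...

# fuel-qa/MAINTAINERS: flat list covering the whole repository
maintainers:
- name:  ...
  email: ...
```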

BR,
Simon

[1] http://specs.fuel-infra.org/fuel-specs-master/policy/team-structure.html
[2] https://github.com/openstack/fuel-web/blob/master/MAINTAINERS
[3] https://github.com/openstack/fuel-qa/blob/master/MAINTAINERS

On Sat, Oct 17, 2015 at 2:21 AM, Andrew Woodward <xar...@gmail.com> wrote:

> We have already discussed this to be a result of describing data driven
> testing, until this spec is completed there is little sense to remove all
> of these since fuel-qa is 100% required to operate this way. In the interim
> we should just specify the appropriate SME with the MAINTAINERS file.
>
> On Fri, Oct 16, 2015 at 11:34 AM Sergii Golovatiuk <
> sgolovat...@mirantis.com> wrote:
>
>> Tests should be in plugin
>>
>> --
>> Best regards,
>> Sergii Golovatiuk,
>> Skype #golserge
>> IRC #holser
>>
>> On Fri, Oct 16, 2015 at 5:58 PM, Simon Pasquier <spasqu...@mirantis.com>
>> wrote:
>>
>>> Hello Alexey,
>>>
>>> On Fri, Oct 16, 2015 at 5:35 PM, Alexey Elagin <aela...@mirantis.com>
>>> wrote:
>>>
>>>> Hello Simon!
>>>>
>>>> We are going to remove the plugins' functional tests from fuel-qa because
>>>> these tests aren't used in our plugins' CI process.
>>>>
>>>
>>> And where are the existing tests going to be stored then?
>>>
>>> Thanks,
>>> Simon
>>>
>>>
>>>>
>>>>
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][QA][Plugins] Move functional tests from fuel-qa to the plugins

2015-10-16 Thread Simon Pasquier
Hello Fuelers!

I'd like to discuss something that I feel is important to improve the
quality of the Fuel plugins. Currently, the functional tests for Fuel
plugins are located in the fuel-qa project [1]. IMO this isn't viable in
the (mid) long term since:
- The fuel-qa cores have little knowledge about what a particular plugin is
supposed to do, so they mostly focus on code style.
- It increases the review load on the fuel-qa team.
- It doesn't encourage plugin developers to extend their test coverage
because it takes time to get a patch approved (no blame on the fuel-qa team
here).
- The fuel-qa repository gets cluttered with tests that aren't about the
Fuel core.

At some point, it was discussed that these functional tests should live in
the plugin's project itself. Is that still an option? What's missing to
make that happen?

FWIW, I would be more than happy to help kicking this off.

BR,
Simon

[1]
https://github.com/stackforge/fuel-qa/tree/2b7ce18e799d7096e589083246a2699c0cd6912e/fuelweb_test/tests/plugins
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][QA][Plugins] Move functional tests from fuel-qa to the plugins

2015-10-16 Thread Simon Pasquier
Hello Alexey,

On Fri, Oct 16, 2015 at 5:35 PM, Alexey Elagin  wrote:

> Hello Simon!
>
> We are going to remove the plugins' functional tests from fuel-qa because
> these tests aren't used in our plugins' CI process.
>

And where are the existing tests going to be stored then?

Thanks,
Simon


>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Plugins] request for update of fuel-plugin-builder on pypi

2015-09-09 Thread Simon Pasquier
Hi,
It would be cool if fuel-plugin-builder (fpb) v3.0.0 could be released on
pypi. We've moved some of the LMA plugins to use the v3 format.
Right now we have to install fpb from source which is hard to automate in
our tests unfortunately (as already noted by Sergii [1]).
BR,
Simon
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070781.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] add health check for plugins

2015-08-10 Thread Simon Pasquier
Hello Samuel,
This looks like an interesting idea. Do you have any concrete example to
illustrate your point (with one of your plugins maybe)?
BR,
Simon

On Mon, Aug 10, 2015 at 12:04 PM, Samuel Bartel samuel.bartel@gmail.com
 wrote:

 Hi all,

 Actually, with Fuel plugins there are tests for the plugins used by the
 CI/CD, but after a deployment it is not possible for the user to easily test
 whether a plugin is correctly deployed or not.
 I am wondering if it could be interesting to improve the Fuel plugin
 framework in order to be able to define tests for each plugin, which would be
 added to the Health Check. The user would then be able to test the plugin when
 testing the deployment.

 What do you think about that?


 Kind regards

 Samuel

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] Monasca mid-cycle meetup

2015-07-22 Thread Simon Pasquier
Hi,
I've had a quick look at the Etherpad which mentions Ceilosca. The name
is quite intriguing but I didn't find any reference to it in the Monasca
Wiki. Could you tell us a bit more about it? Does it mean that Monasca
plans to expose an API that would be compatible with the Ceilometer API?
BR,
Simon

On Wed, Jul 22, 2015 at 8:08 PM, Hochmuth, Roland M roland.hochm...@hp.com
wrote:

 The Monasca mid-cycle meet up will be held at the HP campus in Fort
 Collins, CO from August 5-6. Further details on the location, time and
 tentative agenda can be found at

 https://etherpad.openstack.org/p/monasca_liberty_mid_cycle

 Regards --Roland

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-library] Using librarian-puppet to manage upstream fuel-library modules

2015-07-10 Thread Simon Pasquier
Alex, could you enable the comments for all on your document?
Thanks!
Simon

On Thu, Jul 9, 2015 at 11:07 AM, Bogdan Dobrelya bdobre...@mirantis.com
wrote:

  Hello everyone,
 
  I took some time this morning to write out a document[0] that outlines
  one possible ways for us to manage our upstream modules in a more
  consistent fashion. I know we've had a few emails bouncing around
  lately around this topic of our use of upstream modules and how can we
  improve this. I thought I would throw out my idea of leveraging
  librarian-puppet to manage the upstream modules within our
  fuel-library repository. Ideally, all upstream modules should come
  from upstream sources and be removed from the fuel-library itself.
  Unfortunately because of the way our repository sits today, this is a
  very large undertaking and we do not currently have a way to manage
  the inclusion of the modules in an automated way. I believe this is
  where librarian-puppet can come in handy and provide a way to manage
  the modules. Please take a look at my document[0] and let me know if
  there are any questions.
 
  Thanks,
  -Alex
 
  [0]
 https://docs.google.com/document/d/13aK1QOujp2leuHmbGMwNeZIRDr1bFgJi88nxE642xLA/edit?usp=sharing

 The document is great, Alex!
 I fully support the idea of starting to adapt fuel-library to
 the suggested scheme. The monitoring feature of librarian looks
 non-intrusive and we have no blockers to start using librarian
 immediately.

 --
 Best regards,
 Bogdan Dobrelya,
 Irc #bogdando

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer][Gnocchi] question on integration with time-series databases

2015-06-16 Thread Simon Pasquier
Hi,

Originally, I posted this question on the review [0] that adds InfluxDB
support to Gnocchi but Julien felt that it wasn't relevant in the scope of
the review. Still I think that it deserves some discussion...

The current implementation of the InfluxDB driver for Gnocchi doesn't
follow the recommendations for InfluxDB 0.9 [1] as it doesn't use tags at
all. As a result, each metric will be stored in an individual series which
makes aggregation across metrics suboptimal from the InfluxDB point of
view. With tags properly implemented, a query like 'return the cpu.util
measures for this group of servers in this given interval' is only one
InfluxDB query while it would result in N queries with the proposed change.
In fact, the same issue can be seen in the OpenTSDB [2] and KairosDB [3]
reviews too. And my guess is that all production-grade backends will
provide the same type of semantic on metrics (call it tags, labels or
dimensions).
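To make the point concrete, here is a rough InfluxQL sketch of the two schemas (measurement, tag and series names are invented for illustration):

```sql
-- Without tags (current driver): one series per metric, so "cpu.util for
-- this group of servers" becomes N separate queries.
SELECT mean(value) FROM "cpu.util.metric-uuid-1" WHERE time > now() - 1h;
SELECT mean(value) FROM "cpu.util.metric-uuid-2" WHERE time > now() - 1h;

-- With tags (InfluxDB 0.9 recommendation): one measurement, one query.
-- Points would be written as: cpu.util,server_group=web,host=vm1 value=42.5
SELECT mean(value) FROM "cpu.util"
WHERE server_group = 'web' AND time > now() - 1h
GROUP BY host;
```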

Julien's answer to this was:

There's no point in talking about optimizing a driver until it's
implemented. For now, neither InfluxDB or Kairos nor OpenTSDB drivers are
ready for Gnocchi. Once they are, we'll be able to talk about changing the
implementation of the storage/driver API to leverage their abilities such
as tags.

I'm still struggling to see how these optimizations would be implemented
since the current Gnocchi design has separate backends for indexing and
storage which means that datapoints (id + timestamp + value) and metric
metadata (tenant_id, instance_id, server group, ...) are stored into
different places. I'd be interested to hear from the Gnocchi team how this
is going to be tackled. For instance, does it imply modifications or
extensions to the existing Gnocchi API?

BR,
Simon

[0] https://review.openstack.org/#/c/165407/
[1] http://influxdb.com/docs/v0.9/concepts/schema_and_data_layout.html
[2] https://review.openstack.org/#/c/107986
[3] https://review.openstack.org/#/c/159476
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Plugins] type available

2015-05-28 Thread Simon Pasquier
Hello Samuel,
AFAIK there is no such feature yet. The schema validators are here [1] [2].
BR,
Simon

[1]
https://github.com/stackforge/fuel-plugins/blob/master/fuel_plugin_builder/fuel_plugin_builder/validators/schemas/base.py
[2]
https://github.com/stackforge/fuel-plugins/blob/master/fuel_plugin_builder/fuel_plugin_builder/validators/schemas/v2.py

On Thu, May 28, 2015 at 1:02 PM, Samuel Bartel samuel.bartel@gmail.com
wrote:

 Hi folks,

 Is there any way, in the environment_config.yaml file of the plugin, to
 define some attributes as titles (something similar to the metadata in
 the openstack.yaml of fuel-web)?

 I would like to add some titles in the plugin config in order to define
 sections in the UI for the plugin and make it easier to read.

 something like
 section1

 param1: value description
 param2: value description

 section3

 param3: value description
 param4: value description

 I have tried different types and I have always got errors from fpb when
 trying to build the plugin.
 In the meantime, I haven't been able to find the information in the
 validators used by fpb.

 any tips?

 Samuel

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][cloudpulse] Announcing a project to HealthCheck OpenStack deployments

2015-05-13 Thread Simon Pasquier
On Wed, May 13, 2015 at 3:27 PM, David Kranz dkr...@redhat.com wrote:

  On 05/13/2015 09:06 AM, Simon Pasquier wrote:

   Hello,

  Like many others commented before, I don't quite understand how unique
  the CloudPulse use cases are.

 For operators, I got the feeling that existing solutions fit well:
  - Traditional monitoring tools (Nagios, Zabbix, ...) are necessary anyway
 for infrastructure monitoring (CPU, RAM, disks, operating system, RabbitMQ,
 databases and more) and diagnostic purposes. Adding OpenStack service
 checks is fairly easy if you already have the toolchain.

  Is it really so easy? RabbitMQ has an aliveness test that is easy to
  hook into. I don't know exactly what it does, other than what the doc says,
  but I should not have to. If I want my standard monitoring system to call
  into a cloud and ask "is nova healthy?", "is glance healthy?", etc., are
  there such calls?


Regarding the RabbitMQ aliveness test, it has its own limits (more on that
later, I've got an interesting RabbitMQ outage that I'm going to discuss
in a new thread) and it doesn't replicate exactly what the clients (eg
OpenStack services) are doing.

Regarding the service checks, there are already plenty of scripts that
exist for Nagios, Collectd and so on. Some of them are listed in the Wiki
[1].


 There are various sets of calls associated with nagios, zabbix, etc. but
 those seem like after-market parts for a car. Seems to me the services
 themselves would know best how to check if they are healthy, particularly
  as that could change version to version. Has there been discussion of
  adding a health-check (admin) API in each service? Lacking that, is there
 documentation from any OpenStack projects about how to check the health of
 nova? When I saw this thread start, that is what I thought it was going to
 be about.


Starting with Kilo, you could configure your OpenStack API services with
the healthcheck middleware [2]. This has been inspired by what Swift's been
doing for some time now [3]. IIUC the default healthcheck is minimalist and
doesn't check that dependent services (like RabbitMQ, database) are healthy
but the framework is extensible and more healthchecks can be added.
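As a rough sketch (the option names should be checked against the oslo.middleware docs, and the pipeline below is illustrative, not a real one), wiring it into an API service is a matter of adding a paste filter in api-paste.ini:

```ini
[filter:healthcheck]
paste.filter_factory = oslo_middleware:Healthcheck.factory
# optional backend: report the service as down while this file exists
backends = disable_by_file
disable_by_file_path = /etc/nova/healthcheck_disable

[pipeline:public_api]
# place healthcheck first so probes are answered without hitting auth
pipeline = healthcheck authtoken public_app
```

A load balancer or monitoring system can then poll GET /healthcheck and act on the 200/503 answer.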



  -David


BR,
Simon

[1] https://wiki.openstack.org/wiki/Operations/Tools#Monitoring_and_Trending
[2]
http://docs.openstack.org/developer/oslo.middleware/api.html#oslo_middleware.Healthcheck
[3]
http://docs.openstack.org/kilo/config-reference/content/object-storage-healthcheck.html



- OpenStack projects like Rally or Tempest can generate synthetic
 loads and run end-to-end tests. Integrating them with a monitoring system
 isn't terribly difficult either.

 As far as Monitoring-as-a-service is concerned, do you have plans to
 integrate/leverage Ceilometer?

  BR,
  Simon

 On Tue, May 12, 2015 at 7:20 PM, Vinod Pandarinathan (vpandari) 
 vpand...@cisco.com wrote:

   Hello,

I'm pleased to announce the development of a new project called
 CloudPulse.  CloudPulse provides Openstack
  health-checking services to both operators, tenants, and applications.
 This project will begin as
  a StackForge project based upon an empty cookiecutter[1] repo.  The
 repos to work in are:
  Server:   https://github.com/stackforge/cloudpulse
  Client: https://github.com/stackforge/python-cloudpulseclient

  Please join us via IRC on #openstack-cloudpulse on freenode.

  I am holding a doodle poll to select times for our first meeting the
 week after summit.  This doodle poll will close May 24th and meeting times
 will be announced on the mailing list at that time.  At our first IRC
 meeting,
  we will draft additional core team members, so if you're interested in
  joining a fresh new development effort, please attend our first meeting.
  If you're interested in CloudPulse, please take a moment to fill out the
 doodle poll here:

  https://doodle.com/kcpvzy8kfrxe6rvb

  The initial core team is composed of
  Ajay Kalambur,
  Behzad Dastur, Ian Wells, Pradeep chandrasekhar, Steven Dake and Vinod
 Pandarinathan.
  I expect more members to join during our initial meeting.

   A little bit about CloudPulse:
   Cloud operators need notification of OpenStack failures before a
 customer reports the failure. Cloud operators can then take timely
 corrective actions with minimal disruption to applications.  Many cloud
 applications, including
  those I am interested in (NFV) have very stringent service level
 agreements.  Loss of service can trigger contractual
  costs associated with the service.  Application high availability
 requires an operational OpenStack Cloud, and the reality
  is that occasionally OpenStack clouds fail in some mysterious ways.
 This project intends to identify when those failures
  occur so corrective actions may be taken by operators, tenants, and the
 applications themselves.

  OpenStack is considered healthy when OpenStack API services respond
 appropriately.  Further OpenStack is
  healthy when network traffic can be sent between

Re: [openstack-dev] [Fuel] interaction between fuel-plugin and fuel-UI

2015-05-07 Thread Simon Pasquier
Hello Samuel,
As far as I know, this isn't possible unfortunately. For our own needs, we
ended up adding a fixed-size list with all items but the first one
disabled. When you enter something in the first input box, it enables the
second box and so on (see [1]). In any case, this would be a good
addition...
BR,
Simon
[1]
https://github.com/stackforge/fuel-plugin-elasticsearch-kibana/blob/master/environment_config.yaml#L21
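The trick looks roughly like this in environment_config.yaml (field and plugin names are invented here; see [1] for the real file):

```yaml
attributes:
  node_name_1:
    type: "text"
    label: "Node 1"
    value: ""
  node_name_2:
    type: "text"
    label: "Node 2"
    value: ""
    restrictions:
      - condition: "settings:my_plugin.node_name_1.value == ''"
        action: "disable"
```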

On Thu, May 7, 2015 at 3:37 PM, Samuel Bartel samuel.bartel@gmail.com
wrote:

 Hi all,



 I am working on two plugins for Fuel: logrotate and cinder-netapp (to add
 a multibackend feature).

 In these two plugins I face the same problem: is it possible, in the
 environment YAML config describing the fields to display for the plugin in
 the UI, to have some dynamic elements?

 Let me explain my need. I would like to be able to add additional elements by
 clicking on a “+” button, as for the IP ranges in the network tab, in order
 to be able to:

 - add new log files to manage for logrotate instead of having a static
 list

 - add extra NetApp filers/volumes instead of being able to set up only one for
 cinder-netapp in a multibackend scope.

 For the cinder netapp for example, I would be able to access to the netapp
 server hostname with:

 $::fuel_settings[‘cinder_netapp’][0][‘netapp_server_hostname’]  #for the
 first one

 $::fuel_settings[‘cinder_netapp’][1][‘netapp_server_hostname’]  #for the
 second  one

 And so on.



 Can we do that with the current plugin framework? If not, is it planned to
 add such a feature?



 Regards,


 Samuel

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] H302 considered harmful

2015-02-26 Thread Simon Pasquier
On Wed, Feb 25, 2015 at 9:47 PM, Doug Hellmann d...@doughellmann.com
wrote:



 On Wed, Feb 25, 2015, at 02:59 PM, Robert Collins wrote:
  On 26 February 2015 at 08:54, melanie witt melwi...@gmail.com wrote:
   On Feb 25, 2015, at 10:51, Duncan Thomas duncan.tho...@gmail.com
 wrote:
  
   Is there anybody who'd like to step forward in defence of this rule
 and explain why it is an improvement? I don't discount for a moment the
 possibility I'm missing something, and welcome the education in that case
  
   A reason I can think of would be to preserve namespacing (no
 possibility of function or class name collision upon import). Another
 reason could be maintainability, scenario being: Person 1 imports ClassA
 from a module to use, Person 2 comes along later and needs a different
 class from the module so they import ClassB from the same module to use,
 and it continues. If only the module had been imported, everybody can just
 do module.ClassA, module.ClassB instead of potentially multiple imports
 from the same module of different classes and functions. I've also read it
 doesn't cost more to import the entire module rather than just a function
 or a class, as the whole module has to be parsed either way.
 
  I think the primary benefit is that when looking at the code you can
  tell where a name came from. If the code is using libraries that one
  is not familiar with, this makes finding the implementation a lot
  easier (than e.g. googling it and hoping its unique and not generic
  like 'create' or something.

 I think the rule originally came from the way mock works. If you import
 a thing in your module and then a test tries to mock where it came from,
 your module still uses the version it imported because the name lookup
 isn't done again at the point when the test runs. If all external
 symbols are accessed through the module that contains them, then the
 lookup is done at runtime instead of import time and mocks can replace
 the symbols. The same benefit would apply to monkey patching like what
 eventlet does, though that's less likely to come up in our own code than
 it is for third-party and stdlib modules.


I second Doug's analysis. I've already had a hard time figuring out why
mock wasn't doing what I wanted it to do and following H302 just fixed it...
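For anyone hitting the same surprise, here is a tiny self-contained illustration of the mechanism Doug describes (the module name `lib` is made up; the two styles are simulated with exec so everything fits in one file):

```python
import sys
import types
from unittest import mock

# Fake third-party module so the example is self-contained.
lib = types.ModuleType("lib")
lib.greet = lambda: "real"
sys.modules["lib"] = lib

ns_from, ns_import = {}, {}
# Violates H302: binds the function object itself at import time.
exec("from lib import greet\ndef call():\n    return greet()", ns_from)
# Follows H302: the name is looked up through the module on every call.
exec("import lib\ndef call():\n    return lib.greet()", ns_import)

with mock.patch("lib.greet", return_value="mocked"):
    print(ns_from["call"]())    # -> real   (the patch is invisible)
    print(ns_import["call"]())  # -> mocked (the patch is seen)
```

The patch replaces the `greet` attribute on the module object, so only code that goes through the module at call time sees it.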

BR,
Simon



 Doug

 
  -Rob
 
  --
  Robert Collins rbtcoll...@hp.com
  Distinguished Technologist
  HP Converged Cloud
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cache for packages on master node

2015-02-10 Thread Simon Pasquier
Hello Tomasz,
In a previous life, I used squid to speed up package downloads and it
worked just fine...
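For what it's worth, the setup boiled down to a handful of squid.conf lines like these (paths and sizes are illustrative, not the exact config I used):

```conf
http_port 3128
cache_dir ufs /var/spool/squid 20000 16 256
maximum_object_size 512 MB
# package files are immutable, so cache .deb/.rpm aggressively
refresh_pattern -i \.(deb|rpm)$ 129600 100% 129600 refresh-ims override-expire
refresh_pattern . 0 20% 4320
```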
Simon

On Tue, Feb 10, 2015 at 3:24 PM, Tomasz Napierala tnapier...@mirantis.com
wrote:

 Hi,

  We are currently redesigning our approach to upstream distributions and
  obviously we will need some cache system for packages on the master node. It
  should work for deb and rpm packages, and be able to serve up to 200 nodes.
  I know we had bad experiences in the past; can you guys share your thoughts
  on that?
 I just collected what was mentioned in other discussions:
 - approx
 - squid
 - apt-cacher-ng
 - ?

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Logs format on UI (High/6.0)

2015-02-02 Thread Simon Pasquier
Hello,
(resurrecting this old thread because I think I found the root cause)

The problem affects all OpenStack environments using Syslog, not only
Fuel-based installations: when use_syslog is true, the
logging_context_format_string and logging_default_format_string parameters
aren't taken into account (see [1] for details).
The issue is fixed in oslo.log but not in oslo-incubator/log (See [2]).
Depending on when the different projects synchronized with oslo-incubator
during the Juno timeframe, some of them are immune to the bug (from the
Fuel bug report: heat, glance and neutron). As such the bug will affect all
projects that don't switch to oslo.log during the Kilo cycle.
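For reference, the options involved look like this in a service's config file (the values here are illustrative); with the buggy oslo-incubator code, the two format strings are silently ignored as soon as use_syslog is enabled:

```ini
[DEFAULT]
use_syslog = True
syslog_log_facility = LOG_LOCAL0
# honoured by oslo.log, ignored by the buggy oslo-incubator log module:
logging_context_format_string = %(asctime)s %(levelname)s %(name)s [%(request_id)s] %(message)s
logging_default_format_string = %(asctime)s %(levelname)s %(name)s [-] %(message)s
```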

BR,
Simon

[1] https://bugs.launchpad.net/oslo.log/+bug/1399088
[2] https://review.openstack.org/#/c/151157/

On Fri, Dec 12, 2014 at 7:35 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 We have a high priority bug in 6.0:
 https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story.

  Our OpenStack services used to send logs in a strange format with an extra
  copy of the timestamp and loglevel:
 == ./neutron-metadata-agent.log ==
 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 INFO
 neutron.common.config [-] Logging enabled!

  And we have a workaround for this. We hide the extra timestamp and use the
  second loglevel.

  In Juno some of the services have updated oslo.logging and now send logs in
 simple format:
 == ./nova-api.log ==
 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from
 /etc/nova/api-paste.ini

 In order to keep backward compatibility and deal with both formats we have
 a dirty workaround for our workaround:
 https://review.openstack.org/#/c/141450/

  As I see it, our best choice here is to throw away all workarounds and show
  logs on the UI as-is. If a service sends duplicated data, we should show
 duplicated data.

 Long term fix here is to update oslo.logging in all packages. We can do it
 in 6.1.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-30 Thread Simon Pasquier
On Fri, Jan 30, 2015 at 3:05 AM, Kenichi Oomichi oomi...@mxs.nes.nec.co.jp
wrote:

  -Original Message-
  From: Roman Podoliaka [mailto:rpodoly...@mirantis.com]
  Sent: Friday, January 30, 2015 2:12 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [api][nova] Openstack HTTP error codes
 
  Hi Anne,
 
  I think Eugeniya refers to a problem, that we can't really distinguish
  between two different  badRequest (400) errors (e.g. wrong security
  group name vs wrong key pair name when starting an instance), unless
  we parse the error description, which might be error prone.

 Yeah, current Nova v2 API (not v2.1 API) returns inconsistent messages
 in badRequest responses, because these messages are implemented at many
 places. But Nova v2.1 API can return consistent messages in most cases
  because its input validation framework generates messages automatically [1].


When you say "most cases", you mean JSON schema validation only, right?
IIUC, this won't apply to the errors described by the OP such as invalid
key name, unknown security group, ...

Thanks,
Simon



 Thanks
 Ken'ichi Ohmichi

 ---
 [1]:
 https://github.com/openstack/nova/blob/master/nova/api/validation/validators.py#L104

  On Thu, Jan 29, 2015 at 6:46 PM, Anne Gentle
  annegen...@justwriteclick.com wrote:
  
  
   On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova
   ekudryash...@mirantis.com wrote:
  
   Hi, all
  
  
   Openstack APIs interact with each other and external systems
 partially by
   passing of HTTP errors. The only valuable difference between types of
   exceptions is HTTP-codes, but current codes are generalized, so
 external
   system can’t distinguish what actually happened.
  
  
   As an example two different failures below differs only by error
 message:
  
  
   request:
  
   POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
  
   Host: 192.168.122.195:8774
  
   X-Auth-Project-Id: demo
  
   Accept-Encoding: gzip, deflate, compress
  
   Content-Length: 189
  
   Accept: application/json
  
   User-Agent: python-novaclient
  
   X-Auth-Token: 2cfeb9283d784cfba694f3122ef413bf
  
   Content-Type: application/json
  
  
    {"server": {"name": "demo", "imageRef":
    "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "test", "flavorRef":
    "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
    "bar"}]}}
  
   response:
  
   HTTP/1.1 400 Bad Request
  
   Content-Length: 118
  
   Content-Type: application/json; charset=UTF-8
  
   X-Compute-Request-Id: req-a995e1fc-7ea4-4305-a7ae-c569169936c0
  
   Date: Fri, 23 Jan 2015 10:43:33 GMT
  
  
    {"badRequest": {"message": "Security group bar not found for project
    790f5693e97a40d38c4d5bfdc45acb09.", "code": 400}}
  
  
   and
  
  
   request:
  
   POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
  
   Host: 192.168.122.195:8774
  
   X-Auth-Project-Id: demo
  
   Accept-Encoding: gzip, deflate, compress
  
   Content-Length: 192
  
   Accept: application/json
  
   User-Agent: python-novaclient
  
   X-Auth-Token: 24c0d30ff76c42e0ae160fa93db8cf71
  
   Content-Type: application/json
  
  
    {"server": {"name": "demo", "imageRef":
    "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "foo", "flavorRef":
    "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
    "default"}]}}
  
   response:
  
   HTTP/1.1 400 Bad Request
  
   Content-Length: 70
  
   Content-Type: application/json; charset=UTF-8
  
   X-Compute-Request-Id: req-87604089-7071-40a7-a34b-7bc56d0551f5
  
   Date: Fri, 23 Jan 2015 10:39:43 GMT
  
  
    {"badRequest": {"message": "Invalid key_name provided.", "code": 400}}
  
  
   The former specifies an incorrect security group name, and the latter
 an
   incorrect keypair name. And the problem is, that just looking at the
   response body and HTTP response code an external system can’t
 understand
   what exactly went wrong. And parsing of error messages here is not
 the way
   we’d like to solve this problem.
  
  
   For the Compute API v 2 we have the shortened Error Code in the
   documentation at
  
 http://developer.openstack.org/api-ref-compute-v2.html#compute_server-addresses
  
   such as:
  
   Error response codes
   computeFault (400, 500, …), serviceUnavailable (503), badRequest (400),
   unauthorized (401), forbidden (403), badMethod (405), overLimit (413),
   itemNotFound (404), buildInProgress (409)
  
   Thanks to a recent update (well, last fall) to our build tool for docs.
  
   What we don't have is a table in the docs saying computeFault has this
   longer Description -- is that what you are asking for, for all
 OpenStack
   APIs?
  
   Tell me more.
  
   Anne
  
  
  
  
   Another example for solving this problem is AWS EC2 exception codes
 [1]
  
  
   So if we have some service based on OpenStack projects, it would be
   useful to have concrete error codes (textual or numeric) which would
   allow it to determine what exactly went wrong and to correctly process
   the resulting exception.
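A machine-readable identifier would let clients branch on the failure instead of parsing message text. A minimal sketch of the client side (the `errorCode` field is hypothetical and only illustrates what a richer error envelope would enable; it is not part of the current Compute API):

```python
def classify_fault(body):
    # 'body' follows the Compute API error envelope shown above, e.g.
    # {"badRequest": {"message": "Invalid key_name provided.", "code": 400}}
    fault_name, fault = next(iter(body.items()))
    # Today a client can only branch on the fault name and HTTP code; a
    # hypothetical machine-readable 'errorCode' field would let it tell an
    # invalid keypair apart from an invalid security group.
    return fault.get('errorCode', fault_name), fault['code']
```

With only the envelope above, both failures discussed in this thread come back as ('badRequest', 400); with the hypothetical field they become distinguishable.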

Re: [openstack-dev] [Fuel] Plugins for Fuel: repo, doc, spec - where?

2015-01-26 Thread Simon Pasquier
Hello,

I pretty much agree with Evgeniya here. Keeping everything (code, docs,
specs and tests) in the same repo is essential to keep up-to-date
information. Otherwise chances are that it will diverge eventually.
See other comments inline.

BR,

Simon

On Fri, Jan 23, 2015 at 4:50 PM, Evgeniya Shumakher eshumak...@mirantis.com
 wrote:

 Folks -

 I support the idea to keep plugins' code and other artifacts, e.g. design
 specs, installation and user guides, test scripts, test plan, test report,
 etc, in one repo, just to create dedicated folders for that.
 My argument here is pretty simple: I consider a Fuel plugin a separate
 and independent project, which should be stored in a dedicated repo and
 maintained by the plugin development team.

 But I don't see why we can't use Fuel Launchpad [1] to create blueprints
 if we think it's necessary, but a BP itself shouldn't be a 'must do' for
 those who are working on Fuel plugins.

 And a couple more comments:

1. Have a separate stackforge repo per Fuel plugin in format
fuel-plugin-name, with separate core-reviewers group which should have
plugin contributor initially

 On stackforge.
 Right now there are 4 Fuel plugins developed (GlusterFS, NetApp, LBaaS,
 VPNaaS) and 4 more are coming (NFS, FWaaS, Contrail, EMC VNX). Keeping in
 mind that the number of Fuel plugins will grow, does it make sense to keep
 them in stackforge?
 Mike, Alexander, we discussed an option to keep everything in fuel-infra
 [3].
 I would like to hear what other folks think about that.


Sounds like a good idea to use Fuel infra. From my recent experience, the
Fuel plugin framework is easy to work with and there will probably be many
plugins added to the list. Asking for a new repository or for access right
modifications is going to put a burden on the OpenStack infra team if it
happens too often.



 On the repo name.
 I would suggest to add the name of OpenStack component the plugin works
 with also fuel-plugin-component-name, e.g.
 fuel-plugin-cinder-emc-vnx.


Ok for plugins that deal with specific OpenStack services but this might
not be true for all plugins.



1. Have docs folder in the plugin, and ability to build docs out of it
    - do we want Sphinx, or is the simple GitHub docs format OK? So people
    can just go to github/stackforge to see the docs

 I agree with Evgeniy. We are talking about best practices of Fuel plugin
 development. I would prefer to keep them as simple and as easy as possible.


Definitely +1.


1. Have specification in the plugin repo
   - also, do we need Sphinx here?


1. Have plugins tests in the repo

 So, here is how the plugin repo structure could look:

- fuel-plugin-component-name
- specs
   - plugin
   - tests
   - docs
   - utils

 Alexander -

 I don't think that putting these specs [4, 5] to fuel-specs [6] is a good
 idea.
 Let's come to an agreement, so plugin developers will know where they
 should commit code, specs and other docs.

 Looking forward to your comments.
 Thanks.


 [1] https://launchpad.net/fuel
 [2] https://github.com/stackforge
 [3] https://review.fuel-infra.org/
 [4] https://review.openstack.org/#/c/129586/
 [5] https://review.openstack.org/#/c/148475/4
 [6] https://github.com/stackforge/fuel-specs

 On Fri, Jan 23, 2015 at 4:14 PM, Alexander Ignatov aigna...@mirantis.com
 wrote:

 Mike,

 I also wanted to add that there is a PR already on adding plugins
 repos to stackforge: https://review.openstack.org/#/c/147169/

 All this looks good, but it's not clear when this patch will be merged
 and the repos created.
 So the question is: what should we do with the current specs made in
 fuel-specs [1,2] which are targeted at plugins?
 And what will the development process look like for plugins added to the
 6.1 roadmap? Especially for plugins that did not come from external
 vendors and partners. Will we create separate projects on Launchpad and
 duplicate our
 For now I'm not sure if we need to wait for the new infrastructure
 created in stackforge/launchpad for each plugin and follow the common
 procedure to land current plugins in existing repos during the 6.1
 milestone.

 [1] https://review.openstack.org/#/c/129586/
 [2] https://review.openstack.org/#/c/148475/4

 Regards,
 Alexander Ignatov



 On 23 Jan 2015, at 12:43, Nikolay Markov nmar...@mirantis.com wrote:

 I also wanted to add that there is a PR already on adding plugins
 repos to stackforge: https://review.openstack.org/#/c/147169/

 There is a battle in the comments right now, because some people do not
 agree that so many repos are needed.

 On Fri, Jan 23, 2015 at 1:25 AM, Mike Scherbakov
 mscherba...@mirantis.com wrote:

 Hi Fuelers,
 we've implemented pluggable architecture piece in 6.0, and got a number of
 plugins already. Overall development process for plugins is still not
 fully
 defined.
 We initially thought that having all the plugins in one repo on stackforge
 is Ok, we also put some docs into existing fuel-docs repo, and 

Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-27 Thread Simon Pasquier
I've added another option to the Etherpad: collectd can do basic threshold
monitoring and run any kind of scripts on alert notifications. The other
advantage of collectd would be the RRD graphs for (almost) free.
Of course since monit is already supported in Fuel, this is the fastest
path to get something done.
Simon
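Whichever tool is picked, monit itself only sends mail alerts, but it can `exec` an arbitrary program when a threshold fires, and that program can make the HTTP call Dmitriy asks about below. A minimal sketch of such a hook (the notification endpoint, payload format and token handling are assumptions, not an actual Fuel API):

```python
import json
import shutil
import urllib.request

def disk_usage_percent(path='/'):
    # The kind of basic check discussed in this thread: how full the
    # master node's filesystem is, as a percentage.
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def notify(endpoint, message, token):
    # POST a JSON notification; credentials would have to be stored where
    # the hook can read them (one of the open questions in this thread).
    req = urllib.request.Request(
        endpoint,
        data=json.dumps({'topic': 'warning', 'message': message}).encode(),
        headers={'Content-Type': 'application/json', 'X-Auth-Token': token})
    return urllib.request.urlopen(req)
```

monit would then run the hook with something like `check filesystem rootfs with path / if space usage > 90% then exec "/usr/bin/notify-hook"` (syntax approximate).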

On Thu, Nov 27, 2014 at 9:53 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 Is it possible to send HTTP requests from monit, e.g. for creating
 notifications?
 I scanned through the docs and found only alerts for sending mail;
 also, where will the token (username/pass) for monit be stored?

 Or maybe there is another plan? without any api interaction

 On Thu, Nov 27, 2014 at 9:39 AM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:

  This I didn't know. It's true in fact, I checked the manifests. Though
 monit is not deployed yet because of the lack of packages in the Fuel
 ISO. Anyway, I think the argument about using yet another monitoring
 service is now rendered invalid.

 So +1 for monit? :)

 P.


 On 11/26/2014 05:55 PM, Sergii Golovatiuk wrote:

 Monit is easy and is used to control states of Compute nodes. We can
 adopt it for master node.

  --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 As for me - zabbix is overkill for one node. Zabbix Server + Agent +
 Frontend + DB + HTTP server, and all of it for one node? Why not use
 something that was developed for monitoring one node, doesn't have many
 deps and works out of the box? Not necessarily Monit, but something similar.

 On Wed, Nov 26, 2014 at 6:22 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:

 We want to monitor Fuel master node while Zabbix is only on slave nodes
 and not on master. The monitoring service is supposed to be installed on
 Fuel master host (not inside a Docker container) and provide basic info
 about free disk space, etc.

 P.


 On 11/26/2014 02:58 PM, Jay Pipes wrote:

 On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

 So then in the end, there will be 3 monitoring systems to learn,
 configure, and debug? Monasca for cloud users, zabbix for most of the
 physical systems, and sensu or monit to be small?

 Seems very complicated.

 If not just monasca, why not the zabbix thats already being deployed?


 Yes, I had the same thoughts... why not just use zabbix since it's
 used already?

 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] config options not correctly deprecated

2014-11-14 Thread Simon Pasquier
FYI, I've forwarded this thread to the operators mailing list as I feel
they will be very much interested in this discussion.
BR
Simon

On Fri, Nov 14, 2014 at 1:37 AM, Sean Dague s...@dague.net wrote:

 On 11/13/2014 06:56 PM, Clint Byrum wrote:
  Excerpts from Ben Nemec's message of 2014-11-13 15:20:47 -0800:
  On 11/10/2014 05:00 AM, Daniel P. Berrange wrote:
  On Mon, Nov 10, 2014 at 09:45:02AM +, Derek Higgins wrote:
  Tl;dr oslo.config wasn't logging warnings about deprecated config
  options, do we need to support them for another cycle?
   AFAIK, there has not been any change in oslo.config behaviour
   in the Juno release, as compared to previous releases. The
   oslo.config behaviour is that the generated sample config file
   contains all the deprecation information.
 
   The idea that oslo.config should issue log warnings is a decent RFE
   to make the use of deprecated config settings more visible.
   This is an enhancement though, not a bug.
 
  A set of patches to remove deprecated options in Nova was landed on
  Thursday[1], these were marked as deprecated during the juno dev cycle
  and got removed now that kilo has started.
  Yes, this is our standard practice - at the start of each release
   cycle, we delete anything that was marked as deprecated in the
  previous release cycle. ie we give downstream users/apps 1 release
  cycle of grace to move to the new option names.
 
  Most of the deprecated config options are listed as deprecated in the
  documentation for nova.conf changes[2] linked to from the Nova upgrade
  section in the Juno release notes[3] (the deprecated cinder config
  options are not listed here along with the allowed_direct_url_schemes
  glance option).
   The sample nova.conf generated by oslo lists all the deprecations.
 
  For example, for cinder options it shows what the old config option
  name was.
 
[cinder]
 
#
# Options defined in nova.volume.cinder
#
 
# Info to match when looking for cinder in the service
# catalog. Format is: separated values of the form:
# service_type:service_name:endpoint_type (string value)
# Deprecated group/name - [DEFAULT]/cinder_catalog_info
#catalog_info=volume:cinder:publicURL
 
  Also note the deprecated name will not appear as an option in the
  sample config file at all, other than in this deprecation comment.
 
 
  My main worry is that there were no warnings about these options being
  deprecated in nova's logs (as a result they were still being used in
  tripleo), once I noticed tripleo's CI jobs were failing and discovered
  the reason I submitted 4 reverts to put back the deprecated options in
  nova[4] as I believe they should now be supported for another cycle
  (along with a fix to oslo.config to log warnings about their use).
 The 4
  patches have now been blocked as they go against our deprecation
 policy.
 
  I believe the correct way to handle this is to support these options
 for
  another cycle so that other operators don't get hit when upgrading to
  kilo. While at that same time fix oslo.config to report the deprecated
  options in kilo.
  I have marked this mail with the [all] tag because there are other
  projects using the same deprecated_name (or deprecated_group)
  parameter when adding config options, I think those projects also now
  need to support their deprecated options for another cycle.
  AFAIK, there's nothing different about Juno vs previous release cycles,
  so I don't see any reason to do anything different this time around.
  No matter what we do there is always a possibility that downstream
  apps / users will not notice and/or ignore the deprecation. We should
  certainly look at how to make deprecation more obvious, but I don't
  think we should change our policy just because an app missed the fact
  that these were deprecated.
  So the difference to me is that this cycle we are aware that we're
  creating a crappy experience for deployers.  In the past we didn't have
  anything in the CI environment simulating a real deployment so these
  sorts of issues went unnoticed.  IMHO telling deployers that they have
  to troll the sample configs and try to figure out which deprecated opts
  they're still using is not an acceptable answer.
 
  I don't know if this is really fair, as all of the deprecated options do
  appear here:
 
 
 http://docs.openstack.org/juno/config-reference/content/nova-conf-changes-juno.html
 
  So the real bug is that in TripleO we're not paying attention to the
  appropriate stream of deprecations. Logs on running systems is a mighty
  big hammer when the documentation is being updated for us, and we're
  just not paying attention in the right place.
 
  BTW, where SHOULD continuous deployers pay attention for this stuff?
 
  Now that we do know, I think we need to address the issue.  The first
  step is to revert the deprecated removals - they're not hurting
  anything, and if we wait another cycle we can fix oslo.config and then
  remove them once 
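The warning Derek asked for boils down to checking the deprecated name at lookup time. An illustrative sketch of that resolution logic (this is not the actual oslo.config implementation, just the mechanism):

```python
import warnings

def lookup(conf, name, deprecated_name=None, default=None):
    # Prefer the new option name; fall back to the deprecated one and emit
    # a warning so operators notice before the old name is removed.
    if name in conf:
        return conf[name]
    if deprecated_name is not None and deprecated_name in conf:
        warnings.warn("option '%s' is deprecated; use '%s' instead"
                      % (deprecated_name, name), DeprecationWarning)
        return conf[deprecated_name]
    return default
```

A deployment still using the old cinder_catalog_info name would keep working but would now leave a trace in the logs, which is exactly what TripleO's CI was missing.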

Re: [openstack-dev] [Neutron] FWaaS/Security groups Not blocking ongoing traffic

2014-10-27 Thread Simon Pasquier
Hello Itzik,
This has been discussed lately on this ML. Please see
https://bugs.launchpad.net/neutron/+bug/1335375.
BR,
Simon

On Mon, Oct 27, 2014 at 1:17 PM, Itzik Brown itbr...@redhat.com wrote:


 Hi,

 When building a firewall with a rule to block a specific Traffic - the
 current traffic is not blocked.

 For example:

 Running a Ping to an instance and then building a firewall with a rule to
 block ICMP to this instance doesn't have affect while the ping command is
 still running.
 Exiting the command and then trying pinging the Instance again shows the
 desired result - i.e. the traffic is blocked.

 It also the case when using security groups to block traffic.

 Is this the desired outcome or is it a bug?

 Itzik

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NOVA] security group fails to attach to an instance if port-id is specified during boot.

2014-09-26 Thread Simon Pasquier
On Fri, Sep 26, 2014 at 10:19 AM, Christopher Yeoh cbky...@gmail.com
wrote:

 On Fri, 26 Sep 2014 11:25:49 +0400
 Oleg Bondarev obonda...@mirantis.com wrote:

  On Fri, Sep 26, 2014 at 3:30 AM, Day, Phil philip@hp.com wrote:
 
   I think the expectation is that if a user is already interacting
   with Neutron to create ports then they should do the security group
   assignment in Neutron as well.
  
 
  Agree. However what do you think a user expects when he/she boots a
  vm (no matter providing port_id or just net_id)
  and specifies security_groups? I think the expectation should be that
  instance will become a member of the specified groups.
   Ignoring the security_groups parameter when a port is provided (as it
   is now) seems completely unfair to me.

  One option would be to return a 400 if both port id and security_groups
  are supplied.


FWIW this is what has been implemented in Heat when such a request is made
(see the discussion on the bug report and [1])

Simon

[1]
http://git.openstack.org/cgit/openstack/heat/commit/?id=5c5e36de3737a85bec5023c94265e6bbaf6ad78e
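The 400-on-conflict behaviour Chris suggests (and Heat implements) amounts to a simple request-validation rule. A sketch under assumed parameter shapes (the names are illustrative, not Nova's actual internals):

```python
class BadRequest(Exception):
    """Stands in for an HTTP 400 response."""

def validate_boot_networking(requested_networks, security_groups):
    # When the caller supplies a pre-created Neutron port, security groups
    # must already be set on the port itself, so accepting both and
    # silently ignoring one of them would be misleading.
    has_port = any(net.get('port') for net in requested_networks)
    if has_port and security_groups:
        raise BadRequest('security_groups cannot be combined with a port '
                         'id; set security groups on the Neutron port '
                         'instead')
```

Booting with only a net-id plus security_groups would still pass validation; only the ambiguous combination is rejected.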



 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] Re: SSL in Fuel.

2014-09-11 Thread Simon Pasquier
Hi,

On Thu, Sep 11, 2014 at 1:03 PM, Sebastian Kalinowski 
skalinow...@mirantis.com wrote:

 I have some topics for [1] that I want to discuss:

 1) Should we allow users to turn SSL on/off for Fuel master?
  I think we should since some users may not care about SSL and
  enabling it will just make them unhappy (like warnings in browsers,
  expiring certs).


Definitely +1. I think that Tomasz mentioned somewhere that HTTP should be
kept as the default.


 2) Will we allow users (in first iteration) to use their own certs?
  If we will (which I think we should, and other people also seem to
  share this point of view), we have some options for that:
   A) Add information to the docs about where to upload your own
  certificate on the master node (no UI) - less work, but requires a
  little more action from users
  B) Simple form in UI where user will be able to paste his certs -
 little bit more work, user friendly
 Are there any reasons we shouldn't do that?


Option A is enough. If there is enough time to implement option B, that's
cool but this should not be a blocker.


 3) How we will manage cert expiration?
  Stanislaw proposed that we should show the user a notification about
  cert expiration. We could check that in a cron job.
  I think that we should also allow the user to generate a new cert in
  Fuel if the old one expires.


As long as the user cannot upload a certificate, we don't need to care
about this point but it should be mentioned in the doc.
And to avoid this problem, Fuel should generate certificates that expire
in many years (e.g. 10 or more).
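The cron check proposed above can be built on the standard library: `ssl.cert_time_to_seconds` parses the certificate's notAfter timestamp. A minimal sketch (the 30-day threshold and the notification wording are assumptions):

```python
import ssl
import time

def days_until_expiry(not_after):
    # not_after uses the certificate time format seen in ssl.getpeercert(),
    # e.g. 'Jun 1 12:00:00 2035 GMT'.
    return int((ssl.cert_time_to_seconds(not_after) - time.time()) // 86400)

def expiry_notification(not_after, warn_days=30):
    # Returns a message suitable for a Fuel UI notification when the cert
    # is close to expiry, or None when no action is needed.
    days = days_until_expiry(not_after)
    if days < warn_days:
        return ('SSL certificate expires in %d day(s); generate a new one'
                % days)
    return None
```

A cron job would extract notAfter from the installed certificate (e.g. via openssl x509 -enddate) and feed it to this check.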

BR

Simon


 I'll also remove part about adding cert validation in fuel agent since it
 would require a significant amount of work and it's not essential for first
 iteration.

 Best,
 Sebastian


 [1] https://blueprints.launchpad.net/fuel/+spec/fuel-ssl-endpoints


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] Re: SSL in Fuel.

2014-09-10 Thread Simon Pasquier
Hello,

Thanks for the detailed email, Stanislaw. Your suggestion of deploying a CA
container is really interesting. Especially for OSTF and other testing
since the tools only need to know about the root CA.

Let's back up a bit and list the different options for Fuel users:
0/ The user is happy with plain HTTP.
=> Already supported :)
1/ The user wants HTTPS but doesn't want the burden associated with
certificate management.
=> Fuel creates and manages the SSL certificates, be they self-signed or
signed by some internal CA.
=> Using an internal CA instead of multiple self-signed certificates is
cleaner, as you explained.
2/ The user wants HTTPS and wants to use certificates which are generated
by an external source (either some internal corporate PKI or some public
certificate authority)
=> Fuel supports certificate + key uploads
=> It should be possible to tell Fuel which entity (Fuel, OSt environment)
uses which certificate
3/ The user wants HTTPS and agrees to let Fuel generate certificates on
behalf of some corporate PKI.
=> Fuel supports CA + key uploads

I think that option 1 is the way to go for a first approach. Option 2 is
definitely something that end-users will need at some point. I'm less
convinced by option 3: if I were a PKI admin, I'd be reluctant to let Fuel
generate certificates on its own. Also, my gut feeling tells me that
implementing 1 & 2 is already quite a lot of work.

I've also added some questions/comments inline.

BR,

Simon

On Tue, Sep 9, 2014 at 5:53 PM, Stanislaw Bogatkin sbogat...@mirantis.com
wrote:

 I think that if we have 3 blueprints that each cover some SSL
 functionality, then we can discuss them here.
 My vision about SSL in Fuel split into 3 parts:

 A) We need to implement [1] blueprint, since it is the only way to
 generate certificates.
 How i see that:
 1.0 We sync puppet-openssl from upstream, adapt it for Fuel tasks.
 1.1 We create docker container (we already have many, so containerized
 CA should work well) with OpenSSL and puppet manifests in it.
  1.2 When the container starts for the first time, it will create a CA
  that will be stored on the master node.

 Our workitems here is:
 - Create docker container
 - Sync upstream puppet-openssl and adapt it for Fuel
 - Write code to create CA


First of all I think this blueprint should be submitted to fuel-specs ;)
How do you see the exchanges between the CA container and the various
services? For instance, how would nailgun asks for a signed certificate and
get back the result?



 B) We need to implement [2] blueprint. How I see that:
  1.3 When the CA container starts for the first time and creates the CA,
  it will check for a keypair for the master node (Fuel UI). If that
  keypair is not found, the CA will create it, update the nginx config
  file accordingly and restart nginx on the master node.


I find awkward to have the CA container restarting nginx...


 Our workitems here is:
  - Write code to check if we already have a generated certificate, and
  generate a new one if we have not.

 C) Then we need to implement [3] blueprint
 For next step we have 2 options:
   First:
  1.3 When we create a new cluster, we know all the information needed to
  create the new keypair(s). When the user presses the Deploy changes
  button, we will create the new keypair(s).
 Q: The main question here is - how many keypairs will we create? One for
 every service or one for all?


As a first implementation, I assume that one certificate for all OSt
services is good enough.


 1.4 We will distribute key(s) with mcollective agent (similar to how
 we sync puppet manifests from master node to other nodes). After that
 private key(s) will deleted from master node.


How will it work if the user modifies the configuration of the environment
afterwards? Say he/she adds one controller node, how will it be able to
copy the private key to the new node?


 1.5 On nodes puppet will do all work. We need to write some code for
 that
 Pros of that method:
  + It's relatively simple; we can create clean and lucid code that
  will be easy to support
 Cons of that method:
  - We need to send every private key over the network. We can reduce
  this danger because we will already have passwordless sync over the
  network between the master node and other nodes, since we will generate
  ssh keys for nodes before we distribute any data at the deployment stage.

   Second:
  1.3 When we create a new cluster, we do all the work the same way as we
  do it now, but after provisioning we will create a keypair on the first
  node, make a CSR for every service (or just one, if we create one
  certificate for all services) and send that CSR to the master node,
  where it will be signed and the certificate sent back.
  1.4 Puppet will do all the work on the nodes. We, obviously, need to
  write some code for it. But we need to sync our keys across the
  controllers all the same (and we don't currently have a reliable
  mechanism to do this)
 Pros of that method:
 + I don't see any
 Cons of that method:
 - Code will be not so obvious
 - To 

Re: [openstack-dev] [Neutron] minimal scope covered by third-party testing

2014-04-04 Thread Simon Pasquier
Hi Salvatore,

On 03/04/2014 14:56, Salvatore Orlando wrote:
 Hi Simon,
 
snip
 
 I hope stricter criteria will be enforced for Juno; I personally think
 every CI should run at least the smoketest suite for L2/L3 services (eg:
 load balancer scenario will stay optional).

I thought about this a little, and I feel it might not have
_immediately_ caught the issue Kyle talked about [1].

Let's rewind the time line:
1/ Change to *Nova* adding external events API is merged
https://review.openstack.org/#/c/76388/
2/ Change to *Neutron* notifying Nova when ports are ready is merged
https://review.openstack.org/#/c/75253/
3/ Change to *Nova* making libvirt wait for Neutron notifications is merged
https://review.openstack.org/#/c/74832/

At this point, and assuming that the external ODL CI system was running
the L2/L3 smoke tests, change #3 could have passed since external
Neutron CIs don't vote on Nova. Instead it would have voted against
any subsequent change to Neutron.

Simon

[1] https://bugs.launchpad.net/neutron/+bug/1301449

 
 Salvatore
 
 [1] https://review.openstack.org/#/c/75304/
 
 
 
 On 3 April 2014 12:28, Simon Pasquier simon.pasqu...@bull.net
 mailto:simon.pasqu...@bull.net wrote:
 
 Hi,
 
 I'm looking at [1] but I see no requirement of which Tempest tests
 should be executed.
 
 In particular, I'm a bit puzzled that it is not mandatory to boot an
 instance and check that it gets connected to the network. To me, this is
 the very minimum for asserting that your plugin or driver is working
 with Neutron *and* Nova (I'm not even talking about security groups). I
 had a quick look at the existing 3rd party CI systems and I found none
 running this kind of check (correct me if I'm wrong).
 
 Thoughts?
 
 [1] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
 --
 Simon Pasquier
 Software Engineer (OpenStack Expertise Center)
 Bull, Architect of an Open World
 Phone: + 33 4 76 29 71 49 tel:%2B%2033%204%2076%2029%2071%2049
 http://www.bull.com
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] minimal scope covered by third-party testing

2014-04-03 Thread Simon Pasquier
Hi,

I'm looking at [1] but I see no requirement of which Tempest tests
should be executed.

In particular, I'm a bit puzzled that it is not mandatory to boot an
instance and check that it gets connected to the network. To me, this is
the very minimum for asserting that your plugin or driver is working
with Neutron *and* Nova (I'm not even talking about security groups). I
had a quick look at the existing 3rd party CI systems and I found none
running this kind of check (correct me if I'm wrong).

Thoughts?

[1] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
-- 
Simon Pasquier
Software Engineer (OpenStack Expertise Center)
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com
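The minimum check argued for above can be stated concretely: after booting, the CI should assert the server reached ACTIVE and actually got wired to a network. A sketch over the Compute API server representation (field names as returned by GET /servers/{id}; the surrounding boot/teardown logic is left out):

```python
def assert_minimal_smoke(server):
    # A plugin/driver that passes API-only tests can still fail this
    # check: the instance must reach ACTIVE and receive an address on
    # at least one network.
    assert server['status'] == 'ACTIVE', 'server is %s' % server['status']
    addresses = server.get('addresses') or {}
    assert any(addresses.values()), 'no IP address assigned on any network'
```

Running this (plus an optional ping over the assigned address) would exercise the Nova/Neutron integration path that pure API tests miss.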

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] minimal scope covered by third-party testing

2014-04-03 Thread Simon Pasquier
Thanks Salvatore and Kyle for your feedback.

Kyle, you're right, my question has been kicked off by the ML2 ODL bug.
I didn't want to point fingers but rather understand the mid/long-term
plan for 3rd party testing. I'm happy to see that this is taken into
account and hopefully the Juno cycle will provide time to implement the
correct level of testing.

Regards,

Simon

On 03/04/2014 15:26, Kyle Mestery wrote:
 I agree 100% on this in fact. One of the other concerns I have with
 the existing 3rd party
 CI systems is that, other than the audit review Salvatore mentions,
 who is ensuring
 they continue to run ok? Once they've been given voting rights, is
 anyone auditing these
 to ensure they continue to function ok?
 
 I suspect also that Simon is referring to the ODL ML2 MechanismDriver,
 which was broken
 with this commit [1] pushed in at the very end of Icehouse, and in
 fact is still broken unless
 you use the wonky workaround of telling Nova that VIF plugging isn't
 fatal and give it a timeout
 to wait. Better CI for ODL would have caught this, but I'm still
 somewhat saddened this was
 merged so late because now ODL is broken by default and the work to
 fix this is turning out
 to be more challenging than initially thought. :(
 
 Thanks,
 Kyle
 
 [1] https://review.openstack.org/#/c/75253/
 
 On Thu, Apr 3, 2014 at 7:56 AM, Salvatore Orlando sorla...@nicira.com wrote:
 Hi Simon,

 I agree with your concern.
 Let me point out however that VMware mine sweeper runs almost all the smoke
 suite.
 It's been down a few days for an internal software upgrade, so perhaps you
 have not seen any recent report from it.

 I've seen some CI systems testing as little as tempest.api.network.
 Since a criterion on the minimum set of tests to run was not defined prior
 to the release cycle, it was also not ok to enforce it once the system went
 live.
 The only thing active at the moment is a sort of purpose built lie detector
 [1].

 I hope stricter criteria will be enforced for Juno; I personally think every
 CI should run at least the smoketest suite for L2/L3 services (eg: load
 balancer scenario will stay optional).

 Salvatore

 [1] https://review.openstack.org/#/c/75304/



 On 3 April 2014 12:28, Simon Pasquier simon.pasqu...@bull.net wrote:

 Hi,

 I'm looking at [1] but I see no requirement of which Tempest tests
 should be executed.

 In particular, I'm a bit puzzled that it is not mandatory to boot an
 instance and check that it gets connected to the network. To me, this is
 the very minimum for asserting that your plugin or driver is working
 with Neutron *and* Nova (I'm not even talking about security groups). I
 had a quick look at the existing 3rd party CI systems and I found none
 running this kind of check (correct me if I'm wrong).

 Thoughts?

 [1] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
 --
 Simon Pasquier
 Software Engineer (OpenStack Expertise Center)
 Bull, Architect of an Open World
 Phone: + 33 4 76 29 71 49
 http://www.bull.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Nova] libvirt+Xen+OVS VLAN networking in icehouse

2014-03-14 Thread Simon Pasquier
Hi,

I've played a little with XenAPI + OVS. You might be interested in this
bug report [1] that describes a related problem I've seen in this
configuration. I'm not sure about Xen libvirt though. My assumption is
that the future-proof solution for using Xen with OpenStack is the
XenAPI driver but someone from Citrix (Bob?) may confirm.

Note also that the security groups are currently broken with libvirt +
OVS. As you noted, the iptables rules are applied directly to the OVS
port thus they are not effective (see [2] for details). There's work in
progress [3][4] to fix this critical issue. As far as the XenAPI driver
is concerned, there is another bug [5] tracking the lack of support for
security groups which should be addressed by the OVS firewall driver [6].

HTH,

Simon

[1] https://bugs.launchpad.net/neutron/+bug/1268955
[2] https://bugs.launchpad.net/nova/+bug/1112912
[3] https://review.openstack.org/21946
[4] https://review.openstack.org/44596
[5] https://bugs.launchpad.net/neutron/+bug/1245809
[6] https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver

On 13/03/2014 19:35, iain macdonnell wrote:
 I've been playing with an icehouse build grabbed from fedorapeople. My
 hypervisor platform is libvirt-xen, which I understand may be
 deprecated for icehouse(?) but I'm stuck with it for now, and I'm
 using VLAN networking. It almost works, but I have a problem with
 networking. In havana, the VIF gets placed on a legacy ethernet
 bridge, and a veth pair connects that to the OVS integration bridge.
 I understand that this was done to enable iptables filtering at the
 VIF. In icehouse, the VIF appears to get placed directly on the
 integration bridge - i.e. the libvirt XML includes something like:
 
 <interface type='bridge'>
   <mac address='fa:16:3e:e7:1e:c3'/>
   <source bridge='br-int'/>
   <script path='vif-bridge'/>
   <target dev='tap43b9d367-32'/>
 </interface>
 
 
 The problem is that the port on br-int does not have the VLAN tag.
 i.e. I'll see something like:
 
 Bridge br-int
     Port tap43b9d367-32
         Interface tap43b9d367-32
     Port qr-cac87198-df
         tag: 1
         Interface qr-cac87198-df
             type: internal
     Port int-br-bond0
         Interface int-br-bond0
     Port br-int
         Interface br-int
             type: internal
     Port tapb8096c18-cf
         tag: 1
         Interface tapb8096c18-cf
             type: internal
 
 
 If I manually set the tag using 'ovs-vsctl set port tap43b9d367-32
 tag=1', traffic starts flowing where it needs to.
 
 I've traced this back a bit through the agent code, and find that the
 bridge port is ignored by the agent because it does not have any
 external_ids (observed with 'ovs-vsctl list Interface'), and so the
 update process that normally sets the tag is not invoked. It appears
 that Xen is adding the port to the bridge, but nothing is updating it
 with the neutron-specific external_ids that the agent expects to
 see.
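 The skip logic described above can be illustrated with a small sketch
 (NOT the actual Neutron agent code — a simplified model of the behaviour
 iain describes, with a made-up port UUID):

```python
# The OVS agent only manages ports whose OVSDB external_ids carry a
# Neutron "iface-id". Ports added to br-int directly by the hypervisor
# have empty external_ids, so the agent never sees them and never sets
# their VLAN tag.

def ports_managed_by_agent(ports):
    """Return only the ports carrying the Neutron-specific external_ids."""
    return [p for p in ports if p.get("external_ids", {}).get("iface-id")]

ports = [
    # added by Xen's vif-bridge script: no external_ids, so it is skipped
    {"name": "tap43b9d367-32", "external_ids": {}},
    # added via the Nova/Neutron plumbing (UUID value is hypothetical)
    {"name": "tapb8096c18-cf",
     "external_ids": {"iface-id": "b8096c18-0000-4000-8000-000000000000"}},
]

managed = [p["name"] for p in ports_managed_by_agent(ports)]
# managed == ["tapb8096c18-cf"]; tap43b9d367-32 stays untagged
```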
 
 Before I dig any further, I thought I'd ask; is this stuff supposed to
 work at this point? Is it intentional that the VIF is getting placed
 directly on the integration bridge now? Might I be missing something
 in my configuration?
 
 FWIW, I've tried the ML2 plugin as well as the legacy OVS one, with
 the same result.
 
 TIA,
 
 ~iain
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] libvirt default log level

2014-01-15 Thread Simon Pasquier

+1 for your change. I've been hit by the very same issue today.

Simon

On 15/01/2014 17:56, Steven Dake wrote:

Hi,

Ken'ichi Omichi submitted a change [1] in devstack to change the default
log level to 1 for libvirt.  This results in continual spam to
/var/log/messages in my development system, even after exiting
devstack.  The spam looks like:

Jan 14 08:13:49 bigiron libvirtd: 2014-01-14 15:13:49.334+: 1480:
debug : virFileClose:90 : Closed fd 8
Jan 14 08:13:49 bigiron libvirtd: 2014-01-14 15:13:49.334+: 1480:
debug : virFileClose:90 : Closed fd 9
Jan 14 08:13:49 bigiron libvirtd: 2014-01-14 15:13:49.334+: 1480:
debug : virFileClose:90 : Closed fd 10
Jan 14 08:13:49 bigiron libvirtd: 2014-01-14 15:13:49.334+: 1480:
debug : virFileClose:90 : Closed fd 11

in a continual loop

I submitted a change [2] that sets the default to level 2 (info +
warnings + errors) which was -1'ed by Sean Dague.  His suggestion was to
take the discussion upstream so consensus around what the default should
be can be made so this doesn't end up getting changed every week.

The core mission of devstack is to provide a development environment for
developers.  The fact that it is being used in the gate seems somewhat
ancillary to its mission, and forcing a default that spams the system
logs with tons of libvirt messages seems counter to the core mission of
devstack.  As is, without modification devstack makes looking at
anything useful in my system logs impossible without grep -v libvirt and
is intrusive to developers' workstations.
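The numeric levels under discussion map onto libvirt's own configuration. A
hedged sketch of the quieter default Steven proposes, expressed directly in
libvirtd's config (libvirt log levels: 1 = debug, 2 = info, 3 = warning,
4 = error):

```
# /etc/libvirt/libvirtd.conf
# level 2 keeps info, warnings and errors but drops the per-fd debug spam
log_level = 2
```

libvirtd needs a restart after editing for the setting to take effect.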

[1] https://review.openstack.org/#/c/63992/
[2] https://review.openstack.org/#/c/66630/






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Neutron] Security groups issue when running latest libvirt?

2013-11-07 Thread Simon Pasquier

On 07/11/2013 03:18, Martinx - ジェームズ wrote:

That is true... Back to LibvirtHybridOVSBridgeDriver, Security Groups
is working again...


Thanks for the feedback Thiago. I've opened a bug on Launchpad:
https://bugs.launchpad.net/nova/+bug/1248859



On 6 November 2013 15:03, Simon Pasquier simon.pasqu...@bull.net wrote:

Answering myself as I investigated a little further and
cross-posting to openstack-dev because I'd like to get feedback from
Nova/Neutron devs.

Users running Havana should configure
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver.
This driver is still available in the Havana release although
deprecated. AFAIU, this is the only option if you want effective
security groups with KVM & OVS.

For people using the master branch of nova, sorry but security
groups are currently broken because LibvirtHybridOVSBridgeDriver is
gone ([0]). Joe Gordon asked the Neutron devs about it a few weeks ago
[1] but no answer and in another review [2], the conclusion was that
the Tempest tests passed with Neutron. However I don't see anywhere
in the tests ([3], [4]) that we check if the security rules
allow/block traffic.

It would be nice if core devs could confirm or refute.

Regards,

Simon

[0] https://review.openstack.org/#/c/49660/
[1] http://lists.openstack.org/pipermail/openstack-dev/2013-October/016886.html
[2] https://review.openstack.org/#/c/44349
[3] https://github.com/openstack/tempest/blob/master/tempest/api/network/test_security_groups.py
[4] https://github.com/openstack/tempest/blob/master/tempest/api/network/test_security_groups_negative.py

On 05/11/2013 14:57, Simon Pasquier wrote:

Hi all,

I'm struggling with security groups on Havana with Neutron and OVS
plugin (GRE tunnels). No problem to create/delete security group
rules
but even though iptables configuration is updated, traffic to my
instances is never filtered [0].

I'm running DevStack on 2 nodes (1 controller + 1 compute):
- OS: Ubuntu 12.04.3 (LTS) with the Havana cloud archive repository.
- Open vSwitch package version: 1.10.2-0ubuntu2~cloud0
- libvirt package version: 1.1.1-0ubuntu8~cloud2
- localrc, nova.conf, neutron.conf and ovs_neutron_plugin.ini files
pasted at [1] (I didn't modify any of these files after the
DevStack run)

According to [2], [3] and [4], iptables is not compatible with TAP
devices connected directly to Open vSwitch ports, which is why there
used to be the additional veth + bridge interfaces [5]. But in my
setup, this is not the case anymore as shown in [6] ('ovs-vsctl show' +
'iptables-save' output). I've also pasted the libvirt XML configuration
[7] that shows that the instance is directly connected to the
Open vSwitch.

Are the security groups supposed to work when the instance is
directly
connected to OVS? If yes, what am I doing wrong?

Regards,

[0] http://paste.openstack.org/show/50490/
[1] http://paste.openstack.org/show/50448/
[2] http://www.spinics.net/linux/fedora/libvirt-users/msg05384.html
[3] http://openvswitch.org/pipermail/discuss/2013-October/011461.html
[4] http://docs.openstack.org/havana/config-reference/content/under_the_hood_openvswitch.html
[5] http://docs.openstack.org/havana/config-reference/content/figures/7/a/a/common/figures/under-the-hood-scenario-2-ovs-compute.png
[6] http://paste.openstack.org/show/50486/
[7] http://paste.openstack.org/show/50487/



--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com

Re: [openstack-dev] [Heat] How do i implement this usecase:

2013-11-07 Thread Simon Pasquier

Hi,

The OS::Neutron::FloatingIP resource works in the same manner as the 
'neutron floating-create' command, so currently there is no way to avoid 
passing the floating network id. The AWS::EC2::EIP resource doesn't 
require it because it uses the Nova API to allocate floating IPs. In 
turn, the Nova API knows the floating network from the 
default_floating_pool parameter defined in your nova.conf file.


I guess what you are looking for is a OS::Nova::FloatingIP resource. As 
a workaround, you could leverage environments [1] and map to the EC2 EIP 
resource:


resource_registry:
  OS::Nova::FloatingIP: AWS::EC2::EIP

Simon

[1] http://docs.openstack.org/developer/heat/template_guide/environment.html
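Putting the mapping and the template together, a hedged sketch (resource
names reuse Nilakhya's; the env.yaml file name is hypothetical and is passed
at stack-create time):

```
# env.yaml
resource_registry:
  OS::Nova::FloatingIP: AWS::EC2::EIP

# template fragment: the EIP-backed resource draws from Nova's
# default_floating_pool, so no floating_network_id is needed
  MyIPAddress:
    Type: OS::Nova::FloatingIP
  MyIPAssoc:
    Type: AWS::EC2::EIPAssociation
    Properties:
      InstanceId: {Ref: BaseInstance}
      EIP: {Ref: MyIPAddress}
```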

On 07/11/2013 09:33, Nilakhya wrote:

Currently my heat template works with AWS resource type:

   DatabaseIPAddress:
     Type: AWS::EC2::EIP
   DatabaseIPAssoc:
     Type: AWS::EC2::EIPAssociation
     Properties:
       InstanceId: {Ref: BaseInstance}
       EIP: {Ref: MyIPAddress}

Now if I want to change to the OpenStack (OS) namespace with a similar
implementation:

   MyIPAddress:
     Type: OS::Neutron::FloatingIP
     Properties:
       floating_network_id: String
   MyIPAssoc:
     Type: OS::Neutron::FloatingIPAssociation
     Properties:
       floatingip_id: {Ref: MyIPAddress}

The problem is:

a) floating_network_id ( is not known ) which is a required property.
b) Even if its available / defaults to, its an overhead from AWS simplicity.


--
Consultant Engineering
Team: HPCS-Vertica
Location: Noida, India








--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Neutron] Security groups issue when running latest libvirt?

2013-11-06 Thread Simon Pasquier
Answering myself as I investigated a little further and cross-posting to 
openstack-dev because I'd like to get feedback from Nova/Neutron devs.


Users running Havana should configure 
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver.
This driver is still available in the Havana release although 
deprecated. AFAIU, this is the only option if you want effective 
security groups with KVM & OVS.
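A hedged nova.conf sketch of the workaround described above (the option is
already deprecated in Havana, so double-check the exact name against your
release):

```
[DEFAULT]
# Plug each VIF into a Linux bridge + veth pair in front of OVS so that
# the iptables-based security group rules actually take effect
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
```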


For people using the master branch of nova, sorry but security groups 
are currently broken because LibvirtHybridOVSBridgeDriver is gone ([0]). 
Joe Gordon asked the Neutron devs about it a few weeks ago [1] but no 
answer and in another review [2], the conclusion was that the Tempest 
tests passed with Neutron. However I don't see anywhere in the tests 
([3], [4]) that we check if the security rules allow/block traffic.


It would be nice if core devs could confirm or refute.

Regards,

Simon

[0] https://review.openstack.org/#/c/49660/
[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-October/016886.html

[2] https://review.openstack.org/#/c/44349
[3] 
https://github.com/openstack/tempest/blob/master/tempest/api/network/test_security_groups.py
[4] 
https://github.com/openstack/tempest/blob/master/tempest/api/network/test_security_groups_negative.py


On 05/11/2013 14:57, Simon Pasquier wrote:

Hi all,

I'm struggling with security groups on Havana with Neutron and OVS
plugin (GRE tunnels). No problem to create/delete security group rules
but even though iptables configuration is updated, traffic to my
instances is never filtered [0].

I'm running DevStack on 2 nodes (1 controller + 1 compute):
- OS: Ubuntu 12.04.3 (LTS) with the Havana cloud archive repository.
- Open vSwitch package version: 1.10.2-0ubuntu2~cloud0
- libvirt package version: 1.1.1-0ubuntu8~cloud2
- localrc, nova.conf, neutron.conf and ovs_neutron_plugin.ini files
pasted at [1] (I didn't modify any of these files after the DevStack run)

According to [2], [3] and [4], iptables is not compatible with TAP
devices connected directly to Open vSwitch ports, which is why there used
to be the additional veth + bridge interfaces [5]. But in my setup, this
is not the case anymore as shown in [6] ('ovs-vsctl show' +
'iptables-save' output). I've also pasted the libvirt XML configuration
[7] that shows that the instance is directly connected to the Open vSwitch.

Are the security groups supposed to work when the instance is directly
connected to OVS? If yes, what am I doing wrong?

Regards,

[0] http://paste.openstack.org/show/50490/
[1] http://paste.openstack.org/show/50448/
[2] http://www.spinics.net/linux/fedora/libvirt-users/msg05384.html
[3] http://openvswitch.org/pipermail/discuss/2013-October/011461.html
[4]
http://docs.openstack.org/havana/config-reference/content/under_the_hood_openvswitch.html

[5]
http://docs.openstack.org/havana/config-reference/content/figures/7/a/a/common/figures/under-the-hood-scenario-2-ovs-compute.png

[6] http://paste.openstack.org/show/50486/
[7] http://paste.openstack.org/show/50487/



--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Security groups with XenAPI

2013-10-29 Thread Simon Pasquier

Hi Bob,

Thanks for the reply.

On 28/10/2013 17:47, Bob Ball wrote:

Hi Simon,

Yes, I believe you are right.

We were already planning to discuss this very topic at the XenAPI roadmap 
session at the summit.  Hopefully someone will take on tying up this loose end 
there.

Security group support is the only thing we are aware of that is missing from 
the XenAPI neutron integration.

Thanks for raising it - a bug report would be useful to track it!


Done: https://bugs.launchpad.net/neutron/+bug/1245809



Bob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Security groups with XenAPI

2013-10-28 Thread Simon Pasquier

Hi all,

I'm trying to use the Nova XenAPI driver with Neutron (Open vSwitch with 
VLAN). After many attempts, I managed to make it work using the 
NoopFirewallDriver firewall_driver for security groups (which means, 
well, no security). With the OVSHybridIptablesFirewallDriver driver, the 
OVS agent running on the compute node won't configure the flows on the 
OVS ports.


I noticed that the XenAPI plugin [1] doesn't manage standard input which 
seems to be a blocker for running the iptables-save and iptables-restore 
commands [2]. Some work has been done in the past for nova-network [3] 
and I guess that something similar should be implemented for Neutron.


Am I right? If yes, I'd be happy to open a bug (or blueprint?).

Best regards,

[1] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/openvswitch/agent/xenapi/etc/xapi.d/plugins/netwrap
[2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_manager.py#L346

[3] https://review.openstack.org/#/c/2071

--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] question about stack updates, instance groups and wait conditions

2013-10-03 Thread Simon Pasquier

Hi Christopher,

Thanks for replying! I've been out last week hence this late email.

On 20/09/2013 21:22, Christopher Armstrong wrote:

Hello Simon! I've put responses below.

I'm kind of confused about your examples though, because you don't show
anything that depends on ComputeReady in your template. I guess I can
imagine some scenarios, but it's not very clear to me how this works.
It'd be nice to make sure the new autoscaling solution that we're
working on will support your case in a nice way, but I think we need
some more information about what you're doing. The only time this would
have an effect is if there's another resource depending on the
ComputeReady /that's also being updated at the same time/, because the
only effect that a dependency has is to wait until it is met before
performing create, update, or delete operations on other resources. So I
think it would be nice to understand your use case a little bit more
before continuing discussion.


I'm not sure I understand which template you're talking about: is it [1] 
or [2]?
In both cases, nothing depends on ComputeReady: this is the guard 
condition and it is the last resource being created. And since it 
depends on the NumberOfComputes or NumberOfWaitConditions parameter, it 
gets updated when I update one of these.


[1] http://paste.openstack.org/show/47142/
[2] http://paste.openstack.org/show/47148/

--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] question about stack updates, instance groups and wait conditions

2013-10-03 Thread Simon Pasquier

Hi Clint,

Thanks for the reply! I'll update the bug you raised with more 
information. In the meantime, I agree with you that cfn-hup is enough 
for now.


BTW, is there any bug or missing feature that would prevent me from 
replacing cfn-hup by os-collect-config?


Simon

On 20/09/2013 22:12, Clint Byrum wrote:

Excerpts from Simon Pasquier's message of 2013-09-17 05:57:58 -0700:

Hello,

I'm testing stack updates with instance group and wait conditions and
I'd like to get feedback from the Heat community.

My template declares an instance group resource with size = N and a wait
condition resource with count = N (N being passed as a parameter of the
template). Each group's instance is calling cfn-signal (with a different
id!) at the end of the user data script and my stack creates with no error.

Now when I update my stack to run N+X instances, the instance group gets
updated with size=N+X but since the wait condition is deleted and
recreated, the count value should either be updated to X or my existing
instances should re-execute cfn-signal.


That is a bug, the count should be something that can be updated in-place.

https://bugs.launchpad.net/heat/+bug/1228362

Once that is fixed, there will be an odd interaction between the groups
though. Any new instances will add to the count, but removed instances
will not decrease it. I'm not sure how to deal with that particular quirk.

That said, rolling updates will likely produce some changes to the way
updates interact with wait conditions so that we can let instances and/or
monitoring systems feed back when an instance is ready. That will also
help deal with the problem you are seeing.

In the mean time, cfn-hup is exactly what you want, and I see no problem
with re-running cfn-signal after an update to signal that the update
has applied.





--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VM Ensembles

2013-09-20 Thread Simon Pasquier

On 20/09/2013 11:06, Rodrigo Alejandre Prada wrote:

Hello experts,

Is anybody aware whether the 'VM Ensembles' feature
(https://blueprints.launchpad.net/nova/+spec/vm-ensembles) will
finally be included in the Havana release? According to the project
information it's in the Approved state but there is no milestone-related info.


VM ensembles depends on the instance grouping API [1] which didn't make 
it for Havana [2].


[1] https://wiki.openstack.org/wiki/InstanceGroupApiExtension
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2013-September/014732.html


Cheers,



Thanks in advance for your feedback.

Cheers,
Rodrigo A.






--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] question about stack updates, instance groups and wait conditions

2013-09-17 Thread Simon Pasquier

Hello,

I'm testing stack updates with instance group and wait conditions and 
I'd like to get feedback from the Heat community.


My template declares an instance group resource with size = N and a wait 
condition resource with count = N (N being passed as a parameter of the 
template). Each group's instance is calling cfn-signal (with a different 
id!) at the end of the user data script and my stack creates with no error.


Now when I update my stack to run N+X instances, the instance group gets 
updated with size=N+X but since the wait condition is deleted and 
recreated, the count value should either be updated to X or my existing 
instances should re-execute cfn-signal.


To cope with this situation, I've found 2 options:
1/ declare 2 parameters in my template: nb of instances (N for creation, 
N+X for update) and count of wait conditions (N for creation, X for 
update). See [1] for the details.
2/ declare only one parameter in my template (the size of the group) and 
leverage cfn-hup on the existing instances to re-execute cfn-signal. See 
[2] for the details.
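Option 1 can be sketched roughly as follows (CFN-style resources; the
parameter names and the ComputeGroup resource it depends on are hypothetical
stand-ins for the actual template in [1]):

```
Parameters:
  GroupSize:
    Type: Number    # N on create, N+X on update
  WaitCount:
    Type: Number    # N on create, X on update
Resources:
  ComputeWaitHandle:
    Type: AWS::CloudFormation::WaitConditionHandle
  ComputeReady:
    Type: AWS::CloudFormation::WaitCondition
    DependsOn: ComputeGroup
    Properties:
      Handle: {Ref: ComputeWaitHandle}
      Count: {Ref: WaitCount}
      Timeout: '600'
```

ComputeGroup stands for the instance group resource whose size references
GroupSize; each member calls cfn-signal against ComputeWaitHandle.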


Solution 1 is not really user-friendly, and I find solution 2 a bit 
complicated. Does anybody know a simpler way to achieve the 
same result?


Regards,

[1] http://paste.openstack.org/show/47142/
[2] http://paste.openstack.org/show/47148/
--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Confused about GroupAntiAffinityFilter and GroupAffinityFilter

2013-09-06 Thread Simon Pasquier

Gary (or others), did you have some time to look at my issue?
FYI, I opened a bug [1] on Launchpad. I'll update it with the outcome of 
this discussion.

Cheers,
Simon

[1] https://bugs.launchpad.net/nova/+bug/1218878

On 03/09/2013 15:54, Simon Pasquier wrote:

I made a copy/paste mistake; see the correction inline.

On 03/09/2013 12:34, Simon Pasquier wrote:

Hello,

Thanks for the reply.

First of all, do you agree that the current documentation for these
filters is inaccurate?

My test environment has 2 compute nodes: compute1 and compute3. First, I
launch 1 instance (not being tied to any group) on each node:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute1 vm-compute1-nogroup
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute3 vm-compute3-nogroup

So far so good, everything's active:
$ nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+



Then I try to launch one instance in group 'foo' but it fails:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute3 vm-compute3-nogroup


The command is:

$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --hint group=foo vm1-foo


$ nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
| 743fa564-f38f-4f44-9913-d8adcae955a0 | vm1-foo             | ERROR  | None       | NOSTATE     |                  |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+


I've pasted the scheduler logs [1] and my nova.conf file [2]. As you
will see, the log message is there but it looks like group_hosts() [3]
is returning all my hosts instead of only the ones that run instances
from the group.

Concerning GroupAffinityFilter, I understood that it couldn't work
simultaneously with GroupAntiAffinityFilter but since I missed the
multiple schedulers, I couldn't figure out how it would be useful. So I
got it now.

Best regards,

Simon

[1] http://paste.openstack.org/show/45672/
[2] http://paste.openstack.org/show/45671/
[3]
https://github.com/openstack/nova/blob/master/nova/scheduler/driver.py#L137


On 03/09/2013 10:49, Gary Kotton wrote:

Hi,
Hopefully I will be able to address your questions. First let's start with
the group anti-affinity. This was added towards the end of the Grizzly
release cycle as a scheduling hint. At the last summit we sat and agreed
on a more formal approach to deal with this and we proposed and
developed
https://blueprints.launchpad.net/openstack/?searchtext=instance-group-api-extension (https://wiki.openstack.org/wiki/GroupApiExtension).
At the moment the following are still in review and I hope that we will
make the feature freeze deadline:
Api support:
https://review.openstack.org/#/c/30028/

Scheduler support:
https://review.openstack.org/#/c/33956/

Client support:
https://review.openstack.org/#/c/32904/

In order to make use of the above you need to add
GroupAntiAffinityFilter
to the filters that will be active (this is not one of the default
filters). When you deploy the first instance of a group you need to
specify that it is part of the group. This information is used for
additional VM's that are being deployed.
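A hedged sketch of that setup (the filter list is illustrative — keep
whatever default filters your release ships and append the anti-affinity
filter):

```
# nova.conf on the scheduler host
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,GroupAntiAffinityFilter
```

Each member of the group is then booted with the same hint, e.g.
`nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --hint group=foo vm1-foo`.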

Can you please provide some extra details so that I can help you debug
the
issues that you have encountered (I did not encounter the problems that
you have described):
1. Please provide the commands that you used with the deploying of the
instance
2. Please provide the nova configuration file
3. Can you please look at the debug traces and see if you see the log
message on line 97
(https://review.openstack.org/#/c/21070/8/nova/scheduler/filters/affinity_filter.py)

Re: [openstack-dev] Confused about GroupAntiAffinityFilter and GroupAffinityFilter

2013-09-06 Thread Simon Pasquier

Thanks for the answer.
I already posted the links in my previous email but here they are again:
* nova.conf = http://paste.openstack.org/show/45671/
* scheduler logs = http://paste.openstack.org/show/45672/

Just to re-iterate, my setup consists of 2 compute nodes which already 
run instances not in any group. You'll see in the logs that the 
group_hosts list passed to the filter contains the 2 nodes while *no* 
instance has been booted in that group yet.


Cheers,

Simon

On 06/09/2013 14:18, Gary Kotton wrote:

Hi,
Sorry for the delayed response (it is new years my side of the world and
have some family obligations).
Would it be possible that you please provide the nova configuration file
(I would like to see if you have the group anti affinity filter in your
filter list), and if this exists to at least see a trace that the filter
has been invoked.
I have tested this with the patches that I mentioned below and it works. I
will invest some time on this on Sunday to make sure that it is all
working with the latest code.
Thanks
Gary

On 9/6/13 10:31 AM, Simon Pasquier simon.pasqu...@bull.net wrote:


Gary (or others), did you have some time to look at my issue?
FYI, I opened a bug [1] on Launchpad. I'll update it with the outcome of
this discussion.
Cheers,
Simon

[1] https://bugs.launchpad.net/nova/+bug/1218878

On 03/09/2013 15:54, Simon Pasquier wrote:

I made a copy/paste mistake; see the correction inline.

On 03/09/2013 12:34, Simon Pasquier wrote:

Hello,

Thanks for the reply.

First of all, do you agree that the current documentation for these
filters is inaccurate?

My test environment has 2 compute nodes: compute1 and compute3. First,
I
launch 1 instance (not being tied to any group) on each node:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute1 vm-compute1-nogroup
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute3 vm-compute3-nogroup

So far so good, everything's active:
$ nova list

+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+



Then I try to launch one instance in group 'foo' but it fails:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute3 vm-compute3-nogroup


The command is:

$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --hint group=foo vm1-foo


$ nova list

+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
| 743fa564-f38f-4f44-9913-d8adcae955a0 | vm1-foo             | ERROR  | None       | NOSTATE     |                  |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+



I've pasted the scheduler logs [1] and my nova.conf file [2]. As you
will see, the log message is there but it looks like group_hosts() [3]
is returning all my hosts instead of only the ones that run instances
from the group.

Concerning GroupAffinityFilter, I understood that it couldn't work
simultaneously with GroupAntiAffinityFilter but since I missed the
multiple schedulers, I couldn't figure out how it would be useful. So I
got it now.

Best regards,

Simon

[1] http://paste.openstack.org/show/45672/
[2] http://paste.openstack.org/show/45671/
[3]

https://github.com/openstack/nova/blob/master/nova/scheduler/driver.py#L137


On 03/09/2013 10:49, Gary Kotton wrote:

Hi,
Hopefully I will be able to address your questions. First let's start with
the group anti-affinity. This was added towards the end of the Grizzly
release cycle as a scheduling hint. At the last summit we sat and
agreed
on a more formal approach to deal with this and we proposed and
developed

https://blueprints.launchpad.net/openstack/?searchtext=instance-group

[openstack-dev] Confused about GroupAntiAffinityFilter and GroupAffinityFilter

2013-09-03 Thread Simon Pasquier

Reposting to openstack-dev as I got no answer on the general mailing list.


-------- Original message --------
Subject: [Openstack] Confused about GroupAntiAffinityFilter and GroupAffinityFilter
Date: Mon, 2 Sep 2013 11:19:58 +0200
From: Simon Pasquier simon.pasqu...@bull.net
Organisation: Bull SAS
To: openst...@lists.openstack.org

Hello,

I tried to play with GroupAntiAffinityFilter and GroupAffinityFilter
filters but it looks like the documentation is misleading [1]. Looking
more precisely at the commits that introduced these filters [2][3], my
assumption is that to use these filters, one would boot a first instance
with '--hint group=foo' and the scheduler would update the
instance_system_metadata table with {key: 'group', value: 'foo'}. Then when
starting other instances with the same hint option, the scheduler would
filter the candidate hosts by querying the instance_system_metadata table.
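The mechanism described above can be sketched as follows (a minimal,
illustrative model; the helper names group_hosts() and
anti_affinity_passes() are mine, not nova's actual code):

```python
# Illustrative model of group anti-affinity scheduling (NOT nova's code):
# a host passes only if it is not already running an instance of the group.

def group_hosts(instances, group):
    """Hosts that already run at least one instance tagged with `group`."""
    return {inst["host"] for inst in instances
            if inst.get("metadata", {}).get("group") == group}

def anti_affinity_passes(host, instances, hint):
    """Would `host` be kept for a boot carrying `--hint group=...`?"""
    group = hint.get("group")
    if group is None:
        return True  # no group hint: every host passes
    return host not in group_hosts(instances, group)

instances = [
    {"host": "compute1", "metadata": {}},                # not in any group
    {"host": "compute3", "metadata": {"group": "foo"}},  # member of 'foo'
]
print(anti_affinity_passes("compute1", instances, {"group": "foo"}))  # True
print(anti_affinity_passes("compute3", instances, {"group": "foo"}))  # False
```

Under this model, an instance that belongs to no group should never block
a host from being selected.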

Still this doesn't work for me. In my tests with
GroupAntiAffinityFilter, I have 3 compute nodes, each running one
instance not in any group. Then when I launch a VM specifying a group
hint, the scheduler fails to find a valid host because the
GroupAntiAffinityFilter filter returns 0 hosts.

Could someone provide some guidance on how to use this filter?

Regards,

[1]
http://docs.openstack.org/trunk/openstack-compute/admin/content/scheduler-filters.html#groupaffinityfilter
[2] https://review.openstack.org/#/c/21070/
[3] https://review.openstack.org/#/c/35788/

--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Confused about GroupAntiAffinityFilter and GroupAffinityFilter

2013-09-03 Thread Simon Pasquier

Hello,

Thanks for the reply.

First of all, do you agree that the current documentation for these 
filters is inaccurate?


My test environment has 2 compute nodes: compute1 and compute3. First, I 
launch 1 instance (not being tied to any group) on each node:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name 
local --availability-zone nova:compute1 vm-compute1-nogroup
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name 
local --availability-zone nova:compute3 vm-compute3-nogroup


So far so good, everything's active:
$ nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+

Then I try to launch one instance in group 'foo' but it fails:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name 
local --availability-zone nova:compute3 vm-compute3-nogroup

$ nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
| 743fa564-f38f-4f44-9913-d8adcae955a0 | vm1-foo             | ERROR  | None       | NOSTATE     |                  |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+

I've pasted the scheduler logs [1] and my nova.conf file [2]. As you 
will see, the log message is there but it looks like group_hosts() [3] 
is returning all my hosts instead of only the ones that run instances 
from the group.
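A toy illustration of that suspicion (not nova's code): if group_hosts()
wrongly returns every host, an anti-affinity filter leaves no candidate at
all, which matches the ERROR state above.

```python
# Toy model of the suspected bug (NOT nova's code): anti-affinity keeps
# only hosts that are not already running an instance of the group.

def filter_hosts(candidates, hosts_in_group):
    return [h for h in candidates if h not in hosts_in_group]

candidates = ["compute1", "compute3"]

# Expected: only compute3 runs a 'foo' instance, so compute1 remains.
print(filter_hosts(candidates, {"compute3"}))              # ['compute1']

# Suspected bug: group_hosts() returns every host, so nothing remains.
print(filter_hosts(candidates, {"compute1", "compute3"}))  # []
```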


Concerning GroupAffinityFilter, I understood that it couldn't work
simultaneously with GroupAntiAffinityFilter, but since I had missed the
multiple scheduler policies work, I couldn't figure out how it would be
useful. I get it now.


Best regards,

Simon

[1] http://paste.openstack.org/show/45672/
[2] http://paste.openstack.org/show/45671/
[3] https://github.com/openstack/nova/blob/master/nova/scheduler/driver.py#L137


On 03/09/2013 10:49, Gary Kotton wrote:

Hi,
Hopefully I will be able to address your questions. First let's start with
the group anti-affinity. This was added towards the end of the Grizzly
release cycle as a scheduling hint. At the last summit we sat and agreed
on a more formal approach to deal with this, and we proposed and developed
https://blueprints.launchpad.net/openstack/?searchtext=instance-group-api-extension
(https://wiki.openstack.org/wiki/GroupApiExtension).
At the moment the following are still in review and I hope that we will
make the feature freeze deadline:
API support:
https://review.openstack.org/#/c/30028/

Scheduler support:
https://review.openstack.org/#/c/33956/

Client support:
https://review.openstack.org/#/c/32904/

In order to make use of the above you need to add GroupAntiAffinityFilter
to the filters that will be active (this is not one of the default
filters). When you deploy the first instance of a group you need to
specify that it is part of the group. This information is used for
additional VMs that are being deployed.
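Concretely, those two prerequisites might look like this (an illustrative
Grizzly-era fragment; check the filter list against the deployed release
rather than copying it verbatim):

```ini
# /etc/nova/nova.conf (fragment, illustrative)
[DEFAULT]
# GroupAntiAffinityFilter is not among the default filters, so it has to
# be listed explicitly:
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,GroupAntiAffinityFilter
```

Every member of the group, including the first instance, is then booted
with the hint, e.g. nova boot ... --hint group=foo vm1-foo.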

Can you please provide some extra details so that I can help you debug the
issues that you have encountered (I did not encounter the problems that
you have described):
1. Please provide the commands that you used when deploying the instance
2. Please provide the nova configuration file
3. Can you please look at the debug traces and see if you see the log
message on line 97
(https://review.openstack.org/#/c/21070/8/nova/scheduler/filters/affinity_filter.py)

Now regarding the AffinityFilter. At this stage this does not work with
the AntiAffinity filter. We were banking on this being used with the
multiple scheduler policies (https://review.openstack.org/#/c/37407/)

Thanks
Gary



On 9/3/13 10:16 AM, Simon Pasquier simon.pasqu...@bull.net wrote:


Reposting to openstack-dev as I got no answer on the general mailing list.


-------- Original message --------
Subject: [Openstack] Confused about GroupAntiAffinityFilter

Re: [openstack-dev] Confused about GroupAntiAffinityFilter and GroupAffinityFilter

2013-09-03 Thread Simon Pasquier

I made a wrong copy/paste, see the correction inline.

On 03/09/2013 12:34, Simon Pasquier wrote:

Hello,

Thanks for the reply.

First of all, do you agree that the current documentation for these
filters is inaccurate?

My test environment has 2 compute nodes: compute1 and compute3. First, I
launch 1 instance (not being tied to any group) on each node:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute1 vm-compute1-nogroup
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute3 vm-compute3-nogroup

So far so good, everything's active:
$ nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+


Then I try to launch one instance in group 'foo' but it fails:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute3 vm-compute3-nogroup


The command is:

$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name 
local --hint group=foo vm1-foo



$ nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
| 743fa564-f38f-4f44-9913-d8adcae955a0 | vm1-foo             | ERROR  | None       | NOSTATE     |                  |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+


I've pasted the scheduler logs [1] and my nova.conf file [2]. As you
will see, the log message is there but it looks like group_hosts() [3]
is returning all my hosts instead of only the ones that run instances
from the group.

Concerning GroupAffinityFilter, I understood that it couldn't work
simultaneously with GroupAntiAffinityFilter, but since I had missed the
multiple scheduler policies work, I couldn't figure out how it would be
useful. I get it now.

Best regards,

Simon

[1] http://paste.openstack.org/show/45672/
[2] http://paste.openstack.org/show/45671/
[3] https://github.com/openstack/nova/blob/master/nova/scheduler/driver.py#L137

On 03/09/2013 10:49, Gary Kotton wrote:

Hi,
Hopefully I will be able to address your questions. First let's start with
the group anti-affinity. This was added towards the end of the Grizzly
release cycle as a scheduling hint. At the last summit we sat and agreed
on a more formal approach to deal with this, and we proposed and developed
https://blueprints.launchpad.net/openstack/?searchtext=instance-group-api-extension
(https://wiki.openstack.org/wiki/GroupApiExtension).
At the moment the following are still in review and I hope that we will
make the feature freeze deadline:
API support:
https://review.openstack.org/#/c/30028/

Scheduler support:
https://review.openstack.org/#/c/33956/

Client support:
https://review.openstack.org/#/c/32904/

In order to make use of the above you need to add GroupAntiAffinityFilter
to the filters that will be active (this is not one of the default
filters). When you deploy the first instance of a group you need to
specify that it is part of the group. This information is used for
additional VMs that are being deployed.

Can you please provide some extra details so that I can help you debug the
issues that you have encountered (I did not encounter the problems that
you have described):
1. Please provide the commands that you used when deploying the instance
2. Please provide the nova configuration file
3. Can you please look at the debug traces and see if you see the log
message on line 97
(https://review.openstack.org/#/c/21070/8/nova/scheduler/filters/affinity_filter.py)

Now regarding the AffinityFilter. At this stage this does not work with
the AntiAffinity filter. We were banking on this being used with the
multiple scheduler policies (https://review.openstack.org/#/c/37407/)

Thanks
Gary



On 9/3/13 10:16 AM, Simon Pasquier