Re: [openstack-dev] [ironic] Summit session planning

2016-09-27 Thread Loo, Ruby
Also, as part of this reminder: if you add a proposal to the etherpad, please
put your name/nick next to it so we know who added it and who is going to
lead it. Bruno, I added your name to #12 & 13 :)

Thanks,
--ruby

From: Jim Rollenhagen 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, September 27, 2016 at 9:47 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [ironic] Summit session planning

Hey friends,

Just a reminder to add your summit session proposals to our etherpad:
https://etherpad.openstack.org/p/ironic-ocata-summit

Unless I hear of an earlier deadline from summit planning folks, I'd like
to have these locked in by October 14 (as I'm out the week before summit).
This means we should try to get them all up this week, start talking about
what we do and don't want in next Monday's meeting, iterate, and make final
decisions in the following meeting (October 10).

Please get things in and start thinking about what we want to accept
by Monday. Thanks!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Mirror issues with Intel NFV CI?

2016-09-27 Thread Matt Riedemann
I'm seeing a pretty high failure rate with some of the Intel NFV CI jobs 
today; the pattern looks like a PyPI mirror issue while getting packages to 
set up tempest:


http://intel-openstack-ci-logs.ovh/94/375894/2/check/tempest-dsvm-intel-nfv-xenial/a0bffb3/logs/devstacklog.txt.gz

2016-09-27 02:11:52.127 | Collecting hacking<0.12,>=0.11.0 (from -r 
/opt/stack/new/tempest/test-requirements.txt (line 4))
2016-09-27 02:12:07.144 |   Retrying (Retry(total=4, connect=None, 
read=None, redirect=None)) after connection broken by 
'ConnectTimeoutError(<... object at 0x7faca5b7fd10>, 'Connection to proxy.ir.intel.com timed out. 
(connect timeout=15)')': /simple/hacking/
2016-09-27 02:12:22.654 |   Retrying (Retry(total=3, connect=None, 
read=None, redirect=None)) after connection broken by 
'ConnectTimeoutError(<... object at 0x7faca5b7fe10>, 'Connection to proxy.ir.intel.com timed out. 
(connect timeout=15)')': /simple/hacking/
2016-09-27 02:12:38.657 |   Retrying (Retry(total=2, connect=None, 
read=None, redirect=None)) after connection broken by 
'ConnectTimeoutError(<... object at 0x7faca5b7ff10>, 'Connection to proxy.ir.intel.com timed out. 
(connect timeout=15)')': /simple/hacking/
2016-09-27 02:12:55.674 |   Retrying (Retry(total=1, connect=None, 
read=None, redirect=None)) after connection broken by 
'ConnectTimeoutError(<... object at 0x7faca59e9050>, 'Connection to proxy.ir.intel.com timed out. 
(connect timeout=15)')': /simple/hacking/
2016-09-27 02:13:14.682 |   Retrying (Retry(total=0, connect=None, 
read=None, redirect=None)) after connection broken by 
'ConnectTimeoutError(<... object at 0x7faca59e9150>, 'Connection to proxy.ir.intel.com timed out. 
(connect timeout=15)')': /simple/hacking/
2016-09-27 02:13:29.687 |   Could not find a version that satisfies the 
requirement hacking<0.12,>=0.11.0 (from -r 
/opt/stack/new/tempest/test-requirements.txt (line 4)) (from versions: )
2016-09-27 02:13:29.687 | No matching distribution found for 
hacking<0.12,>=0.11.0 (from -r 
/opt/stack/new/tempest/test-requirements.txt (line 4))


Is this a known issue that the CI maintainers are fixing?
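
In case it helps triage, here is a quick illustrative scan of a
devstacklog.txt for this pattern (a sketch, not part of any CI tooling; it
assumes each retry message sits on one unwrapped line in the raw log):

import re
import sys
from collections import Counter

# Matches pip's retry message and captures the index path,
# e.g. '/simple/hacking/'.
RETRY = re.compile(r"ConnectTimeoutError.*timed out.*': (/simple/[^/]+/)")

def count_timeouts(path):
    hits = Counter()
    with open(path) as f:
        for line in f:
            m = RETRY.search(line)
            if m:
                hits[m.group(1)] += 1
    return hits

if __name__ == '__main__':
    for pkg, n in count_timeouts(sys.argv[1]).most_common():
        print('%5d  %s' % (n, pkg))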

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] base node payload for notification

2016-09-27 Thread Mario Villaplana
After some IRC discussion
(http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2016-09-27.log.html#t2016-09-27T13:31:42),
I'm +1 to this base payload, too.

I vote we do this, and we can always update later if operators chime
in with additional use cases that should be put in every node
notification. It would be best to keep this simple for now rather than
adding more complexity to something that many notifications will be
using.

Thanks for the email, Yuriy.

Mario

On Tue, Sep 27, 2016 at 7:00 AM, Yuriy Zveryanskyy
 wrote:
> Hi,
> there is a discussion starting in the comments on
> https://review.openstack.org/#/c/321865/
> I agree with Ruby Loo's proposal about a base node payload.
>
> Currently we have these node fields exposed via the API (in alphabetical
> order):
>
> "chassis_uuid", "clean_step", "console_enabled", "created_at",  "driver",
> "driver_info", "driver_internal_info", "extra", "inspection_finished_at",
> "inspection_started_at", "instance_info", "instance_uuid", "last_error",
> "maintenance", "maintenance_reason", "name", "network_interface",
> "power_state", "properties", "provision_state", "provision_updated_at",
> "raid_config", "reservation", "resource_class", "target_power_state",
> "target_provision_state", "target_raid_config", "updated_at", "uuid"
>
> In my opinion these fields should be excluded from the base node payload:
>
> "chassis_uuid": it does not represent node state and does not change
> often; an additional DB SELECT would be needed for the base payload
> "driver_info": it does not represent node state; it contains only driver
> settings and secrets like IPMI passwords
> "driver_internal_info": it's driver internal info
> "instance_info": a configdrive blob can be saved inside
> "raid_config": it's hardware related
> "reservation": it's not an independently changed field, only a lock flag
> "target_raid_config": it's hardware related
>
> And the resulting base payload field list (for version 1.0):
>
> "clean_step", "console_enabled", "created_at",  "driver", "extra",
> "inspection_finished_at", "inspection_started_at", "instance_uuid",
> "last_error", "maintenance", "maintenance_reason", "name",
> "network_interface", "power_state", "properties", "provision_state",
> "provision_updated_at", "resource_class", "target_power_state",
> "target_provision_state", "updated_at", "uuid"
>
> Any other suggestions are welcome.
>
> Yuriy Zveryanskyy
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [infra] Request for old branches removal

2016-09-27 Thread Joshua Hesketh
Hey Emilien,

Sorry I missed those, I didn't take my script back far enough. I've tidied
up as far back as diablo :-).

I could only see the fix_image_version branch on puppet-tempest, but I've
tidied that up anyway.

Let me know if I've missed anything else.

Cheers,
Josh

On Tue, Sep 27, 2016 at 1:29 PM, Emilien Macchi  wrote:

> Hi Josh,
>
> Thanks a lot for your help!
> I've noticed some of them still have stale branches, example:
> https://github.com/openstack/puppet-keystone/branches with essex and
> folsom
> or https://github.com/openstack/puppet-nova/branches with diablo!
> (yeah very old :-))
> also puppet-cinder, puppet-glance, puppet-horizon, puppet-neutron,
> puppet-swift,
> puppet-tempest (remove the fix_image_version branch). So in the end we
> keep only stable/liberty and stable/mitaka.
>
> Could we also get rid of them?
>
> Thanks again,
>
> On Tue, Sep 27, 2016 at 8:05 AM, Joshua Hesketh
>  wrote:
> > Hi Emilien,
> >
> > I've removed all of the old branches on the specified repos and created
> tags
> > in their place. Let me know if there are any problems.
> >
> > Cheers,
> > Josh
> >
> > On Mon, Sep 26, 2016 at 3:51 PM, Emilien Macchi 
> wrote:
> >>
> >> Greetings Infra,
> >>
> >> This is an official request to remove old branches for Puppet OpenStack
> >> modules:
> >>
> >> puppet-ceilometer
> >> puppet-cinder
> >> puppet-glance
> >> puppet-heat
> >> puppet-horizon
> >> puppet-keystone
> >> puppet-neutron
> >> puppet-nova
> >> puppet-openstack_extras
> >> puppet-openstacklib
> >> puppet-swift
> >> puppet-tempest
> >>
> >> Please remove all branches before Kilo (Kilo was already removed).
> >>
> >> Thanks,
> >> --
> >> Emilien Macchi
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Summit session planning

2016-09-27 Thread Bruno Cornec

Hello Jim and ironicers,

Jim Rollenhagen said on Tue, Sep 27, 2016 at 09:47:03AM -0400:

Just a reminder to add your summit session proposals to our etherpad:
https://etherpad.openstack.org/p/ironic-ocata-summit


I have added the 2 topics I've been working on for some time and for 
which I'd like to get help during the Summit.
One is Redfish support in Ironic (I'll have a presentation on it: 
https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16231/empowering-ironic-with-redfish-support);
the other is around Ironic standalone (and bifrost), which I have been 
using for testing up to now.


I'm still somewhat of an Ironic beginner and not a good Python hacker, but 
I have colleagues who are, and I'd really like to leave the Summit with a 
good understanding of what we need to do and how, and more details on the 
internals of Ironic, so we can do it properly and it gets accepted easily.


I'm willing to take the extra time needed to meet with any Ironic community 
member who is interested and willing to help us, or at least to help us 
understand the ecosystem better. I should normally be there from late Monday.


Thanks in advance for your support,
Best regards,
Bruno.
--
Open Source Profession, WW Linux Community Lead http://www.hpintelco.net
HPE EMEA EG FLOSS Technology Strategist http://www.hpe.com/engage/opensource
FLOSS projects:http://mondorescue.org http://project-builder.org
Musique ancienne?   http://www.musique-ancienne.org  http://www.medieval.org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-de] [DNSaaS] [designate] jenkins is failing for all latest patches

2016-09-27 Thread Hayes, Graham
On 27/09/2016 08:42, Kelam, Koteswara Rao wrote:
> Jenkins for openstack/designate is failing for latest patches. Py27,
> py34 and py35 are failing continuously.
>
> gate-designate-python27-db-ubuntu-xenial
> FAILURE in 3m 58s
>
> gate-designate-python34-db
> FAILURE in 5m 16s
>
> gate-designate-python35-db
> FAILURE in 3m 41s
>
>
>
> Recent patches:
>
> https://review.openstack.org/#/c/376436/
> https://review.openstack.org/#/c/377050/
> https://review.openstack.org/#/c/376170/
>
> etc
>
>
>
> Regards,
>
> Koteswara
>
>
>

Yeah, I spotted that.

Seems to be a combination of pecan 1.2 (the py27 tests) and dnspython 1.14
(the py3x tests).

Both hit over the weekend.

I am looking at a fix, but the difference in behavior between py27 and
py3(4|5) for dnspython 1.14 is an issue.

I proposed https://review.openstack.org/#/c/377702/ for the pecan issue, 
but it might take a while for the dnspython fixes.
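
If the pecan regression can't be fixed quickly, the usual stopgap is a
version exclusion in requirements. This is an illustrative line only; the
actual cap chosen may differ:

pecan>=1.0.0,!=1.2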

- Graham


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tosca-parser] [heat-translator] [heat] [tacker] [opnfv] Heat-Translator 0.6.0 release

2016-09-27 Thread Sahdev P Zala
Hello Everyone, 

On behalf of the Heat Translator team, I am pleased to announce the 0.6.0 
PyPI release of heat-translator which can be downloaded from 
https://pypi.python.org/pypi/heat-translator

This release includes several enhancements:
- Python 3.5 support
- Auto deployment of translated templates, with proper authentication with 
Keystone and using the Heat client to create the stack. Nova and Glance 
clients, instead of direct REST calls, are now used to query available 
flavors and images in the user environment. This was a needed update to the 
initial deployment support, where OS_* environment variables were used to 
determine deployment instead of Keystone auth.
- Translation support for Senlin cluster and auto scaling policy resources
- Translation support for AutoScalingGroup, ScalingPolicy and Aodh resources
- Support for TOSCA get_operation_output and concat function translation
- New CLI option to provide a desired stack name when auto-deploying a 
translated template
- Handling of Ansible roles used with TOSCA artifacts
- Refactoring of the shell program to use argparse
- Requirement updates
- Documentation updates, bug fixes, etc.

Thanks! 

Regards, 
Sahdev Zala


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [infra] Request for old branches removal

2016-09-27 Thread Emilien Macchi
On Tue, Sep 27, 2016 at 9:49 AM, Joshua Hesketh
 wrote:
> Hey Emilien,
>
> Sorry I missed those, I didn't take my script back far enough. I've tidied
> up as far back as diablo :-).

yeah, old stuff :)

> I could only see the fix_image_version branch on puppet-tempest, but I've
> tidied that up anyway.
>
> Let me know if I've missed anything else.

It's all perfect now, thanks a lot!

> Cheers,
> Josh
>
> On Tue, Sep 27, 2016 at 1:29 PM, Emilien Macchi  wrote:
>>
>> Hi Josh,
>>
>> Thanks a lot for your help!
>> I've noticed some of them still have stale branches, example:
>> https://github.com/openstack/puppet-keystone/branches with essex and
>> folsom
>> or https://github.com/openstack/puppet-nova/branches with diablo!
>> (yeah very old :-))
>> also puppet-cinder, puppet-glance, puppet-horizon, puppet-neutron,
>> puppet-swift,
>> puppet-tempest (remove the fix_image_version branch). So in the end we
>> keep only stable/liberty and stable/mitaka.
>>
>> Could we also get rid of them?
>>
>> Thanks again,
>>
>> On Tue, Sep 27, 2016 at 8:05 AM, Joshua Hesketh
>>  wrote:
>> > Hi Emilien,
>> >
>> > I've removed all of the old branches on the specified repos and created
>> > tags
>> > in their place. Let me know if there are any problems.
>> >
>> > Cheers,
>> > Josh
>> >
>> > On Mon, Sep 26, 2016 at 3:51 PM, Emilien Macchi 
>> > wrote:
>> >>
>> >> Greetings Infra,
>> >>
>> >> This is an official request to remove old branches for Puppet OpenStack
>> >> modules:
>> >>
>> >> puppet-ceilometer
>> >> puppet-cinder
>> >> puppet-glance
>> >> puppet-heat
>> >> puppet-horizon
>> >> puppet-keystone
>> >> puppet-neutron
>> >> puppet-nova
>> >> puppet-openstack_extras
>> >> puppet-openstacklib
>> >> puppet-swift
>> >> puppet-tempest
>> >>
>> >> Please remove all branches before Kilo (Kilo was already removed).
>> >>
>> >> Thanks,
>> >> --
>> >> Emilien Macchi
>> >>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>>
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] base node payload for notification

2016-09-27 Thread Jim Rollenhagen
On Tue, Sep 27, 2016 at 9:57 AM, Loo, Ruby  wrote:
> Hi Yuriy,
>
>
>
> Thanks for bringing this up. I'm good with your list, with the exception of
> driver_info and instance_info. I'm on the fence with these two. If we assume
> that any secrets will be bleep'd out (configdrives won't be there), is there
> other information there that might be useful? I'm not totally sure what
> notifications will be used for; it is somewhat hard to assume.
>
>
>
> I suppose we could look at it this way, since you and Mario are fine without
> those two. If no one speaks up wanting them, then we'll do as you propose,
> and not expose those two fields.

I'm also on the fence. There are a couple of use cases that I think could use this:

1) Building a thing that takes action on notifications - for example,
on a deploy
failure, analyze the error and do a thing (e.g. if BMC is unresponsive, perform
a cold reset). However, this tool could have access to read this data to
work around its absence.

2) Searching things with searchlight - the obvious case for driver_info is "find
all nodes with BMCs on the 10.100.0.0/24 network" or similar things.
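
For the searchlight case, the subnet check itself is cheap; here is a sketch
with just the stdlib (it assumes dict-shaped node payloads carrying a
driver_info['ipmi_address'] entry, which is an assumption, not a given):

import ipaddress

def nodes_with_bmc_in(nodes, cidr='10.100.0.0/24'):
    # Yield nodes whose BMC address falls inside the given network.
    # Assumes each node is a dict whose 'driver_info' dict may carry
    # an 'ipmi_address' holding a literal IP.
    net = ipaddress.ip_network(cidr)
    for node in nodes:
        addr = node.get('driver_info', {}).get('ipmi_address')
        if not addr:
            continue
        try:
            if ipaddress.ip_address(addr) in net:
                yield node
        except ValueError:
            pass  # hostname rather than a literal IP address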

Now that I write these out, seems like driver_info would be more useful than
instance_info, because the latter is more ephemeral.

It is easier to add a thing to notifications than to remove it
(deprecation periods
and so on). So I lean toward not including them now, and adding them if we find
the need.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Summit session planning

2016-09-27 Thread Jim Rollenhagen
Hey friends,

Just a reminder to add your summit session proposals to our etherpad:
https://etherpad.openstack.org/p/ironic-ocata-summit

Unless I hear of an earlier deadline from summit planning folks, I'd like
to have these locked in by October 14 (as I'm out the week before summit).
This means we should try to get them all up this week, start talking about
what we do and don't want in next Monday's meeting, iterate, and make final
decisions in the following meeting (October 10).

Please get things in and start thinking about what we want to accept
by Monday. Thanks!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] base node payload for notification

2016-09-27 Thread Loo, Ruby
Hi Yuriy,

Thanks for bringing this up. I'm good with your list, with the exception of 
driver_info and instance_info. I'm on the fence with these two. If we assume 
that any secrets will be bleep'd out (configdrives won't be there), is there 
other information there that might be useful? I'm not totally sure what 
notifications will be used for; it is somewhat hard to assume.

I suppose we could look at it this way, since you and Mario are fine without 
those two. If no one speaks up wanting them, then we'll do as you propose, and 
not expose those two fields.

--ruby


From: Yuriy Zveryanskyy 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, September 27, 2016 at 7:00 AM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [ironic] base node payload for notification

Hi,
there is a discussion starting in the comments on 
https://review.openstack.org/#/c/321865/
I agree with Ruby Loo's proposal about a base node payload.
Currently we have these node fields exposed via the API (in alphabetical order):

"chassis_uuid", "clean_step", "console_enabled", "created_at",  "driver",
"driver_info", "driver_internal_info", "extra", "inspection_finished_at",
"inspection_started_at", "instance_info", "instance_uuid", "last_error",
"maintenance", "maintenance_reason", "name", "network_interface",
"power_state", "properties", "provision_state", "provision_updated_at",
"raid_config", "reservation", "resource_class", "target_power_state",
"target_provision_state", "target_raid_config", "updated_at", "uuid"
In my opinion these fields should be excluded from the base node payload:

"chassis_uuid": it does not represent node state and does not change often; 
an additional DB SELECT would be needed for the base payload
"driver_info": it does not represent node state; it contains only driver 
settings and secrets like IPMI passwords
"driver_internal_info": it's driver internal info
"instance_info": a configdrive blob can be saved inside
"raid_config": it's hardware related
"reservation": it's not an independently changed field, only a lock flag
"target_raid_config": it's hardware related
And the resulting base payload field list (for version 1.0):

"clean_step", "console_enabled", "created_at",  "driver", "extra",
"inspection_finished_at", "inspection_started_at", "instance_uuid",
"last_error", "maintenance", "maintenance_reason", "name",
"network_interface", "power_state", "properties", "provision_state",
"provision_updated_at", "resource_class", "target_power_state",
"target_provision_state", "updated_at", "uuid"

Any other suggestions are welcome.
Yuriy Zveryanskyy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSC] Bug, spec, BP or something else for adding new commands to OSC?

2016-09-27 Thread Richard Theis
Sergey Belous  wrote on 09/26/2016 07:36:17 AM:

> From: Sergey Belous 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 09/26/2016 07:38 AM
> Subject: [openstack-dev] [OSC] Bug, spec, BP or something else for 
> adding new commands to OSC?
> 
> Hello everyone.
> 
> I started working on this blueprint "Implement neutron quota 
> commands" [1] and now there is a patch, that adds quota delete 
> command to python-openstackclient [2] (I will also very appreciate 
> if you will look on it when will have a free time, thanks :)
> 
> But some time ago I had a discussion in IRC about this blueprint and, 
> as I understand it, adding some new command (for example, 
> quota delete) only for neutron (the networking part of quota management 
> in os-client) is not the best way. For example, with quota delete 
> it's better to add support for this command for all the parts where 
> quota management is currently implemented in os-client (networking, 
> volume, compute). I think it's a good idea, and the patch mentioned 
> above adds the quota delete command for all these parts (networking, 
> volume, compute), but… the blueprint exists only for neutron quota 
> commands.
> 
> So, my main question is how to deal with tracking this work. I 
> mean, should I (or someone else) create a bug, spec, RFE, or 
> another blueprint with the same proposal (something like "add quota 
> delete command for nova/cinder") and mention it in my patch and in 
> the next patches?

I am okay with using the existing blueprint to track the work.  Feel
free to update the blueprint accordingly.

- Richard

> 
> [1] https://blueprints.launchpad.net/python-openstackclient/+spec/
> neutron-client-quota
> [2] https://review.openstack.org/#/c/376311/
> -- 
> Best Regards,
> Sergey Belous
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-loganalyze, project log parsing, or ...

2016-09-27 Thread Andrew Laski


On Tue, Sep 27, 2016, at 04:43 PM, Matthew Treinish wrote:

> > > 
> > > I definitely can see the value in having machine parsable log stats in
> > > our
> > > artifacts, but I'm not sure where project specific pieces would come
> > > from. But,
> > > given that hypothetical I would say as long as you made those pieces
> > > configurable (like a yaml syntax to search for patterns by log file or
> > > something) and kept a generic framework/tooling for parsing the log stats
> > > I
> > > think it's still a good fit for a QA or Infra project. Especially if you
> > > think
> > > whatever pattern you're planning to use is something other projects would
> > > want
> > > to reuse.
> > 
> > My concern here is that I want to go beyond simple pattern matching. I
> > want to be able to maintain state while parsing to associate log lines
> > with events that came before. The project specific bits I envision are
> > the logic to handle that, but I don't think yaml is expressive enough
> > for it. I came up with a quick example at
> > http://paste.openstack.org/show/583160/ . That's Nova specific and
> > beyond my capability to express in yaml or elastic-recheck.
> 
> That's pretty simple to do with yaml too. Especially since it's tied to a
> single
> regex. For example, something roughly like:
> 
> http://paste.openstack.org/show/583165/
> 
> would use yaml to make it a generic framework. 

Okay, I see that it can be done for this example. But I look at that
yaml and all I see is that after a couple of definitions:

count_api_stuffs:
  - name: Instance Boots
    regex: '(req-\S+).*Starting instance'
    log_file: n-cpu.txt
  - name: Volume something
    regex: '(req-\S+).*Random Cinder Event to Count'
    log_file: c-api.txt
time_api_event:
  - name: Foo
    regex: 
another_thing:
yet_another:

there's no context to what each of these does and the logic is completely
decoupled. I guess I'm just personally biased toward wanting to see it done
in code to help my understanding.
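
To make that bias concrete, the stateful version of the boot-counting
example reads roughly like this in code (a sketch; the messages and
timestamp format are stand-ins, not real Nova log lines):

import re
from datetime import datetime

START = re.compile(r'^(\S+ \S+).*(req-\S+).*Starting instance')
END = re.compile(r'^(\S+ \S+).*(req-\S+).*Instance spawned')
FMT = '%Y-%m-%d %H:%M:%S.%f'

def boot_times(lines):
    # Remember when each request started, then attribute the later
    # "spawned" line back to it; carrying state across lines is the point.
    started = {}
    durations = {}
    for line in lines:
        m = START.search(line)
        if m:
            started[m.group(2)] = datetime.strptime(m.group(1), FMT)
            continue
        m = END.search(line)
        if m and m.group(2) in started:
            end = datetime.strptime(m.group(1), FMT)
            start = started.pop(m.group(2))
            durations[m.group(2)] = (end - start).total_seconds()
    return durations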

If yaml is what makes this acceptable then maybe I'll rework to do it
that way eventually. What I was hoping to do was start small, prove the
concept in a project while maintaining flexibility, and then look at
expanding for general use. Essentially the oslo model. I'm a little
surprised that there's so much resistance to that.
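
For concreteness the other way, a driver for a config in the shape above
also stays small; a sketch (assuming the regex/log_file keys shown):

import re
from collections import Counter

import yaml

def run_counters(config_path):
    # Count occurrences of each configured pattern in its log file.
    # Sketch of a driver for the 'count_api_stuffs' section only;
    # error handling omitted.
    with open(config_path) as f:
        config = yaml.safe_load(f)
    results = Counter()
    for counter in config.get('count_api_stuffs', []):
        pattern = re.compile(counter['regex'])
        with open(counter['log_file']) as log:
            results[counter['name']] = sum(
                1 for line in log if pattern.search(line))
    return dict(results)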


> 
> That's what I was trying to get across before, what might seem project
> specific
> is just a search pattern or combination of them for a log file. These
> patterns
> are generic enough we can extract out the project specific pieces and it
> could
> be made configurable to enable reuse between projects. Especially since
> most of
> the logging format should be standardized. (this would further expose
> where
> things differ too)
> 
> So yeah I still think this makes sense as a QA or Infra project. Although
> I'm
> wondering if there is a better way to harvest this info from a run.
> 
> As an aside it does feel like we should have a better mechanism for most
> of
> this. Like for counting booted instances I normally use the qemu log
> files, for
> example:
> 
> http://logs.openstack.org/49/373249/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b0159bf/logs/libvirt/qemu/
> 
> which shows how many instances we booted during that run. But I guess it
> really depends on what we're looking for. So there probably isn't an
> easier
> answer.
> 
> -Matt Treinish
> 
> > > > > 
> > > > > I would caution against doing it as a one off in a project repo 
> > > > > doesn't
> > > > > seem
> > > > > like the best path forward for something like this. We actually tried 
> > > > > to
> > > > > do
> > > > > something similar to that in the past inside the tempest repo:
> > > > > 
> > > > > http://git.openstack.org/cgit/openstack/tempest/tree/tools/check_logs.py
> > > > > 
> > > > > and
> > > > > 
> > > > > http://git.openstack.org/cgit/openstack/tempest/tree/tools/find_stack_traces.py
> > > > > 
> > > > > all it did was cause confusion because no one knew where the output 
> > > > > was
> > > > > coming
> > > > > from. Although, the output from those tools was also misleading, which
> > > > > was
> > > > > likely a bigger problm. So this probably won't be an issue if you add 
> > > > > a
> > > > > json
> > > > > output to the jobs.
> > > > > 
> > > > > I also wonder if the JSONFormatter from oslo.log:
> > > > > 
> > > > > http://docs.openstack.org/developer/oslo.log/api/formatters.html#oslo_log.formatters.JSONFormatter
> > > > > 
> > > > > would be useful here. We can proabbly turn that on if it makes things
> > > > > easier.
> > > > > 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [release][neutron] neutron Newton RC2 available

2016-09-27 Thread Doug Hellmann
Hello everyone,

A new release candidate for neutron for the end of the Newton cycle
is available!  You can find the source code tarballs at:

https://tarballs.openstack.org/neutron-dynamic-routing/neutron-dynamic-routing-9.0.0.0rc2.tar.gz
https://tarballs.openstack.org/neutron-fwaas/neutron-fwaas-9.0.0.0rc2.tar.gz
https://tarballs.openstack.org/neutron-lbaas/neutron-lbaas-9.0.0.0rc2.tar.gz
https://tarballs.openstack.org/neutron-vpnaas/neutron-vpnaas-9.0.0.0rc2.tar.gz
https://tarballs.openstack.org/neutron/neutron-9.0.0.0rc2.tar.gz

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the final
Newton release on 6 October. You are therefore strongly
encouraged to test and validate these tarballs!

Alternatively, you can directly test the stable/newton release
branch at:

http://git.openstack.org/cgit/openstack/neutron-dynamic-routing/log/?h=stable/newton
http://git.openstack.org/cgit/openstack/neutron-fwaas/log/?h=stable/newton
http://git.openstack.org/cgit/openstack/neutron-lbaas/log/?h=stable/newton
http://git.openstack.org/cgit/openstack/neutron-vpnaas/log/?h=stable/newton
http://git.openstack.org/cgit/openstack/neutron/log/?h=stable/newton

If you find an issue that could be considered release-critical,
please file it at:

https://bugs.launchpad.net/neutron/+filebug

and tag it *newton-rc-potential* to bring it to the neutron release
crew's attention.

Thanks,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC candidacy

2016-09-27 Thread Emilien Macchi
This is my candidacy for a position on the OpenStack Technical
Committee.

https://review.openstack.org/378054

For those who don't know me, I have been consistently working on OpenStack
for 4 years to make deployments production-ready:

- Puppet OpenStack core contributor, and ex-PTL (18 months).

Writing Puppet modules that allow operators to deploy OpenStack in production.

- TripleO core contributor, and current PTL.

Contributing to TripleO, an OpenStack installer used by operators to
deploy and operate OpenStack.

- OpenStack Infrastructure contributor.

Improving Continuous Integration for different projects in OpenStack.


Here are some aspects that motivate me to be part of TC:

- Make sure it works outside Devstack.

There is a huge gap between what is tested by the Devstack gate and what operators
deploy in the field.  This gap tends to stretch the feedback loop between
developers and operators.  As a community, we might want to reduce this gap
and make OpenStack testing more effective and more realistic.
That's an area of focus I would like to work and spread over OpenStack
projects if I'm elected.


- Keep horizontal collaboration.

While our technical areas of focus might be different, our main goal is to
make OpenStack better and it's important our governance reflects it.
I believe that collaboration works [1] and we have seen positive results over
the last cycles.  I would like the TC to keep supporting such efforts and
continue to make OpenStack a great forum to work in.


- Share my experience with Technical Committee.

Having been PTL for 18 months, I helped drive the Puppet OpenStack project to
success [2] and learned a great deal about communication and technical
leadership.  If I'm elected I'll do my best to re-use this experience from my
previous roles in OpenStack.  I've been learning on the go and plan to
continue that way by
keeping my mind open to feedback.


- Represent all family members.

Because I've worked on different areas of OpenStack (CI, deployments, testing,
etc), I'll represent the voice of any community member: developers, deployers,
operators and users.  I believe in making decisions by wearing different
hats and finding the best solution for everyone.



Over the last years, I've helped build communities of people who
aim to make OpenStack better.  I would be honored to serve as a TC member.

Thank you for your consideration,


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-June/066544.html
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2016-September/103372.html
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-loganalyze, project log parsing, or ...

2016-09-27 Thread Sean Dague
On 09/27/2016 05:45 PM, Andrew Laski wrote:
> 
> 
> On Tue, Sep 27, 2016, at 04:43 PM, Matthew Treinish wrote:
> 

 I definitely can see the value in having machine parsable log stats in
 our
 artifacts, but I'm not sure where project specific pieces would come
 from. But,
 given that hypothetical I would say as long as you made those pieces
 configurable (like a yaml syntax to search for patterns by log file or
 something) and kept a generic framework/tooling for parsing the log stats
 I
 think it's still a good fit for a QA or Infra project. Especially if you
 think
 whatever pattern you're planning to use is something other projects would
 want
 to reuse.
>>>
>>> My concern here is that I want to go beyond simple pattern matching. I
>>> want to be able to maintain state while parsing to associate log lines
>>> with events that came before. The project specific bits I envision are
>>> the logic to handle that, but I don't think yaml is expressive enough
>>> for it. I came up with a quick example at
>>> http://paste.openstack.org/show/583160/ . That's Nova specific and
>>> beyond my capability to express in yaml or elastic-recheck.
>>
>> That's pretty simple to do with yaml too. Especially since it's tied to a
>> single
>> regex. For example, something roughly like:
>>
>> http://paste.openstack.org/show/583165/
>>
>> would use yaml to make it a generic framework. 
> 
> Okay, I see that it can be done for this example. But I look at that
> yaml and all I see is that after a couple of definitions:
> 
> count_api_stuffs:
>   - name: Instance Boots
>     regex: '(req-\S+).*Starting instance'
>     log_file: n-cpu.txt
>   - name: Volume something
>     regex: '(req-\S+).*Random Cinder Event to Count'
>     log_file: c-api.txt
> time_api_event:
>   - name: Foo
>     regex: 
> another_thing:
> yet_another:
> 
> there's no context to what each of these does and the logic is completely
> decoupled. I guess I'm just personally biased toward wanting to see it done
> in code to help my understanding.
> 
> If yaml is what makes this acceptable then maybe I'll rework to do it
> that way eventually. What I was hoping to do was start small, prove the
> concept in a project while maintaining flexibility, and then look at
> expanding for general use. Essentially the oslo model. I'm a little
> surprised that there's so much resistance to that.

I think the crux of the resistance isn't to starting small. It's to
doing all the gate plumbing for a thing in one tree, then doing the
replumbing later. And it mostly comes from having to connect all those
pieces and then stage them back out in a non-breaking way later.

If this was stuff where the execution control was also embedded in
tox.ini, so changes in calling were easily addressed there, then it
could be run in a pretty quiet controlled experiment.

But it pretty much starts multi-project from day one, because it needs
specific things (and guarantees) from devstack and devstack-gate from
the get-go. So thinking about this as a multi-project effort from the
get-go seems worthwhile.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections][TC] TC Candidacy

2016-09-27 Thread John Davidge
Thanks for the questions Jay, answers inline.

On 9/26/16, 8:39 PM, Jay Pipes wrote:
>Who decides what is integral to OpenStack and what merely "enhances" it,
>though? The TC? The DefCore group? The Board of Directors? One might say
>all three groups have a say in defining what "is OpenStack", no? And
>therefore all three groups would decide what is "integral" to OpenStack.

This will undoubtedly be the most difficult part of the transition, so
making these decisions transparently will be essential. As a starting
point I would suggest we use our existing definitions of Core and Optional
services found here: https://www.openstack.org/software/

Everything in the 'Core' section would fall within the definition of
OpenStack, everything else would live in the OpenStack Family. This isn't
a change that would happen overnight, and of course we'd seek many rounds
of input from all interested parts of the community.

>We do indeed have a long way to go in improving the operator's
>experience for many OpenStack projects.
>
>However, remember that many of the OpenStack projects came into
>existence because operators were asking for a certain use case to be
>fulfilled. I'm uncertain how putting some projects into a
>not-really-OpenStack-but-related bucket will help operators much. Is the
>pain for operators that there are too many projects in OpenStack, or is
>the pain that those projects are not universally installable or usable
>in the same fashion?

Absolutely, listening to operators should continue to be the primary
driver for a lot of our decision making. For the last 3 or 4 summits I've
found the operator feedback sessions to be the most valuable, and at least
one led directly to a new feature (neutron purge).

Not being an operator myself I'd defer to seeking feedback from the ops
community during this process, but a few Big Tent related issues I've
heard include:

* Do I need to support *all* of these projects?
* Why doesn't everything follow the same release schedule any more?
* How many of these are mature enough to be useful?

>What is OpenStack's core purpose? :) The OpenStack mission is
>intentionally encompassing of a wide variety of users and use cases. The
>Big Tent, by the way, did not affect this fact. The OpenStack mission
>pre-exists the Big Tent. All the Big Tent did was say that projects that
>wanted to be official OpenStack projects needed to follow the 4 Opens,
>submit to TC governance, and further the mission of OpenStack.
>
>It sounds like you would like to limit the scope of the OpenStack
>mission, which is not the same as getting rid of the Big Tent. If that's
>the case, hey, totally cool with me :) But let's be specific about what
>it is you are suggesting.

OpenStack does not have a core purpose. Not one that everyone agrees on
anyway. Some would like it to be an Apache-like collection of loosely
related open source projects. Others would like to see it be a
laser-focused operating system for the data center. I'd say that it
started out closer to the latter and is slowly drifting towards the
former. The discussion surrounding the "Write down OpenStack Principles"
patch has shown us that the closest we've had to an official mission
statement until now was the result of a TC vote in 2011:

"A single product made of a lot of independent, but cooperating,
components."

Now obviously this is somewhat vague and open to interpretation, but to me
the "single product" part suggests a level of focus that is missing today.
This puts us in the position of deciding whether we need to re-focus
OpenStack to better match the mission statement, or change our mission
statement to better reflect reality. I'd like to do a bit of both. Limit
the scope of OpenStack to that of its core components, while providing a
framework for official projects that enhance its capabilities.


>Hmm, I disagree about that. I think that experience actually *has* shown
>us that there is a single set of rules that can/should be applied to all
>projects that wish to be called an OpenStack project.

We may have to agree to disagree here. Look at recent efforts to enforce
python 3 compatibility, for example. Some projects had reasons why they
didn't want to, others had reasons why they couldn't, and some simply
didn't view it as a priority. We'd be much more productive in defining and
enforcing rules like this if there was a narrower scope of projects they
applied to.

>> * Define OpenStack as its core components
>
>Which components would these be? Folks can (and will) argue with you
>that a particular service is critical and should be considered core. But
>differing opinions here will lead to a decision-making inertia that will
>be difficult to overcome. You've been warned. :)

See above. This definition already exists, but I acknowledge that it will
need to be iterated upon. I'd like to point out that there will be
benefits of being in the OpenStack Family, such as not needing to comply
with the more prescriptive rules as 

Re: [openstack-dev] [puppet] [infra] Request for old branches removal

2016-09-27 Thread Emilien Macchi
Hi Josh,

Thanks a lot for your help!
I've noticed some of them still have stale branches, example:
https://github.com/openstack/puppet-keystone/branches with essex and folsom
or https://github.com/openstack/puppet-nova/branches with diablo!
(yeah very old :-))
also puppet-cinder, puppet-glance, puppet-horizon, puppet-neutron, puppet-swift,
puppet-tempest (remove the fix_image_version branch). So in the end we
keep only stable/liberty and stable/mitaka.

Could we also get rid of them?

Thanks again,

On Tue, Sep 27, 2016 at 8:05 AM, Joshua Hesketh
 wrote:
> Hi Emilien,
>
> I've removed all of the old branches on the specified repos and created tags
> in their place. Let me know if there are any problems.
>
> Cheers,
> Josh
>
> On Mon, Sep 26, 2016 at 3:51 PM, Emilien Macchi  wrote:
>>
>> Greetings Infra,
>>
>> This is an official request to remove old branches for Puppet OpenStack
>> modules:
>>
>> puppet-ceilometer
>> puppet-cinder
>> puppet-glance
>> puppet-heat
>> puppet-horizon
>> puppet-keystone
>> puppet-neutron
>> puppet-nova
>> puppet-openstack_extras
>> puppet-openstacklib
>> puppet-swift
>> puppet-tempest
>>
>> Please remove all branches before Kilo (Kilo was already removed).
>>
>> Thanks,
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][oslo] Parse ISO8601 (open) time intervals

2016-09-27 Thread milanisko k
Hello Stackers!

The ironic inspector project keeps track of introspection finished_at time
stamps.
We're just discussing how to reasonably query time ranges over the API[1]
to serve matching introspection statuses to the user.
Wikipedia[2] mentions the ISO8601 time interval specification (and there
are open-interval extensions to that).
It would be nice to be able to specify a query like:
 /v1/introspection?finished_at=2016-09-27T14:17/PT1H
to fetch all introspection statuses that finished within the hour starting
at 14:17 today,
or to be able to state an open-ended interval:
/v1/introspection?finished_at=2016-09-27T14:17/
but oslo_utils.timeutils lacks parsing support for ISO8601 time intervals.

I'd like to ask whether other projects need to parse time intervals and/or
how do they achieve that.

Thanks!
milan

[1]
https://review.openstack.org/#/c/375045/3/specs/list-introspection-statuses.rst
[2] https://en.wikipedia.org/wiki/ISO_8601#Time_intervals
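
For reference, a minimal sketch of the parsing we'd need, assuming durations
restricted to the PT..H..M..S subset (oslo's parse_isotime covers the start
timestamp; the duration handling below is hand-rolled):

import datetime
import re

from oslo_utils import timeutils

_DURATION = re.compile(r'^PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?$')

def parse_interval(spec):
    # Parse '<ISO8601 start>/<duration>' into (start, end).
    # '<start>/' with no duration is treated as open-ended.
    start_str, _, dur_str = spec.partition('/')
    start = timeutils.parse_isotime(start_str)
    if not dur_str:
        return start, None
    m = _DURATION.match(dur_str)
    if not m:
        raise ValueError('unsupported duration: %s' % dur_str)
    hours, minutes, seconds = (int(g or 0) for g in m.groups())
    delta = datetime.timedelta(hours=hours, minutes=minutes,
                               seconds=seconds)
    return start, start + delta
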
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][Sahara] Sahara Newton RC2 available

2016-09-27 Thread Davanum Srinivas
Hello everyone,

The release candidate(s) for Sahara for the end of the Newton cycle
are available! You can find the RC2 source code tarballs at:

https://tarballs.openstack.org/sahara/sahara-5.0.0.0rc2.tar.gz
https://tarballs.openstack.org/sahara-dashboard/sahara-dashboard-5.0.0.0rc2.tar.gz
https://tarballs.openstack.org/sahara-extra/sahara-extra-5.0.0.0rc2.tar.gz
https://tarballs.openstack.org/sahara-image-elements/sahara-image-elements-5.0.0.0rc2.tar.gz

Unless release-critical issues are found that warrant a release
candidate respin, this RC2 will be formally released as the final
Newton release on 6 October. You are therefore strongly encouraged to
test and validate these tarballs!

Alternatively, you can directly test the stable/newton release branches at:

http://git.openstack.org/cgit/openstack/sahara/log/?h=stable/newton
http://git.openstack.org/cgit/openstack/sahara-dashboard/log/?h=stable/newton
http://git.openstack.org/cgit/openstack/sahara-extra/log/?h=stable/newton
http://git.openstack.org/cgit/openstack/sahara-image-elements/log/?h=stable/newton

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/sahara/+filebug

and tag it *newton-rc-potential* to bring it to the Sahara release
crew's attention.

Thanks,
Dims (On behalf of the Release Team)

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron][SR-IOV] SR-IOV meeting is cancelled today

2016-09-27 Thread Moshe Levi
Hi all,

Sorry for the late mail, but I have to cancel the meeting today.

Thanks,
Moshe Levi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [infra] Request for old branches removal

2016-09-27 Thread Joshua Hesketh
Hi Emilien,

I've removed all of the old branches on the specified repos and created
tags in their place. Let me know if there are any problems.

Cheers,
Josh

On Mon, Sep 26, 2016 at 3:51 PM, Emilien Macchi  wrote:

> Greetings Infra,
>
> This is an official request to remove old branches for Puppet OpenStack
> modules:
>
> puppet-ceilometer
> puppet-cinder
> puppet-glance
> puppet-heat
> puppet-horizon
> puppet-keystone
> puppet-neutron
> puppet-nova
> puppet-openstack_extras
> puppet-openstacklib
> puppet-swift
> puppet-tempest
>
> Please remove all branches before Kilo (Kilo was already removed).
>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-27 Thread Jordan Pittier
Hi,

On Tue, Sep 27, 2016 at 11:43 AM, milanisko k  wrote:

> Dear Stackers,
> I'd like to gather some overview on the $Sub: is there some infrastructure
> in place to gather such stats? Are there any groups interested in it? Any
> plans to establish such infrastructure?
>
I am working on such a tool, with mixed results so far. Here's my approach,
taking Nova as an example:

1) Print all the routes known to nova (available as a python-routes object:
 nova.api.openstack.compute.APIRouterV21())
2) "Normalize" the Nova routes
3) Take the logs produced by Tempest during a tempest run (in
logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
8774)
4) "Normalize" the tested-by-tempest Nova routes.
5) Compare the two sets of routes
6) 
7) Profit !!

So the hard part is obviously normalizing the URLs. I am currently
using tons of regexes :) That's not fun.
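
The identifier collapsing at the heart of step 4 looks roughly like this
(a sketch; real routes need many more special cases, hence the regexes, and
the placeholder convention is mine):

import re

UUID = re.compile(
    r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}')
NUMERIC_ID = re.compile(r'/\d+(?=/|$)')

def normalize(url_path):
    # Collapse identifiers so equivalent routes compare equal, e.g.
    # '/servers/<uuid>/action' from two runs normalizes identically.
    path = UUID.sub('{id}', url_path)
    path = NUMERIC_ID.sub('/{id}', path)
    return path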

I'll let you guys know if I have something to show.

I think there's real interest in the topic (it comes up every year or so),
but no definitive answer/tool.

Cheers,
Jordan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-27 Thread milanisko k
Dear Stackers,
I'd like to gather some overview on the $Sub: is there some infrastructure
in place to gather such stats? Are there any groups interested in it? Any
plans to establish such infrastructure?

Thanks!
milan

PS: I used to maintain a tool [1] that once collected multi-node integration
test coverage stats for another project; it's outdated but I could
possibly resurrect it...

[1] https://github.com/RedHatQE/python-moncov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-loganalyze, project log parsing, or ...

2016-09-27 Thread Andrew Laski


On Tue, Sep 27, 2016, at 05:54 PM, Sean Dague wrote:
> On 09/27/2016 05:45 PM, Andrew Laski wrote:
> > 
> > 
> > On Tue, Sep 27, 2016, at 04:43 PM, Matthew Treinish wrote:
> > 
> 
>  I definitely can see the value in having machine parsable log stats in
>  our
>  artifacts, but I'm not sure where project specific pieces would come
>  from. But,
>  given that hypothetical I would say as long as you made those pieces
>  configurable (like a yaml syntax to search for patterns by log file or
>  something) and kept a generic framework/tooling for parsing the log stats
>  I
>  think it's still a good fit for a QA or Infra project. Especially if you
>  think
>  whatever pattern you're planning to use is something other projects would
>  want
>  to reuse.
> >>>
> >>> My concern here is that I want to go beyond simple pattern matching. I
> >>> want to be able to maintain state while parsing to associate log lines
> >>> with events that came before. The project specific bits I envision are
> >>> the logic to handle that, but I don't think yaml is expressive enough
> >>> for it. I came up with a quick example at
> >>> http://paste.openstack.org/show/583160/ . That's Nova specific and
> >>> beyond my capability to express in yaml or elastic-recheck.
> >>
> >> That's pretty simple to do with yaml too. Especially since it's tied to a
> >> single
> >> regex. For example, something roughly like:
> >>
> >> http://paste.openstack.org/show/583165/
> >>
> >> would use yaml to make it a generic framework. 
> > 
> > Okay, I see that it can be done for this example. But I look at that
> > yaml and all I see is that after a couple of definitions:
> > 
> > count_api_stuffs:
> >   - name: Instance Boots
> >     regex: '(req-\S+).*Starting instance'
> >     log_file: n-cpu.txt
> >   - name: Volume something
> >     regex: '(req-\S+).*Random Cinder Event to Count'
> >     log_file: c-api.txt
> > time_api_event:
> >   - name: Foo
> >     regex: 
> > another_thing:
> > yet_another:
> > 
> > there's no context to what each of these does and the logic is completely
> > decoupled. I guess I'm just personally biased toward wanting to see it done
> > in code to help my understanding.
> > 
> > If yaml is what makes this acceptable then maybe I'll rework to do it
> > that way eventually. What I was hoping to do was start small, prove the
> > concept in a project while maintaining flexibility, and then look at
> > expanding for general use. Essentially the oslo model. I'm a little
> > surprised that there's so much resistance to that.
> 
> I think the crux of the resistance isn't to starting small. It's to
> doing all the gate plumbing for a thing in one tree, then doing the
> replumbing later. And it mostly comes from having to connect all those
> pieces and then stage them back out in a non-breaking way later.
> 
> If this was stuff where the execution control was also embedded in
> tox.ini, so changes in calling were easily addressed there, then it
> could be run in a pretty quiet controlled experiment.
> 
> But it pretty much starts multi-project from day one, because it needs
> specific things (and guarantees) from devstack and devstack-gate from
> the get-go. So thinking about this as a multi-project effort from the
> get-go seems worthwhile.

I totally understand where you're coming from. I just see it
differently.

The way it was done did not affect any other projects, and the plumbing
used is something that could easily have been left in. And as luck would
have it there's another patch up to use that same plumbing, so it's
probably going in anyway.

I completely agree that before expanding this to be in any way cross
project it's worth figuring out a better way to do it. But at this point
I don't feel comfortable enough with a long term vision to tackle that.
I would much prefer to experiment in a small way before moving forward.

> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Ocata Design Summit ideas kick-off

2016-09-27 Thread Armando M.
Hi folks,

The summit is less than a month away and it's the time of the year when we
need to plan for design summit sessions.

This time we are going for 10 fishbowl sessions, plus Friday [0].

We will break down sessions in three separate tracks as we did the last two
summits. Each track will have its own theme and more details will be
provided in due course.

I started an etherpad [1] to collect inputs and ideas. Please start
brainstorming! I'll reach out to the driver team members individually to
work out a more formal agenda once we get some ideas off the ground.

Cheers,
Armando

[0] http://lists.openstack.org/pipermail/openstack-dev/
2016-September/103851.html
[1] https://etherpad.openstack.org/p/ocata-neutron-summit-ideas


Re: [openstack-dev] [QA] The end-user test suite for OpenStack clusters

2016-09-27 Thread Ken'ichi Ohmichi
Hi Timur,

Thanks for picking this up, that is interesting for me.

2016-09-22 5:58 GMT-07:00 Timur Nurlygayanov :
>
> we have an idea to create a test suite with destructive/HA and advanced
> end-user scenarios for OpenStack clusters. This test suite will contain
> advanced scenario integration tests for OpenStack clusters to make sure that
> the cluster is ready for production.
>
> The test cases which we want to cover in this test suite:
> 1) All simple and advanced destructive actions, like a reboot of the nodes,
> restart of OpenStack services, etc. (we can probably use the os-faults library
> [1], which we already use in Rally)
> 2) All advanced test scenarios, like migration of a bunch of VMs between
> nodes, booting of VMs with large images (10+ GB), and sending traffic
> between VMs while restarting Neutron L3 agents in parallel, etc.
>
> The key requirements:
> 1) The framework should know the details of the deployment (how many nodes
> we have, how to ssh to the OpenStack nodes, how to restart the nodes, etc.).
> This is why we don't want to add such "advanced" and HA-focused test
> scenarios to Tempest.

Yeah, this point is right. This "advanced" way is different from the
design principle of Tempest[1].
I am guessing the above "restart nodes" is for verifying that each
OpenStack service restarts successfully, right?
For production (or distribution) environments, this verification point seems
important, because service scripts need to restart OpenStack services
automatically.
But these service scripts are provided by distributors, and Devstack
itself doesn't contain service scripts IIUC.
So I'd like to know how to verify it on Devstack clouds.
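
(As an aside, with the os-faults library mentioned above, a service
restart scenario could look roughly like the following sketch; the
driver, address, username and service name are placeholders and depend
entirely on the deployment:)

    import os_faults

    cloud_config = {
        'cloud_management': {
            'driver': 'devstack',
            'args': {'address': '192.0.2.10', 'username': 'stack'},
        },
    }

    cloud = os_faults.connect(cloud_config)
    cloud.verify()  # check the cloud is reachable before breaking it
    service = cloud.get_service(name='keystone')
    service.restart()  # the destructive action under test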

Thanks
Ken Ohmichi
---

[1]: https://github.com/openstack/tempest#design-principles

> 2) We should be ready to run these tests against any cloud: DevStack clouds (we
> can skip HA cases for DevStack), Fuel clouds, and clouds which were deployed by
> Ansible or Puppet tools.
> 3) This framework should allow reproducing an issue in a repeatable manner;
> this is why we can't just cover all the tests with Rally load tests +
> destructive plugins (we are working on this right now too, to be able
> to test HA-related scenarios under load).
>
> As we discussed at the OpenStack summit a year ago, it is better to keep such
> a test suite in a separate repository, and this framework can become a part of
> the QA (or at least Big Tent) program in OpenStack.
>
> I've created the commit to OpenStack project-config repository:
> https://review.openstack.org/#/c/374667/
>
> Could you please take a look?
>
> We understand that it will be hard to add such a test suite to the gates for
> every commit in OpenStack, because we will need a lot of hardware. We don't
> want to add these tests to the per-commit gates for now; it is ok to run
> them just once a day, for example. And we definitely need to have such a test
> suite to validate our own pre-production clouds.



[openstack-dev] [Heat] Design sessions for Barcelona

2016-09-27 Thread Rabi Mishra
Hi All,

We have 3 fishbowl and 6 workroom (WR) slots available this summit for the design
sessions. We're collecting session ideas on this [1] etherpad. Please
add any other topics/ideas that you would like to see included.

We'll discuss these in this/next week team meetings to prioritize them.

[1] https://etherpad.openstack.org/p/ocata-heat-sessions

-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-27 Thread Matt Riedemann

On 9/20/2016 9:22 AM, Dan Smith wrote:


I'll also see about writing up some docs about the expected workflow
here. Presumably that needs to go in some fancy docs and not into the
devref, right? Can anyone point me to where that should go?

--Dan



I'd think something in here:

http://docs.openstack.org/ops-guide/ops-upgrades.html

I'm surprised that doesn't even mention running nova-manage db sync at all.
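
(For reference, the usual command sequence is roughly:

    nova-manage api_db sync
    nova-manage db sync
    nova-manage db online_data_migrations

This is a sketch; the exact steps should be checked against the release
notes for the target release.)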

That doc also links into a neutron-specific upgrade doc:

http://docs.openstack.org/developer/neutron/devref/upgrade.html

So maybe we should put together, or clean up, the nova upgrades doc in 
the devref and link from the more general doc:


http://docs.openstack.org/developer/nova/upgrade.html

--

Thanks,

Matt Riedemann




[openstack-dev] [nova] Nova API sub-team meeting

2016-09-27 Thread Alex Xu
Hi,

We have weekly Nova API meeting today. The meeting is being held Wednesday
UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks


Re: [openstack-dev] [nova] [devstack] nova-api did not start

2016-09-27 Thread Tony Breeds
On Wed, Sep 28, 2016 at 11:10:52AM +0800, xiangxinyong wrote:
> Hi guys,
> 
> 
> When I set up OpenStack with devstack,
> I got the error message "nova-api did not start".
> 
> 
> [Call Trace]
> ./stack.sh:1242:start_nova_api
> /home/edison/devstack/lib/nova:802:die
> [ERROR] /home/edison/devstack/lib/nova:802 nova-api did not start
> Error on exit
> World dumping... see /opt/stack/logs/worlddump-2016-09-27-205614.txt for 
> details

Try looking in /opt/stack/logs/n-api*
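
For example (illustrative; the exact file names depend on how devstack
logging is configured):

    grep -iE 'ERROR|CRITICAL|Traceback' /opt/stack/logs/n-api*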

Yours Tony.




[openstack-dev] [Heat] Meeting times

2016-09-27 Thread Rabi Mishra
Hi All,

I think the current meeting times, i.e. 08:00 UTC and 15:00 UTC on
alternate weeks, are working well for us. Though 15:00 UTC is a little late
for me, I propose we continue with the same for this cycle.

With the geographical spread of the team, it's difficult to arrive at a
time that suits all. However, if you have any other/better suggestion, do let
me know.

-- 
Regards,
Rabi Mishra


[openstack-dev] [nova] [devstack] nova-api did not start

2016-09-27 Thread xiangxinyong
Hi guys,


When I set up OpenStack with devstack,
I got the error message "nova-api did not start".


[Call Trace]
./stack.sh:1242:start_nova_api
/home/edison/devstack/lib/nova:802:die
[ERROR] /home/edison/devstack/lib/nova:802 nova-api did not start
Error on exit
World dumping... see /opt/stack/logs/worlddump-2016-09-27-205614.txt for details


Welcome your suggestions.
Thanks very much.


Best Regards,
  xiangxinyong


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-27 Thread Joshua Harlow

ACTION: we should make sure workarounds are advertised better
ACTION: we should have some document about "when cells"?

This is a difficult question to answer because "it depends." It's akin
to asking "how many nova-api/nova-conductor processes should I run?"
Well, what hardware is being used, how much traffic do you get, is it
bursty or sustained, are instances created and left alone or are they
torn down regularly, do you prune your database, what version of rabbit
are you using, etc...

I would expect the best answer(s) to this question are going to come
from the operators themselves. What I've seen with cellsv1 is that
someone will decide for themselves that they should put no more than X
computes in a cell and that information filters out to other operators.
That provides a starting point for a new deployment to tune from.


I don't think we need "don't go larger than N nodes" kind of advice. But
we should probably know what kinds of things we expect to be hot spots.
Like mysql load, possibly indicated by system load or high level of db
conflicts. Or rabbit mq load. Or something along those lines.

Basically, the things to look out for that indicate you are approaching
a scale point where cells is going to help. That also helps in defining
what kind of scaling issues cells won't help on, which need to be
addressed in other ways (such as optimizations).


Big +1 if we can *somehow* really get out of the behavior/pattern of 
guessing at the overall system characteristics. I think it would be 
great for our own community's maturity and for each project. Even 
though I know such things are hard, it scares the bejeezus out of me 
when we (as a group) create software but can't give recommendations on 
its behavioral characteristics (we aren't doing quantum physics here, 
the last time I checked).


Just some ideas:

* Rally maybe can help here? (For example, something like the task 
sketch below.)
* Fixing a standard set of configuration options and testing that at 
scale (using the Intel lab?) - and then possibly using Rally (or another 
tool) to probe the system characteristics and give recommendations 
before releasing the software for general consumption, based on observed 
system characteristics (this is basically what operators are going to 
have to do anyway to qualify a release, especially if the community 
isn't doing it and/or is shying away from doing it).
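
As an illustration of the first idea, a Rally task like the following
(the scenario name is real, but the numbers and image/flavor names are
made up for the example) could be run against a fixed configuration to
probe boot/delete behavior under concurrency:

    NovaServers.boot_and_delete_server:
      - args:
          flavor:
            name: "m1.tiny"
          image:
            name: "cirros-0.3.4-x86_64-uec"
        runner:
          type: "constant"
          times: 500
          concurrency: 20
        context:
          users:
            tenants: 2
            users_per_tenant: 3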


I just have a hard time accepting that tribal knowledge about scale that 
has to filter from operator to operator (yes, I know from personal 
experience this is how things trickled down) is a good way to go. It 
reminds me of the medicine and practices of the late 1800s, when all 
sorts of quackery science was happening; and IMHO we can do better than 
this :)


Anyway, back to your regularly scheduled programming,

-Josh



Re: [openstack-dev] [magnum] Fedora Atomic image that supports kubernetes external load balancer (for stable/mitaka)

2016-09-27 Thread Steven Dake (stdake)
Dane,

I’ve heard Yolanda has done good work on making diskimage-builder build Fedora 
Atomic properly and consistently.  This may work better than the current image 
building tools available with Atomic if you need to roll your own.  Might try 
pinging her on IRC for advice if you get jammed up here.  Might also consider 
consulting tango, as I handed off my knowledge in this area to him first 
and he has distributed it to the rest of the Magnum core reviewer team.  I’m not 
sure if tango and Yolanda have synced on this – recommend checking with them.

Seems important to have a working atomic image for both Mitaka and Newton.

Regards
-steve


From: "Dane Leblanc (leblancd)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, September 8, 2016 at 2:18 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [magnum] Fedora Atomic image that supports kubernetes 
external load balancer (for stable/mitaka)

Does anyone have a pointer to a Fedora Atomic image that works with 
stable/mitaka Magnum, and supports the kubernetes external load balancer 
feature [1]?

I’m trying to test the kubernetes external load balancer feature with 
stable/mitaka Magnum. However, when I try to bring up a load-balanced service, 
I’m seeing these errors in the kube-controller-manager logs:
E0907 16:26:54.375286   1 servicecontroller.go:173] Failed to process 
service delta. Retrying: failed to create external load balancer for service 
default/nginx-service: SubnetID is required

I verified that I have the subnet-id field set in the [LoadBalancer] section in 
/etc/sysconfig/kube_openstack_config.
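
(For reference, the relevant stanza looks roughly like this; the value
below is a placeholder, not a real subnet:

    [LoadBalancer]
    subnet-id=11111111-2222-3333-4444-555555555555
)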

I’ve tried this using the following Fedora Atomic images from [2]:
fedora-21-atomic-5.qcow2
fedora-21-atomic-6.qcow2
fedora-atomic-latest.qcow2

According to the Magnum external load balancer blueprint [3], there were 3 
patches in kubernetes that are required to get the OpenStack provider plugin to 
work in kubernetes:
https://github.com/GoogleCloudPlatform/kubernetes/pull/12203
https://github.com/GoogleCloudPlatform/kubernetes/pull/12262
https://github.com/GoogleCloudPlatform/kubernetes/pull/12288
The first of these patches, “Pass SubnetID to vips.Create()”, is apparently 
necessary to fix the “SubnetID is required” error shown above.

According to the Magnum external load balancer blueprint [3], the 
fedora-21-atomic-6 image should include the above 3 fixes:
“Our work-around is to use our own custom Kubernetes build (version 1.0.4 + 3 
fixes) until the fixes are released. This is in image fedora-21-atomic-6.qcow2”
However, I’m still seeing the “SubnetID is required” errors with this image 
downloaded from [2]. Here are the kube versions I’m seeing with this image:
[minion@k8-64n4bna2v6-0-ffukgho7n7tf-kube-master-fif5b6pivdmy sysconfig]$ rpm 
-qa | grep kube
kubernetes-node-1.2.0-0.15.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-1.2.0-0.15.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-client-1.2.0-0.15.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-master-1.2.0-0.15.alpha6.gitf0cd09a.fc23.x86_64
[minion@k8-64n4bna2v6-0-ffukgho7n7tf-kube-master-fif5b6pivdmy sysconfig]$

Does anyone have a pointer to a Fedora Atomic image that contains the 3 
kubernetes fixes listed earlier (and works with stable/mitaka)?

Thanks!
-Dane

[1] http://kubernetes.io/docs/user-guide/services/#type-loadbalancer
[2] https://fedorapeople.org/groups/magnum/
[3] https://blueprints.launchpad.net/magnum/+spec/external-lb



Re: [openstack-dev] Devstack, Tempest, and TLS

2016-09-27 Thread Clark Boylan
On Fri, Sep 23, 2016, at 05:04 PM, Clark Boylan wrote:
> Earlier this month there was a thread on replacing stud in devstack for
> the tls-proxy service [0]. Over the last week or so a bunch of work has
> happened around this so I figured I would send an update.
> 
> Tempest passes against devstack with some edits to one of the object
> tests to properly handle 304s [1].
> 
> Multinode devstack and tempest pass with a small change to devstack-gate
> [2] to copy the CA to all test nodes which needs a small change to
> devstack [3] to avoid overwriting the CA. Note the devstack-gate change
> needs to deal with some new ansible issues so isn't ready for merging
> just yet.
> 
> Also noticed that Ironic's devstack plugin isn't configured to deal with
> a devstack that runs the other services with TLS. This is mostly
> addressed by a small change to set the correct glance protocol and swift
> url [4]. However tests for this continue to fail if TLS is enabled
> because the IPA image does not trust the devstack created CA which has
> signed the cert in front of glance.
> 
> Would be great if people could review these. Assuming reviews happen we
> should be able to run the core set of tempest jobs with TLS enabled real
> soon now. This will help us avoid regressions like the one that hit OSC
> in which it could no longer speak to a neutron fronted with a proxy
> terminating TLS.
> 
> Also, I am learning that many of our services require redundant and
> confusing configuration. Ironic for example needs to have
> glance_protocol set even though it appears to get the actual glance
> endpoint from the keystone catalog. You also have to tell it where to
> find swift except that if it is already using the catalog why can't it
> find swift there? Many service configs have an auth_url and auth_uri
> under [keystone_authtoken]. The values for them are different, but I am
> not sure why we need to have both an auth_uri and an auth_url, and why they
> should be different urls (yes, both are urls). Cinder requires you set
> both osapi_volume_base_URL and public_endpoint to get proper https
> happening.
> 
> Should I be filing bugs for these things? are they known issues? is
> anyone interested in simplifying our configs?
> 
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2016-September/102843.html
> [1] https://review.openstack.org/#/c/374328/
> [2] https://review.openstack.org/373219
> [3] https://review.openstack.org/375724
> [4] https://review.openstack.org/375649

Another quick update on this. Enough of these changes have merged that
we are now running with tls-proxy enabled on changes against master in
the "vanilla" devstack/tempest jobs. You should be seeing https urls
floating around now.

I still need to get https://review.openstack.org/373219 in before we can
turn this on for multinode testing but it does appear to be working now.
Change 372374 can be used to verify the neutron multinode job passes
with 373219 in place. Reviews very welcome. I have been trying to use
topic:devstack-tls to group together things ready for review.

Once multinode testing has tls-proxy enabled the next thing I think we
should be talking about is enabling this by default in devstack. As
mentioned before ironic doesn't work due to IPA images not trusting
glance's cert. Swift's functional tests don't currently work against
https keystone as they assume http as well. All this to say if you have
a devstack plugin or testing that depends on devstack now would be a
great time to turn on tls-proxy and see if your things work with it
(easy mode is depends-on 373219). I think that if we can identify places
where it doesn't work and fixing it would require a lot of effort we
should just proactively disable it in the jobs. That way we can turn it
on by default for the default vanilla case.

Thank you,
Clark



Re: [openstack-dev] [neutron][lbaas][heat][octavia] Heat engine doesn't detect lbaas listener failures

2016-09-27 Thread Rabi Mishra
On Wed, Sep 28, 2016 at 6:21 AM, Jiahao Liang <
jiahao.li...@oneconvergence.com> wrote:

>
>
> On Tue, Sep 27, 2016 at 5:35 PM, Rabi Mishra  wrote:
>
>> On Wed, Sep 28, 2016 at 1:01 AM, Zane Bitter  wrote:
>>
>>> On 27/09/16 15:11, Jiahao Liang wrote:
>>>
 Hello all,

 I am trying to use heat to launch lb resources with Octavia as backend.
 The template I used is
 from https://github.com/openstack/heat-templates/blob/master/hot/
 lbaasv2/lb_group.yaml.

 Following are a few observations:

 1. Even though the Listener was created with ERROR status, heat will still
 go ahead and mark it Creation Complete. In the heat code, it only
 checks whether the root Loadbalancer status has changed from PENDING_UPDATE to
 ACTIVE. And the Loadbalancer status will be changed to ACTIVE anyway, no
 matter the Listener's status.

>>>
>>> That sounds like a clear bug.
>>>
>>
>> It seems we're checking for any exceptions from the client [1] before
>> checking the loadbalancer status. I could not see any other way to
>> check the listener status afterwards.
>> Probably an lbaas bug with the octavia driver?
>>
>> Could you please raise a bug with the heat/lbaas logs?
>>
>
>> [1]  https://git.openstack.org/cgit/openstack/heat/tree/heat/engi
>> ne/resources/openstack/neutron/lbaas/listener.py#n183
>>
>
>  In Octavia, creating resources (listeners, pools, etc.) is an async
> operation, so it wouldn't raise any exception.
> A normal workflow is:
> 1. The heat/neutron client sends a create API request to Octavia.
> 2. Octavia returns a response to the client and sets the resource to
> PENDING_CREATE (no exception is thrown to the client if the API request goes through).
> 3. If creation succeeds, Octavia sets that resource to ACTIVE; otherwise, it
> sets it to ERROR.
>

Unlike loadbalancer, I don't see any provisioning_status attribute for
listener in the lbaas API [1].

[1]
http://git.openstack.org/cgit/openstack/neutron-lbaas/tree/neutron_lbaas/extensions/loadbalancerv2.py#n192
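
If listeners did expose such a status, a fix could look roughly like the
following sketch (purely an illustration, not the actual heat code):

    def check_create_complete(self, listener_id):
        # Sketch: poll the resource status instead of relying on
        # client exceptions alone.
        listener = self.client().show_listener(listener_id)['listener']
        status = listener.get('provisioning_status')
        if status == 'ERROR':
            raise exception.ResourceInError(resource_status=status)
        return status == 'ACTIVE'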


Re: [openstack-dev] [neutron][lbaas][heat][octavia] Heat engine doesn't detect lbaas listener failures

2016-09-27 Thread Jiahao Liang
On Tue, Sep 27, 2016 at 5:35 PM, Rabi Mishra  wrote:

> On Wed, Sep 28, 2016 at 1:01 AM, Zane Bitter  wrote:
>
>> On 27/09/16 15:11, Jiahao Liang wrote:
>>
>>> Hello all,
>>>
>>> I am trying to use heat to launch lb resources with Octavia as backend.
>>> The template I used is
>>> from https://github.com/openstack/heat-templates/blob/master/hot/
>>> lbaasv2/lb_group.yaml.
>>>
>>> Following are a few observations:
>>>
>>> 1. Even though the Listener was created with ERROR status, heat will still
>>> go ahead and mark it Creation Complete. In the heat code, it only
>>> checks whether the root Loadbalancer status has changed from PENDING_UPDATE to
>>> ACTIVE. And the Loadbalancer status will be changed to ACTIVE anyway, no
>>> matter the Listener's status.
>>>
>>
>> That sounds like a clear bug.
>>
>
> It seems we're checking for any exceptions from the client [1] before
> checking the loadbalancer status. I could not see any other way to
> check the listener status afterwards.
> Probably an lbaas bug with the octavia driver?
>
> Could you please raise a bug with the heat/lbaas logs?
>

> [1]  https://git.openstack.org/cgit/openstack/heat/tree/heat/
> engine/resources/openstack/neutron/lbaas/listener.py#n183
>

 In Octavia, creating resources (listeners, pools, etc.) is an async
operation, so it wouldn't raise any exception.
A normal workflow is:
1. The heat/neutron client sends a create API request to Octavia.
2. Octavia returns a response to the client and sets the resource to
PENDING_CREATE (no exception is thrown to the client if the API request goes through).
3. If creation succeeds, Octavia sets that resource to ACTIVE; otherwise, it
sets it to ERROR.

Please correct me if I am wrong.

I will go ahead and raise a bug later if both of you think it necessary.

Thanks,
Jiahao Liang

>
>>> 2. As the heat engine wouldn't know about the Listener's creation failure, it will
>>> continue to create Pool\Member\Healthmonitor on top of a Listener which
>>> actually doesn't exist. This causes a few undefined behaviors.  As a
>>> result, those LBaaS resources in ERROR state are unable to be cleaned up
>>> with either the normal neutron or heat API.
>>>
>>>
>>> Is this a bug regarding LBaaS V2 for heat, or is it designed that way on
>>> purpose?  In my opinion, it would be more natural if heat reports
>>> CREATION_FAILURE if any of the LBaaS resources fails.
>>>
>>> Thanks,
>>> Jiahao Liang
>>>
>>>


Re: [openstack-dev] Community Contributor Awards

2016-09-27 Thread Tom Fifield

Time is running out to nominate for an award!!

https://openstackfoundation.formstack.com/forms/community_contributor_award_nomination_form

On 21/09/16 02:43, Kendall Nelson wrote:

Hello all,


I’m pleased to announce the next round of community contributor awards!
Similar to the Austin Summit, the awards will be presented by the
Foundation at the feedback session of the upcoming Summit in Barcelona.


Now accepting nominations! Please submit anyone you think is deserving
of an award!


https://openstackfoundation.formstack.com/forms/community_contributor_award_nomination_form


Please submit all nominations by the end of day on October 7th.


There are so many people out there who do invaluable work that should be
recognized. People that hold the community together, people that make
working on OpenStack fun, people that do a lot but aren’t called out for
their work, people that speak their mind and aren’t afraid to challenge
the norm.


Like last time, we won’t have a defined set of awards, so we'll take extra
note of what you say about the nominee in your submission when picking the
winners.


We’re excited to hear who you want to celebrate and why you think they
are awesome!


All the Best,


Kendall Nelson (diablo_rojo)






Re: [openstack-dev] [nova] ops meetup feedback

2016-09-27 Thread Matt Riedemann

On 9/20/2016 11:16 AM, Daniel P. Berrange wrote:


NB the bug was non-deterministic and rare, even in the gate, so the
real test is whether it gets past the gate 20 times in a row :-)

Regards,
Daniel



It was about a 25% job failure rate in the gate when we disabled live 
snapshots with the workaround bug, so maybe rare outside of the gate, 
but definitely not rare in the gate with libvirt 1.2.2.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Neutron] Retiring stale stadium projects

2016-09-27 Thread Hirofumi Ichihara



On 2016/09/28 10:18, Armando M. wrote:

Hi Neutrinos,

I wanted to double check with you the state of these following projects:

- networking-ofagent
I think that ofagent is ready for retirement. I have seen the 
declaration "OFAgent is decomposed and deprecated in the Mitaka 
cycle." in the Mitaka release notes.



- python-neutron-pd-driver

It's my understanding that they are ready for retirement or 
thereabouts. Please confirm, and I'll kick off the countdown sequence [1].


Cheers,
Armando

[1] http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project




Re: [openstack-dev] [Neutron] Retiring stale stadium projects

2016-09-27 Thread fumihiko kakuma
Hi Armando,

Thank you for checking.

networking-ofagent is ready for retirement in Newton.

But the ryu team will still maintain the stable/liberty and stable/mitaka
releases.
So we will keep the repository until the EOL of Mitaka.
Also, zuul and the requirements bot will not run for the master branch of
ofagent [1].

regards,
fumihiko kakuma


[1]https://review.openstack.org/#/c/303121/
   https://review.openstack.org/#/c/298107/


On Tue, 27 Sep 2016 18:18:37 -0700
"Armando M."  wrote:

> Hi Neutrinos,
> 
> I wanted to double check with you the state of these following projects:
> 
> - networking-ofagent
> - python-neutron-pd-driver
> 
> It's my understanding that they are ready for retirement or thereabouts.
> Please confirm, and I'll kick off the countdown sequence [1].
> 
> Cheers,
> Armando
> 
> [1] http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

-- 
fumihiko kakuma 





Re: [openstack-dev] [Neutron] Retiring stale stadium projects

2016-09-27 Thread Anita Kuno

On 16-09-27 09:18 PM, Armando M. wrote:

Hi Neutrinos,

I wanted to double check with you the state of these following projects:

- networking-ofagent
- python-neutron-pd-driver

It's my understanding that they are ready for retirement or thereabouts.
Please confirm, and I'll kick off the countdown sequence [1].

Cheers,
Armando

[1] http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project


Please remove the dot files on the first round of the "removing project 
content" step. It often gets missed.


Thank you,
Anita.




[openstack-dev] [release][neutron] networking-ovn Newton RC2 available

2016-09-27 Thread Davanum Srinivas
Hello everyone,

The release candidate for networking-ovn for the end of the Newton
cycle is available! You can find the RC2 source code tarball at:

https://tarballs.openstack.org/networking-ovn/networking-ovn-1.0.0.0rc2.tar.gz

Unless release-critical issues are found that warrant a release
candidate respin, this RC2 will be formally released as the final
Newton release on 6 October. You are therefore strongly encouraged to
test and validate this tarball!

Alternatively, you can directly test the stable/newton release branch at:

http://git.openstack.org/cgit/openstack/networking-ovn/log/?h=stable/newton

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/networking-ovn/+filebug

and tag it *newton-rc-potential* to bring it to the networking-ovn
release crew's attention.


Thanks,
Dims (On behalf of the Release Team)

-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [neutron][lbaas][heat][octavia] Heat engine doesn't detect lbaas listener failures

2016-09-27 Thread Rabi Mishra
On Wed, Sep 28, 2016 at 1:01 AM, Zane Bitter  wrote:

> On 27/09/16 15:11, Jiahao Liang wrote:
>
>> Hello all,
>>
>> I am trying to use heat to launch lb resources with Octavia as backend.
>> The template I used is
>> from https://github.com/openstack/heat-templates/blob/master/hot/
>> lbaasv2/lb_group.yaml.
>>
>> Following are a few observations:
>>
>> 1. Even though the Listener was created with ERROR status, heat will still
>> go ahead and mark it Creation Complete. In the heat code, it only
>> checks whether the root Loadbalancer status has changed from PENDING_UPDATE to
>> ACTIVE. And the Loadbalancer status will be changed to ACTIVE anyway, no
>> matter the Listener's status.
>>
>
> That sounds like a clear bug.
>

It seems we're checking for any exceptions from the client [1] before
checking the loadbalancer status. I could not see any other way to check
the listener status afterwards.
Probably an lbaas bug with the octavia driver?

Could you please raise a bug with the heat/lbaas logs?

[1]
https://git.openstack.org/cgit/openstack/heat/tree/heat/engine/resources/openstack/neutron/lbaas/listener.py#n183

>
>> 2. As the heat engine wouldn't know about the Listener's creation failure, it will
>> continue to create Pool\Member\Healthmonitor on top of a Listener which
>> actually doesn't exist. This causes a few undefined behaviors.  As a
>> result, those LBaaS resources in ERROR state are unable to be cleaned up
>> with either the normal neutron or heat API.
>>
>>
>> Is this a bug regarding LBaaS V2 for heat, or is it designed that way on
>> purpose?  In my opinion, it would be more natural if heat reports
>> CREATION_FAILURE if any of the LBaaS resources fails.
>>
>> Thanks,
>> Jiahao Liang
>>
>>


[openstack-dev] [Neutron] Retiring stale stadium projects

2016-09-27 Thread Armando M.
Hi Neutrinos,

I wanted to double check with you the state of these following projects:

- networking-ofagent
- python-neutron-pd-driver

It's my understanding that they are ready for retirement or thereabouts.
Please confirm, and I'll kick off the countdown sequence [1].

Cheers,
Armando

[1] http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project


[openstack-dev] [ironic] install guide has moved

2016-09-27 Thread Loo, Ruby
Hi,

Thanks to the huge efforts put in by Mathieu Mitchell (mat128) and Jay Faulkner 
(JayF), we've moved ironic's install guide from the developer documentation to 
the official openstack site [1]. Isn't it a beauty? :D

Please update your bookmarks to point to the new location, and help us improve 
the install guide by providing feedback and submitting patches.

--ruby

[1] http://docs.openstack.org/project-install-guide/baremetal/draft/


[openstack-dev] [release][heat] heat Newton RC2 available

2016-09-27 Thread Doug Hellmann
Hello everyone,

A new release candidate for heat for the end of the Newton cycle
is available!  You can find the source code tarball at:

https://tarballs.openstack.org/heat/heat-7.0.0.0rc2.tar.gz

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the final
Newton release on 6 October. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/newton release
branch at:

http://git.openstack.org/cgit/openstack/heat/log/?h=stable/newton

If you find an issue that could be considered release-critical,
please file it at:

https://bugs.launchpad.net/heat/+filebug

and tag it *newton-rc-potential* to bring it to the heat release
crew's attention.

Thanks,
Doug



Re: [openstack-dev] [ironic] install guide has moved

2016-09-27 Thread Andreas Jaeger
On 2016-09-27 16:54, Ruby Loo wrote:
> Hi,
> 
>  
> 
> Thanks to the huge efforts put in by Mathieu Mitchell (mat128) and Jay
> Faulkner (JayF), we've moved ironic's install guide from the developer
> documentation to the official openstack site [1]. Isn't it a beauty? :D
> 
>  
> 
> Please update your bookmarks to point to the new location, and help us
> improve the install guide by providing feedback and submitting patches.
> 
>  
> 
> --ruby
> 
>  
> 
> [1] http://docs.openstack.org/project-install-guide/baremetal/draft/

Be aware that this is the draft location - the version from master, so
this will soon be the Ocata version.

Once newton is released, docs.openstack.org will point to the Newton
version which is published from stable/newton branch already to:

http://docs.openstack.org/project-install-guide/baremetal/newton/

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126




Re: [openstack-dev] dhcp 'Address already in use' errors when trying to start a dnsmasq

2016-09-27 Thread Kevin Benton
There is no side effect other than log noise and a delayed reload? I don't
see why a revert would be appropriate.

I looked at the logs and the issue seems to be that the process isn't
tracked correctly the first time it starts.

grep for the following:

ea141299-ce07-4ff7-9a03-7a1b7a75a371', 'dnsmasq'

in
http://logs.openstack.org/26/377626/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b6953d4/logs/screen-q-dhcp.txt.gz

The first time dnsmasq is called it gives a 0 return code but the agent
doesn't seem to get a pid for it. So the next time it is called it
conflicts with the running proc.
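
On the SO_REUSEADDR point in the quoted message below: on Linux, a
second *live* bind to the same unicast UDP address fails with
EADDRINUSE even when both sockets set SO_REUSEADDR (that option mainly
helps with TIME_WAIT/multicast cases), which is consistent with the old
dnsmasq still running. A quick local illustration (port made up):

    import socket

    def bound_udp_socket(addr, port):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((addr, port))
        return s

    a = bound_udp_socket('127.0.0.1', 6767)
    b = bound_udp_socket('127.0.0.1', 6767)  # raises: Address already in use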

On Sep 27, 2016 11:22, "Ihar Hrachyshka"  wrote:

> Hi all,
>
> so we started getting ‘Address already in use’ when trying to start
> dnsmasq after the previous instance of the process is killed with kill -9.
> Armando spotted it today in logs for: https://review.openstack.org/#
> /c/377626/ but as per logstash it seems like an error we saw before (the
> earliest I see is 9/20), f.e.:
>
> http://logs.openstack.org/26/377626/1/check/gate-tempest-dsv
> m-neutron-full-ubuntu-xenial/b6953d4/logs/screen-q-dhcp.txt.gz
>
> Assuming I understand the flow of the failure, it runs as follows:
>
> - sync_state starts dnsmasq per network;
> - after agent lock is freed, some other notification event
> (port_update/subnet_update/...) triggers restart for one of the processes;
> - the restart is done not via reload_allocations (-SIGHUP) but through
> restart/disable (kill -9);
> - once the old dnsmasq is killed with -9, we attempt to start a new
> process with new config files generated and fail with: “dnsmasq: failed to
> create listening socket for 10.1.15.242: Address already in use”
> - surprisingly, after several failing attempts to start the process, it
> succeeds to start it after a bunch of seconds and runs fine.
>
> It looks like once we kill the process with -9, it may hold onto the socket
> resource for some time and may clash with the new process we try to spawn.
> It’s a bit weird, because dnsmasq should have set REUSEADDR on the socket,
> so a new process should have started just fine.
>
> Lately, we landed several patches that touched reload logic for DHCP agent
> on notifications. Among those suspicious in the context are:
>
> - https://review.openstack.org/#/c/372595/ - note it requests ‘disable’
> (-9) where it was using ‘reload_allocations’ (-SIGHUP) before, and it also
> does not unplug the port on lease release (maybe after we rip out the
> device, the address clash with the old dnsmasq state is gone, even though
> the ’new’ port will use the same address?).
> - https://review.openstack.org/#/c/372236/6 - we were requesting
> reload_allocations in some cases before, and now we put the network into
> resync queue
>
> There were other related changes lately, you can check history of Kevin’s
> changes for the branch, it should capture most of them.
>
> I wonder whether we hit some long-standing restart issue with dnsmasq here
> that was just never triggered before, because we were not calling kill -9 as
> eagerly as we do now.
>
> Note: Jakub Libosvar validated that 'kill -9 && dnsmasq’ in loop does NOT
> result in the failure we see in gate logs.
>
> We need to understand what’s going on with the failure, and come up with some
> plan for Newton. We either revert the suspected patches, as I believe Armando
> proposed before (but then it’s not clear up to which point to do so), or we
> come up with some smart fix, which I don’t immediately grasp.
>
> I will be on vacation tomorrow, though I will check the email thread to
> see if we have a plan to act on. I really hope folks give the issue a
> priority since it seems like we buried ourselves under a pile of
> interleaved patches and now we don’t have a clear view of how to get out of
> the pile.
>
> Cheers,
> Ihar
>


[openstack-dev] [neutron][lbaas][heat][octavia] Heat engine doesn't detect lbaas listener failures

2016-09-27 Thread Jiahao Liang
Hello all,

I am trying to use heat to launch lb resources with Octavia as backend. The
template I used is from
https://github.com/openstack/heat-templates/blob/master/hot/lbaasv2/lb_group.yaml
.

Following are a few observations:

1. Even though the Listener was created with ERROR status, heat will still go
ahead and mark it Creation Complete. In the heat code, it only checks
whether the root Loadbalancer status has changed from PENDING_UPDATE to ACTIVE.
And the Loadbalancer status will be changed to ACTIVE anyway, no matter the
Listener's status.


2. As the heat engine wouldn't know about the Listener's creation failure, it will
continue to create Pool\Member\Healthmonitor on top of a Listener which
actually doesn't exist. This causes a few undefined behaviors.  As a result,
those LBaaS resources in ERROR state are unable to be cleaned up
with either the normal neutron or heat API.


Is this a bug regarding LBaaS V2 for heat, or is it designed that way on
purpose?  In my opinion, it would be more natural if heat reports
CREATION_FAILURE if any of the LBaaS resources fails.

Thanks,
Jiahao Liang


[openstack-dev] TC Candidacy

2016-09-27 Thread Sean Dague
I'd like to throw my hat into the ring for the TC. I've been involved
in OpenStack since 2012. We may have interacted when working on Nova,
Devstack, Grenade, debugging the gate, or other areas of the broader
OpenStack project.

Upgrades are one of the things I'm passionate about in
OpenStack. The kinds of places OpenStack gets installed into don't
always have a change window. Making upgrades painless and boring are
a precondition for anything else OpenStack wants to do, because if
operators don't upgrade their OpenStack environments, they'll never
get any of the new enhancements we are building as a community.

I really want the end user experience with OpenStack to be better. In
the last cycle I helped spearhead the api-ref effort. In a single
cycle our API docs jumped forward to an accuracy level we haven't seen
in years. This was a great community win. I've been championing
getting rid of the extensions mechanism inside Nova, to make it clear
that there is a single compute API, which we expect to see
everywhere.

Having worked on many efforts within projects, and across projects, I
think I bring a perspective to the TC about where process meets
reality in getting things accomplished. We always need to take steps
forward to advance the agenda, but if we make the steps too high, we
can set folks up for failure. Attainable forward progress creates
success, success creates momentum, and momentum makes more progress
possible.

I would be honored to serve on the TC again if chosen.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [new][horizon] django_openstack_auth 2.4.1 release (newton)

2016-09-27 Thread no-reply
We are chuffed to announce the release of:

django_openstack_auth 2.4.1: Django authentication backend for use
with OpenStack Identity

This release is part of the newton stable release series.

With source available at:

http://git.openstack.org/cgit/openstack/django_openstack_auth/

With package available at:

https://pypi.python.org/pypi/django_openstack_auth

Please report issues through launchpad:

https://bugs.launchpad.net/django-openstack-auth

For more details, please see below.

Changes in django_openstack_auth 2.4.0..2.4.1
-

61500ab Revert "Add is_authenticated and is_anonymous properties"
7091219 Imported Translations from Zanata
03a6db3 Add is_authenticated and is_anonymous properties
159e9aa Fix wrong warning about keystone version
321b1eb Update .gitreview for stable/newton
b3f99aa Updated from global requirements
9ae513a Imported Translations from Zanata
d9f9df3 Correctly initialize TestResponses


Diffstat (except docs and test files)
-

.gitreview|  1 +
openstack_auth/locale/de/LC_MESSAGES/django.po|  8 +--
openstack_auth/locale/en_AU/LC_MESSAGES/django.po | 10 ++-
openstack_auth/locale/fr/LC_MESSAGES/django.po| 12 ++--
openstack_auth/locale/id/LC_MESSAGES/django.po| 78 +++
openstack_auth/locale/ja/LC_MESSAGES/django.po| 13 ++--
openstack_auth/locale/ko_KR/LC_MESSAGES/django.po | 20 +++---
openstack_auth/utils.py   |  2 +-
requirements.txt  |  2 +-
10 files changed, 117 insertions(+), 31 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 59bbca7..20444d8 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ oslo.policy>=1.9.0 # Apache-2.0
-python-keystoneclient!=1.8.0,!=2.1.0,>=1.7.0 # Apache-2.0
+python-keystoneclient!=2.1.0,>=2.0.0 # Apache-2.0





Re: [openstack-dev] [nova][libvirt] Lets make libvirt's domain XML canonical

2016-09-27 Thread Artom Lifshitz
By coincidence I've just written up a spec [1] that proposes an
admittedly very generic mechanism to solve this problem. I was coming
at it from the perspective of keeping the relative order of PCI device
addresses constant across evacuations. In that spec, I propose letting
the virt driver store blobs of data in the database (with some
caveats, obviously). This can then be used, for example, by the
libvirt driver to persist instance XML throughout an instance's
lifetime.

I agree with Dan Berrange that it's an overkill solution - we can
simply use libvirt itself as the storage mechanism and not have this
duplicate blob floating in the database confusing us about which
source of truth do we really want to use - except for the evacuation
edge case. When evacuating, the source host is unavailable, thus we're
unable to retrieve the instance XML from it. Also, in my conversation
with Claudiu (and hopefully he can chime in here and confirm I'm not
putting words in his mouth), the Hyper-V folks are potentially
interested in something like the driver private storage mechanism
proposed in the spec [1] to use with their new VM config export/import
feature [2].

[1] https://review.openstack.org/#/c/377806/
[2] https://review.openstack.org/#/c/340908/

On Tue, Sep 27, 2016 at 12:31 PM, Daniel P. Berrange
 wrote:
> On Tue, Sep 27, 2016 at 05:17:29PM +0100, Matthew Booth wrote:
>> Currently the libvirt driver (mostly) considers the nova db canonical. That
>> is, we can throw away libvirt's domain XML at any time and recreate it from
>> Nova. Anywhere that doesn't assume this is a bug, because whatever
>> direction we choose we don't need 2 different sources of truth. The
>> thinking behind this is that we should always know what we told libvirt,
>> and if we lose that information then that's a bug.
>>
>> This is true to a degree, and it's the reason I proposed the persistent
>> instance storage metadata spec: we lose track of how we configured an
>> instance's storage. I realised recently that this isn't the whole story,
>> though. Libvirt also automatically creates a bunch of state for us which we
>> didn't specify explicitly. We lose this every time we drop it and recreate.
>> For example, consider device addressing and ordering:
>>
>> $ nova boot ...
>>
>> We tell libvirt to give us a root disk, config disk, and a memballoon
>> device (amongst other things).
>>
>> Libvirt assigns pci addresses to all of these things.
>>
>> $ nova volume-attach ...
>>
>> We tell libvirt to create a new disk attached to the given volume.
>>
>> Libvirt assigns it a pci address.
>>
>> $ nova reboot
>>
>> We throw away libvirt's domain xml and create a new one from scratch.
>>
>> Libvirt assigns new addresses for all of these devices.
>>
>> Before reboot, the device order was: root disk, config disk, memballoon,
>> volume. After reboot the device order is: root disk, volume, config disk,
>> memballoon. Not only have all our devices changed address, which makes
>> Windows sad and paranoid about its licensing, and causes it to offline
>> volumes under certain circumstances, but our disks have been reordered.
>
> It is worth pointing out that we do have the device metadata role
> tagging support now, which lets the guest OS identify devices automatically
> at startup. In theory you could say guests should rely on using that
> on *every* boot, not merely the first boot after provisioning.
>
> I think there is a reasonable case to be made, however, that we should
> maintain a stable device configuration for an instance after its
> initial boot attempt. Arbitrarily changing hardware config on every
> reboot is being gratuitously nasty to guest admins. The example about
> causing Windows to require license reactivation is, on its own, enough
> of a reason to ensure stable hardware once initial provisioning is
> done.
>
>
>> This isn't all we've thrown away, though. Libvirt also gave us a default
>> machine type. When we create a new domain we'll get a new default machine
>> type. If libvirt has been upgraded, eg during host maintenance, this isn't
>> necessarily what it was before. Again, this can make our guests sad. Same
>> goes for CPU model, default devices, and probably many more things I
>> haven't thought of.
>
> Yes indeed.
>
>> Also... we lost the storage configuration of the guest: the information I
>> propose to persist in persistent instance storage metadata.
>>
>> We could store all of this information in Nova, but with the possible
>> exception of storage metadata it really isn't at the level of 'management':
>> it's the minutiae of the hypervisor. In order to persist all of these things
>> in Nova we'd have to implement them explicitly, and when libvirt/kvm grows
>> more stuff we'll have to do that too. We'll need to mirror the
>> functionality of libvirt in Nova, feature for feature. This is a red flag
>> for me, and I think it means we should switch to libvirt being canonical.
>>
>> I think we should be able to 

Re: [openstack-dev] [all][elections][TC] TC Candidacy

2016-09-27 Thread Zane Bitter

On 27/09/16 06:19, John Davidge wrote:

Having Stackforge as a separate Github organization and set of
>repositories was a maintenance nightmare due to the awkwardness of
>renaming projects when they "moved into OpenStack".

There's no reason that this would need a separate github structure, just
separate messaging and rules.


That's exactly what we have now.

This statement on your blog:

"[StackForge] was retired in October 2015, at which point all projects 
had to move into the OpenStack Big Tent or leave entirely."


is completely false. That never happened. There are still plenty of 
repos on git.openstack.org that are not part of the Big Tent. At no time 
has any project been required to join the Big Tent in order to continue 
being hosted.


Maybe you should consider reading up on the historical background to 
these changes. There are a lot of constraints that have to be met - from 
technical ones like the fact that it's not feasible to rename git repos 
when they move into or out of the official OpenStack project, to legal 
ones like how the TC has to designate projects in order to trigger 
certain rights and responsibilities in the (effectively immutable) 
Foundation by-laws. Rehashing all of the same old discussions without 
reference to these constraints is unlikely to be productive.


cheers,
Zane.



[openstack-dev] [nova][libvirt] Lets make libvirt's domain XML canonical

2016-09-27 Thread Matthew Booth
Currently the libvirt driver (mostly) considers the nova db canonical. That
is, we can throw away libvirt's domain XML at any time and recreate it from
Nova. Anywhere that doesn't assume this is a bug, because whatever
direction we choose we don't need 2 different sources of truth. The
thinking behind this is that we should always know what we told libvirt,
and if we lose that information then that's a bug.

This is true to a degree, and it's the reason I proposed the persistent
instance storage metadata spec: we lose track of how we configured an
instance's storage. I realised recently that this isn't the whole story,
though. Libvirt also automatically creates a bunch of state for us which we
didn't specify explicitly. We lose this every time we drop it and recreate.
For example, consider device addressing and ordering:

$ nova boot ...

We tell libvirt to give us a root disk, config disk, and a memballoon
device (amongst other things).

Libvirt assigns pci addresses to all of these things.

$ nova volume-attach ...

We tell libvirt to create a new disk attached to the given volume.

Libvirt assigns it a pci address.

$ nova reboot

We throw away libvirt's domain xml and create a new one from scratch.

Libvirt assigns new addresses for all of these devices.

Before reboot, the device order was: root disk, config disk, memballoon,
volume. After reboot the device order is: root disk, volume, config disk,
memballoon. Not only have all our devices changed address, which makes
Windows sad and paranoid about its licensing, and causes it to offline
volumes under certain circumstances, but our disks have been reordered.

This isn't all we've thrown away, though. Libvirt also gave us a default
machine type. When we create a new domain we'll get a new default machine
type. If libvirt has been upgraded, eg during host maintenance, this isn't
necessarily what it was before. Again, this can make our guests sad. Same
goes for CPU model, default devices, and probably many more things I
haven't thought of.

Also... we lost the storage configuration of the guest: the information I
propose to persist in persistent instance storage metadata.

We could store all of this information in Nova, but with the possible
exception of storage metadata it really isn't at the level of 'management':
it's the minutiae of the hypervisor. In order to persist all of these things
in Nova we'd have to implement them explicitly, and when libvirt/kvm grows
more stuff we'll have to do that too. We'll need to mirror the
functionality of libvirt in Nova, feature for feature. This is a red flag
for me, and I think it means we should switch to libvirt being canonical.

I think we should be able to create a domain, but once created we should
never redefine a domain. We can do adding and removing devices dynamically
using libvirt's apis, secure in the knowledge that libvirt will persist
this for us. When we upgrade the host, libvirt can ensure we don't break
guests which are on it. Evacuate should be pretty much the only reason to
start again.
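
A rough sketch of the "add dynamically and let libvirt persist it" idea with
the libvirt-python bindings (the domain name and disk XML are illustrative,
not what Nova generates):

    import libvirt

    disk_xml = """
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/nova/instances/example/disk.extra'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    # Apply to the running guest *and* the persistent definition, so the
    # device (and whatever address libvirt assigns it) survives reboots.
    dom.attachDeviceFlags(disk_xml,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE |
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()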

This would potentially obsolete my persistent instance metadata spec, and
the libvirt stable rescue spec, as well as this one:
https://review.openstack.org/#/c/347161/ .

I raised this in the live migration sub-team meeting, and the immediate
response was understandably conservative. I think this solves more problems
than it creates, though, and it would result in Nova's libvirt driver
getting a bit smaller and a bit simpler. That's a big win in my book.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-27 Thread Timur Nurlygayanov
Hi milan,

we have measured the test coverage for OpenStack components with the
coverage.py tool [1]. It is a very easy tool and it allows measuring
coverage by lines of code, etc. (several metrics are available).

[1] https://coverage.readthedocs.io/en/coverage-4.2/
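
For reference, the basic programmatic use looks roughly like this (a sketch;
work() is just a stand-in for whatever code the tests exercise):

    import coverage

    def work():  # stand-in for the code under test
        return sum(range(10))

    cov = coverage.Coverage()
    cov.start()
    work()
    cov.stop()
    cov.save()
    cov.report()  # line-based report; cov.html_report() is also available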

On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier 
wrote:

> Hi,
>
> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k  wrote:
>
>> Dear Stackers,
>> I'd like to gather some overview on the $Sub: is there some
>> infrastructure in place to gather such stats? Are there any groups
>> interested in it? Any plans to establish such infrastructure?
>>
> I am working on such a tool with mixed results so far. Here's my approach
> taking let's say Nova as an example:
>
> 1) Print all the routes known to nova (available as a python-routes
> object:  nova.api.openstack.compute.APIRouterV21())
> 2) "Normalize" the Nova routes
> 3) Take the logs produced by Tempest during a tempest run (in
> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
> 8774)
> 4) "Normalize" the tested-by-tempest Nova routes.
> 5) Compare the two sets of routes
> 6) 
> 7) Profit !!
>
> So the hard part is obviously the normalizing of the URLs. I am currently
> using a ton of regexes :) That's not fun.
>
> I'll let you guys know if I have something to show.
>
> I think there's real interest on the topic (it comes up every year or so),
> but no definitive answer/tool.
>
> Cheers,
> Jordan
>
>
>
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
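
P.S. A rough illustration of Jordan's steps 2, 4 and 5 above (a sketch only;
the toy known_routes and urls_from_logs stand in for the outputs of steps 1
and 3):

    import re

    UUID = re.compile(r'[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}')
    NUMERIC = re.compile(r'/\d+(?=/|$)')

    def normalize(path):
        # Collapse obviously variable segments so routes compare as sets.
        path = UUID.sub('{id}', path)
        path = NUMERIC.sub('/{id}', path)
        return path.rstrip('/')

    known_routes = ['/servers/{id}/action', '/servers/{id}']  # step 1 (toy)
    urls_from_logs = ['/servers/4a7e5bb8-9f3e-4d81-a3f4-2a2f9b6de118']  # step 3 (toy)

    known = {normalize(r) for r in known_routes}
    tested = {normalize(u) for u in urls_from_logs}
    print(sorted(known - tested))  # routes never exercised by the run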


-- 

Timur,
Senior QA Manager
OpenStack Projects
Mirantis Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections][TC] TC Candidacy

2016-09-27 Thread Sean Dague
On 09/27/2016 06:19 AM, John Davidge wrote:

>> Hmm, I disagree about that. I think that experience actually *has* shown
>> us that there is a single set of rules that can/should be applied to all
>> projects that wish to be called an OpenStack project.
> 
> We may have to agree to disagree here. Look at recent efforts to enforce
> python 3 compatibility, for example. Some projects had reasons why they
> didn't want to, others had reasons why they couldn't, and some simply
> didn't view it as a priority. We'd be much more productive in defining and
> enforcing rules like this if there was a narrower scope of projects they
> applied to.

To clarify on this point, the main projects that said this probably
wasn't doable in the way first proposed were within the smaller tent
that you defined earlier.

The reason this particular goal is challenging isn't really the big
tent, it's the legacy that larger projects carry forward, which just
means the work takes more than a cycle to do.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Let's make libvirt's domain XML canonical

2016-09-27 Thread Daniel P. Berrange
On Tue, Sep 27, 2016 at 05:17:29PM +0100, Matthew Booth wrote:
> Currently the libvirt driver (mostly) considers the nova db canonical. That
> is, we can throw away libvirt's domain XML at any time and recreate it from
> Nova. Anywhere that doesn't assume this is a bug, because whatever
> direction we choose we don't need 2 different sources of truth. The
> thinking behind this is that we should always know what we told libvirt,
> and if we lose that information then that's a bug.
> 
> This is true to a degree, and it's the reason I proposed the persistent
> instance storage metadata spec: we lose track of how we configured an
> instance's storage. I realised recently that this isn't the whole story,
> though. Libvirt also automatically creates a bunch of state for us which we
> didn't specify explicitly. We lose this every time we drop it and recreate.
> For example, consider device addressing and ordering:
> 
> $ nova boot ...
> 
> We tell libvirt to give us a root disk, config disk, and a memballoon
> device (amongst other things).
> 
> Libvirt assigns pci addresses to all of these things.
> 
> $ nova volume-attach ...
> 
> We tell libvirt to create a new disk attached to the given volume.
> 
> Libvirt assigns it a pci address.
> 
> $ nova reboot
> 
> We throw away libvirt's domain xml and create a new one from scratch.
> 
> Libvirt assigns new addresses for all of these devices.
> 
> Before reboot, the device order was: root disk, config disk, memballoon,
> volume. After reboot the device order is: root disk, volume, config disk,
> memballoon. Not only have all our devices changed address, which makes
> Windows sad and paranoid about its licensing, and causes it to offline
> volumes under certain circumstances, but our disks have been reordered.

It is worth pointing out that we do have the device metadata role
tagging support now, which lets guest OS identify devices automatically
at startup. In theory you could say guests should rely on using that
on *every* boot, not merely the first boot after provisioning.

I think there is reasonable case to be made, however, that we should
maintain a stable device configuration for an instance after its
initial boot attempt. Arbitrarily changing hardware config on every
reboot is being gratuitously nasty to guest admins. The example about
causing Windows to require license reactivation is, on its own, enough
of a reason to ensure stable hardware once initial provisioning is
done.


> This isn't all we've thrown away, though. Libvirt also gave us a default
> machine type. When we create a new domain we'll get a new default machine
> type. If libvirt has been upgraded, eg during host maintenance, this isn't
> necessarily what it was before. Again, this can make our guests sad. Same
> goes for CPU model, default devices, and probably many more things I
> haven't thought of.

Yes indeed.

> Also... we lost the storage configuration of the guest: the information I
> propose to persist in persistent instance storage metadata.
> 
> We could store all of this information in Nova, but with the possible
> exception of storage metadata it really isn't at the level of 'management':
> it's the minutia of the hypervisor. In order to persist all of these things
> in Nova we'd have to implement them explicitly, and when libvirt/kvm grows
> more stuff we'll have to do that too. We'll need to mirror the
> functionality of libvirt in Nova, feature for feature. This is a red flag
> for me, and I think it means we should switch to libvirt being canonical.
> 
> I think we should be able to create a domain, but once created we should
> never redefine a domain. We can do adding and removing devices dynamically
> using libvirt's apis, secure in the knowledge that libvirt will persist
> this for us. When we upgrade the host, libvirt can ensure we don't break
> guests which are on it. Evacuate should be pretty much the only reason to
> start again.

And in fact we do persist the guest XML with libvirt already. We sadly
never use that info though - we just blindly overwrite it every time
with newly generated XML.

Fixing this should not be technically difficult for the most part.
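
(For reference, the already-persisted definition is trivially retrievable; a
sketch, with the domain name as a placeholder:)

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    # The persistent (inactive) config libvirt already stores for us,
    # i.e. the info we currently ignore and blindly overwrite.
    xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE)
    conn.close()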

> I raised this in the live migration sub-team meeting, and the immediate
> response was understandably conservative. I think this solves more problems
> than it creates, though, and it would result in Nova's libvirt driver
> getting a bit smaller and a bit simpler. That's a big win in my book.

I don't think it'll get significantly smaller/simpler, but it will
definitely be more intelligent and user friendly to do this IMHO.
As mentioned above, I think the windows license reactivation issue
alone is enough of a reason to do this.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org 

Re: [openstack-dev] [nova][libvirt] Let's make libvirt's domain XML canonical

2016-09-27 Thread Chris Friesen

On 09/27/2016 10:17 AM, Matthew Booth wrote:


I think we should be able to create a domain, but once created we should never
redefine a domain. We can do adding and removing devices dynamically using
libvirt's apis, secure in the knowledge that libvirt will persist this for us.
When we upgrade the host, libvirt can ensure we don't break guests which are on
it. Evacuate should be pretty much the only reason to start again.


Sounds interesting.  How would you handle live migration?

Currently we regenerate the XML file on the destination from the nova DB.  I 
guess in your proposal we'd need some way of copying the XML file from the 
source to the dest, and then modifying the appropriate segments to adjust things 
like CPU/NUMA pinning?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Version 1.2

2016-09-27 Thread Hayes, Graham
On 27/09/2016 01:36, Ryan Petrello wrote:
> Apologies for the trouble this caused.  As Dave mentioned, this change
> warranted a new major version of pecan, and I missed it.  I've reverted the
> offending commit and re-released a new version of pecan (1.2.1) to PyPI:
>
> https://github.com/pecan/pecan/commit/4cfe319738304ca5dcc97694e12b3d2b2e24b1bb
> https://github.com/pecan/pecan/commit/b3699aeae1f70b223a84308894523a64ede2b083
> https://pypi.python.org/pypi/pecan/1.2.1
>
> Once the dust settles in a few days, I'll re-release the new functionality in
> a major point release of pecan.
>
> On 09/26/16 09:21 PM, Dave McCowan (dmccowan) wrote:
>>
>> The Barbican project uses Pecan as our web framework.
>>
>> At some point recently, OpenStack started picking up their new version 1.2.  
>> This version [1] changed one of their APIs such that certain calls that used 
>> to return 200 now return 204.  This has caused immediate problems for 
>> Barbican (our gates for /master, stable/newton, and stable/mitaka all fail) 
>> and a potential larger impact (changing the return code of REST calls is not 
>> acceptable for a stable API).
>>
>> Before I start hacking three releases of Barbican to work around Pecan's 
>> change, I'd like to ask:  are any other projects having trouble with
>> Pecan Version 1.2?  Would it be possible/appropriate to block this version 
>> as not working for OpenStack?
>>
>> Thanks,
>> Dave McCowan
>>
>>
>> [1]
>> http://pecan.readthedocs.io/en/latest/changes.html
>> https://github.com/pecan/pecan/issues/72
>>
>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

Thanks Ryan

Designate hit a small issue as well, so I proposed
https://review.openstack.org/377702 to allow 1.2.1
to be installed, and block 1.2.
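
(For reference, the exclusion presumably ends up as a pip version specifier
along the lines of pecan!=1.2,>=1.0.0 in global-requirements; see the review
for the exact line.)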

It's been approved, so it should be working its way to a
repo near you soon.

Thanks,

Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Parse ISO8601 (open) time intervals

2016-09-27 Thread Doug Hellmann
Excerpts from milanisko k's message of 2016-09-27 12:30:09 +:
> Hello Stackers!
> 
> The ironic inspector project keeps track of introspection finished_at time
> stamps.
> We're just discussing how to reasonably query time ranges over the API[1]
> to serve matching introspection statuses to the user.
> Wikipedia[2] mentions the ISO8601 time interval specification (and there
> are open-interval extensions to that).
> It would be nice to be able to specify a query like :
>  /v1/introspection?finished_at=2016:09:27:14:17/PT1H
> to fetch all introspection statuses that finished within 1hour around 14:17
> Today,
> or to be able to state an open-ended interval:
> /v1/introspection?finished_at=2016:09:27:14:17/
> but oslo_utils.timeutils lacks parsing support for ISO8601 time intervals.
> 
> I'd like to ask whether other projects need to parse time intervals and/or
> how do they achieve that.
> 
> Thanks!
> milan
> 
> [1]
> https://review.openstack.org/#/c/375045/3/specs/list-introspection-statuses.rst
> [2] https://en.wikipedia.org/wiki/ISO_8601#Time_intervals

You may want to have a look at the dateutil library.
https://dateutil.readthedocs.io/en/stable/
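
A rough sketch of the parsing involved, using dateutil for the timestamps
(the duration handling here is a toy, hours-only illustration, not a full
ISO 8601 implementation):

    import datetime
    import re

    from dateutil import parser

    HOURS = re.compile(r'^PT(\d+)H$')  # toy matcher for durations like PT1H

    def parse_interval(value):
        start_s, _, end_s = value.partition('/')
        start = parser.parse(start_s)  # assumes an ISO-ish timestamp
        if not end_s:
            return start, None         # open-ended interval
        m = HOURS.match(end_s)
        if m:                          # start/duration form
            return start, start + datetime.timedelta(hours=int(m.group(1)))
        return start, parser.parse(end_s)  # start/end form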

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-loganalyze, project log parsing, or ...

2016-09-27 Thread Sean Dague
On 09/27/2016 11:36 AM, Andrew Laski wrote:
> Hello all,
> 
> Recently I noticed that people would look at logs from a Zuul née
> Jenkins CI run and comment something like "there seem to be more
> warnings in here than usual." And so I thought it might be nice to
> quantify that sort of thing so we didn't have to rely on gut feelings.
> 
> So I threw together https://review.openstack.org/#/c/376531 which is a
> script that lives in the Nova tree, gets called from a devstack-gate
> post_test_hook, and outputs an n-stats.json file which can be seen at
> http://logs.openstack.org/06/375106/8/check/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/e103612/logs/n-stats.json.
> This provides just a simple way to compare two runs and spot large
> changes between them. Perhaps later things could get fancy and these
> stats could be tracked over time. I am also interested in adding stats
> for things that are a bit project specific like how long (max, min, med)
> it took to boot an instance, or what's probably better to track is how
> many operations that took for some definition of an operation.
> 
> I received some initial feedback that this might be a better fit in the
> os-loganalyze project so I took a look over there. So I cloned the
> project to take a look and quickly noticed
> http://git.openstack.org/cgit/openstack-infra/os-loganalyze/tree/README.rst#n13.
> That makes me think it would not be a good fit there because what I'm
> looking to do relies on parsing the full file, or potentially multiple
> files, in order to get useful data.
> 
> So my questions: does this seem like a good fit for os-loganalyze? If
> not is there another infra/QA project that this would be a good fit for?
> Or would people be okay with a lone project like Nova implementing this
> in tree for their own use?

Some things to keep in mind:

post_test_hook was really designed as a way for dedicated project
specific jobs to do something special instead of tempest. It currently
only can exist once in a job. While code in nova taking ownership for
nova only jobs is totally fine, it's a bit weird if the intent is to use
this on the integrated gate jobs that are shared between a bunch of
projects.

If we think that other projects are going to want to do similar things,
starting with a collaboration space for that up front would be useful.
Especially as bunch of it is going to be some shared log parsing.

If we think other projects want to do similar things, we can also move
to putting down json logs in the devstack runs in the gate, which would
make the parsing less guesswork.

One of the reasons I had suggested something like dynamically doing this
with os-loganalyze is that it provides the ability to ask a new question
of old data, on all the data that exists. The experience with Elastic
Recheck has been that odd regressions happen when no one is looking,
then being able to go back through history is really useful. That is one
mechanism to do it on demand.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-loganalyze, project log parsing, or ...

2016-09-27 Thread Andrew Laski


On Tue, Sep 27, 2016, at 11:59 AM, Ian Cordasco wrote:
> On Tue, Sep 27, 2016 at 10:36 AM, Andrew Laski  wrote:
> > Hello all,
> >
> > Recently I noticed that people would look at logs from a Zuul née
> > Jenkins CI run and comment something like "there seem to be more
> > warnings in here than usual." And so I thought it might be nice to
> > quantify that sort of thing so we didn't have to rely on gut feelings.
> >
> > So I threw together https://review.openstack.org/#/c/376531 which is a
> > script that lives in the Nova tree, gets called from a devstack-gate
> > post_test_hook, and outputs an n-stats.json file which can be seen at
> > http://logs.openstack.org/06/375106/8/check/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/e103612/logs/n-stats.json.
> > This provides just a simple way to compare two runs and spot large
> > changes between them. Perhaps later things could get fancy and these
> > stats could be tracked over time. I am also interested in adding stats
> > for things that are a bit project specific like how long (max, min, med)
> > it took to boot an instance, or what's probably better to track is how
> > many operations that took for some definition of an operation.
> >
> > I received some initial feedback that this might be a better fit in the
> > os-loganalyze project so I took a look over there. So I cloned the
> > project to take a look and quickly noticed
> > http://git.openstack.org/cgit/openstack-infra/os-loganalyze/tree/README.rst#n13.
> > That makes me think it would not be a good fit there because what I'm
> > looking to do relies on parsing the full file, or potentially multiple
> > files, in order to get useful data.
> >
> > So my questions: does this seem like a good fit for os-loganalyze? If
> > not is there another infra/QA project that this would be a good fit for?
> > Or would people be okay with a lone project like Nova implementing this
> > in tree for their own use?
> 
> My first instinct was that this could be tracked along with the Health
> project but after a little thought log warnings aren't necessarily an
> indication of the health of the project.
> 
> It might make sense to start this out in Nova to see how useful it
> ends up being, but I can see Glance being interested in this at some
> point if it ends up being useful to Nova.

Yeah, starting it in Nova to gauge its usefulness was my goal. I could
see growing a common framework out of this that projects could pull in,
like an oslo project (though I'm not sure if oslo would be the right fit),
and then having each project provide specific parsers for their data.


> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][elections][TC] TC candidacy

2016-09-27 Thread Sean McGinnis
I would like to announce my candidacy for a position on the Technical
Committee.

I work for Dell EMC with over a decade (quite a bit over, but I don't want to
think about that) in storage and software development. I have been involved in
OpenStack since the Icehouse cycle and have served as the Cinder PTL since the
Mitaka release.

I think it's important to have active PTLs on the TC. TC decisions need to be
grounded in the reality of day to day project development. I think it will also
be good for me as a PTL to be forced to take a wider view of things across the
whole ecosystem.

I think outreach and education is important to spread interest in OpenStack and
provide awareness to reach new people. I've spoken at several Summits, as well
as OpenStack Days events and (more pertinent to Cinder) at Storage Network
Industry Association (SNIA) events.

I think it's important to get feedback from actual operators and end users. I
have tried to reach out to these users as well as attend the Ops Midcycle in
order to close that feedback loop.

I would continue to work towards these things and bring that feedback to the
TC - making sure the decisions we make have the end user in mind.

Another goal for me is simplicity. With the Big Tent, more interacting
projects, and lots of competing interests, things have gotten much more
complicated over the last several releases.

I say this while acknowledging within Cinder - while I have been PTL - a lot of
complexity has been added. In most cases there are very valid reasons for these
changes. So even with a desire to make things as simple as possible, I consider
myself a pragmatist and recognize that complexity is sometimes unavoidable in
order to move forward. But one thing I would try to focus on as a TC member
would be to reduce complexity anywhere it's possible and where it makes sense.

It would be an honor to serve as a member of the TC and help do whatever I can
to help the community continue to succeed and grow.

Thank you for your consideration.

Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-loganalyze, project log parsing, or ...

2016-09-27 Thread Matthew Treinish
On Tue, Sep 27, 2016 at 11:36:07AM -0400, Andrew Laski wrote:
> Hello all,
> 
> Recently I noticed that people would look at logs from a Zuul née
> Jenkins CI run and comment something like "there seem to be more
> warnings in here than usual." And so I thought it might be nice to
> quantify that sort of thing so we didn't have to rely on gut feelings.
> 
> So I threw together https://review.openstack.org/#/c/376531 which is a
> script that lives in the Nova tree, gets called from a devstack-gate
> post_test_hook, and outputs an n-stats.json file which can be seen at
> http://logs.openstack.org/06/375106/8/check/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/e103612/logs/n-stats.json.
> This provides just a simple way to compare two runs and spot large
> changes between them. Perhaps later things could get fancy and these
> stats could be tracked over time. I am also interested in adding stats
> for things that are a bit project specific like how long (max, min, med)
> it took to boot an instance, or what's probably better to track is how
> many operations that took for some definition of an operation.
> 
> I received some initial feedback that this might be a better fit in the
> os-loganalyze project so I took a look over there. So I cloned the
> project to take a look and quickly noticed
> http://git.openstack.org/cgit/openstack-infra/os-loganalyze/tree/README.rst#n13.
> That makes me think it would not be a good fit there because what I'm
> looking to do relies on parsing the full file, or potentially multiple
> files, in order to get useful data.
> 
> So my questions: does this seem like a good fit for os-loganalyze? If
> not is there another infra/QA project that this would be a good fit for?
> Or would people be okay with a lone project like Nova implementing this
> in tree for their own use?
> 

I think having this in os-loganalyze makes sense since we use that for
visualizing the logs already. It also means we get it for free on all the log
files. But, if it's not a good fit for a technical reason then I think creating
another small tool under QA or infra would be a good path forward, since there
really isn't anything nova-specific in that.

I would caution against doing it as a one off in a project repo doesn't seem
like the best path forward for something like this. We actually tried to do
something similar to that in the past inside the tempest repo:

http://git.openstack.org/cgit/openstack/tempest/tree/tools/check_logs.py

and

http://git.openstack.org/cgit/openstack/tempest/tree/tools/find_stack_traces.py

all it did was cause confusion because no one knew where the output was coming
from. Although, the output from those tools was also misleading, which was
likely a bigger problem. So this probably won't be an issue if you add a json
output to the jobs.

I also wonder if the JSONFormatter from oslo.log:

http://docs.openstack.org/developer/oslo.log/api/formatters.html#oslo_log.formatters.JSONFormatter

would be useful here. We can probably turn that on if it makes things easier.
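
(Roughly, as a sketch, not the actual devstack wiring:)

    import logging

    from oslo_log import formatters

    handler = logging.StreamHandler()
    handler.setFormatter(formatters.JSONFormatter())
    logging.getLogger().addHandler(handler)
    # Records now come out as JSON objects, so a stats script can use
    # json.loads() per line instead of regex guesswork.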

-Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-27 Thread Davanum Srinivas
Sorry for the top post - fyi, i've submitted a review for OpenStackSalt
https://review.openstack.org/#/c/377906/

-- Dims

On Mon, Sep 26, 2016 at 2:58 AM, Flavio Percoco  wrote:
> On 22/09/16 17:15 -0400, Anita Kuno wrote:
>>
>> On 16-09-21 01:11 PM, Doug Hellmann wrote:
>>>
>>> Excerpts from Clint Byrum's message of 2016-09-21 08:56:24 -0700:

 I think it might also be useful if we could make the meeting bot remind
 teams of any pending actions they need to take such as elections upon
 #startmeeting.
>>>
>>> I could see that being useful, yes.
>>>
>> I am not convinced this situation arose due to lack of available
>> information.
>
>
> You may be right here but I don't think having other means to spread this
> information is a bad thing, if there's a way to automate this, of course.
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-loganalyze, project log parsing, or ...

2016-09-27 Thread Andrew Laski


On Tue, Sep 27, 2016, at 12:39 PM, Matthew Treinish wrote:
> On Tue, Sep 27, 2016 at 11:36:07AM -0400, Andrew Laski wrote:
> > Hello all,
> > 
> > Recently I noticed that people would look at logs from a Zuul née
> > Jenkins CI run and comment something like "there seem to be more
> > warnings in here than usual." And so I thought it might be nice to
> > quantify that sort of thing so we didn't have to rely on gut feelings.
> > 
> > So I threw together https://review.openstack.org/#/c/376531 which is a
> > script that lives in the Nova tree, gets called from a devstack-gate
> > post_test_hook, and outputs an n-stats.json file which can be seen at
> > http://logs.openstack.org/06/375106/8/check/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/e103612/logs/n-stats.json.
> > This provides just a simple way to compare two runs and spot large
> > changes between them. Perhaps later things could get fancy and these
> > stats could be tracked over time. I am also interested in adding stats
> > for things that are a bit project specific like how long (max, min, med)
> > it took to boot an instance, or what's probably better to track is how
> > many operations that took for some definition of an operation.
> > 
> > I received some initial feedback that this might be a better fit in the
> > os-loganalyze project so I took a look over there. So I cloned the
> > project to take a look and quickly noticed
> > http://git.openstack.org/cgit/openstack-infra/os-loganalyze/tree/README.rst#n13.
> > That makes me think it would not be a good fit there because what I'm
> > looking to do relies on parsing the full file, or potentially multiple
> > files, in order to get useful data.
> > 
> > So my questions: does this seem like a good fit for os-loganalyze? If
> > not is there another infra/QA project that this would be a good fit for?
> > Or would people be okay with a lone project like Nova implementing this
> > in tree for their own use?
> > 
> 
> I think having this in os-loganalyze makes sense since we use that for
> visualizing the logs already. It also means we get it for free on all the
> log
> files. But, if it's not a good fit for a technical reason then I think
> creating
> another small tool under QA or infra would be a good path forward. Since
> there
> really isn't anything nova specific in that.

There's nothing Nova specific atm because I went for low hanging fruit.
But if the plan is to have Nova specific, Cinder specific, Glance
specific, etc... things in there, do people still feel that a QA/infra
tool is the right path forward? That's my only hesitation here.

> 
> I would caution against doing it as a one off in a project repo doesn't
> seem
> like the best path forward for something like this. We actually tried to
> do
> something similar to that in the past inside the tempest repo:
> 
> http://git.openstack.org/cgit/openstack/tempest/tree/tools/check_logs.py
> 
> and
> 
> http://git.openstack.org/cgit/openstack/tempest/tree/tools/find_stack_traces.py
> 
> all it did was cause confusion because no one knew where the output was
> coming
> from. Although, the output from those tools was also misleading, which
> was
> likely a bigger problm. So this probably won't be an issue if you add a
> json
> output to the jobs.
> 
> I also wonder if the JSONFormatter from oslo.log:
> 
> http://docs.openstack.org/developer/oslo.log/api/formatters.html#oslo_log.formatters.JSONFormatter
> 
> would be useful here. We can proabbly turn that on if it makes things
> easier.
> 
> -Matt Treinish
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] os-loganalyze, project log parsing, or ...

2016-09-27 Thread Andrew Laski
Hello all,

Recently I noticed that people would look at logs from a Zuul née
Jenkins CI run and comment something like "there seem to be more
warnings in here than usual." And so I thought it might be nice to
quantify that sort of thing so we didn't have to rely on gut feelings.

So I threw together https://review.openstack.org/#/c/376531 which is a
script that lives in the Nova tree, gets called from a devstack-gate
post_test_hook, and outputs an n-stats.json file which can be seen at
http://logs.openstack.org/06/375106/8/check/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/e103612/logs/n-stats.json.
This provides just a simple way to compare two runs and spot large
changes between them. Perhaps later things could get fancy and these
stats could be tracked over time. I am also interested in adding stats
for things that are a bit project specific like how long (max, min, med)
it took to boot an instance, or what's probably better to track is how
many operations that took for some definition of an operation.
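
The counting itself is nothing fancy; roughly along these lines (a
simplified sketch of the idea, not the actual script; the file names are
illustrative):

    import json
    import re
    from collections import Counter

    LEVELS = re.compile(r'\b(DEBUG|INFO|WARNING|ERROR|TRACE)\b')

    def log_stats(path):
        counts = Counter()
        with open(path) as f:
            for line in f:
                match = LEVELS.search(line)
                if match:
                    counts[match.group(1)] += 1
        return counts

    with open('n-stats.json', 'w') as out:
        json.dump(log_stats('logs/screen-n-cpu.txt'), out)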

I received some initial feedback that this might be a better fit in the
os-loganalyze project so I took a look over there. So I cloned the
project to take a look and quickly noticed
http://git.openstack.org/cgit/openstack-infra/os-loganalyze/tree/README.rst#n13.
That makes me think it would not be a good fit there because what I'm
looking to do relies on parsing the full file, or potentially multiple
files, in order to get useful data.

So my questions: does this seem like a good fit for os-loganalyze? If
not is there another infra/QA project that this would be a good fit for?
Or would people be okay with a lone project like Nova implementing this
in tree for their own use?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-loganalyze, project log parsing, or ...

2016-09-27 Thread Ian Cordasco
On Tue, Sep 27, 2016 at 10:36 AM, Andrew Laski  wrote:
> Hello all,
>
> Recently I noticed that people would look at logs from a Zuul née
> Jenkins CI run and comment something like "there seem to be more
> warnings in here than usual." And so I thought it might be nice to
> quantify that sort of thing so we didn't have to rely on gut feelings.
>
> So I threw together https://review.openstack.org/#/c/376531 which is a
> script that lives in the Nova tree, gets called from a devstack-gate
> post_test_hook, and outputs an n-stats.json file which can be seen at
> http://logs.openstack.org/06/375106/8/check/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/e103612/logs/n-stats.json.
> This provides just a simple way to compare two runs and spot large
> changes between them. Perhaps later things could get fancy and these
> stats could be tracked over time. I am also interested in adding stats
> for things that are a bit project specific like how long (max, min, med)
> it took to boot an instance, or what's probably better to track is how
> many operations that took for some definition of an operation.
>
> I received some initial feedback that this might be a better fit in the
> os-loganalyze project so I took a look over there. So I cloned the
> project to take a look and quickly noticed
> http://git.openstack.org/cgit/openstack-infra/os-loganalyze/tree/README.rst#n13.
> That makes me think it would not be a good fit there because what I'm
> looking to do relies on parsing the full file, or potentially multiple
> files, in order to get useful data.
>
> So my questions: does this seem like a good fit for os-loganalyze? If
> not is there another infra/QA project that this would be a good fit for?
> Or would people be okay with a lone project like Nova implementing this
> in tree for their own use?

My first instinct was that this could be tracked along with the Health
project, but after a little thought, log warnings aren't necessarily an
indication of the health of the project.

It might make sense to start this out in Nova to see how useful it
ends up being, but I can see Glance being interested in this at some
point if it ends up being useful to Nova.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vote][kolla] deprecation for fedora distro support

2016-09-27 Thread Jeffrey Zhang
Thanks for all guys.

So we have reached an agreement now. We will mark Fedora deprecated now
(Newton), and plan to drop it in the O cycle.

On Fri, Sep 23, 2016 at 6:05 PM, Haïkel  wrote:

> 2016-09-21 16:34 GMT+02:00 Steven Dake (stdake) :
> >
> >
> >
> > On 9/20/16, 11:18 AM, "Haïkel"  wrote:
> >
> > 2016-09-19 19:40 GMT+02:00 Jeffrey Zhang :
> > > Kolla core reviewer team,
> > >
> > > Kolla supports multiple Linux distros now, including
> > >
> > > * Ubuntu
> > > * CentOS
> > > * RHEL
> > > * Fedora
> > > * Debian
> > > * OracleLinux
> > >
> > > But only Ubuntu, CentOS, and OracleLinux are widely used and we have
> > > a robust gate to ensure the quality.
> > >
> > > For fedora, Kolla hasn't any tests and nobody reports any bugs
> > > about it (i.e. nobody uses fedora as a base distro image). We (kolla
> > > team) also do not have enough resources to support so many Linux
> > > distros. I prefer to deprecate fedora support now. This was talked
> > > about in the past but was inconclusive [0].
> > >
> > > Please vote:
> > >
> > > 1. Kolla needs to support fedora (if so, we need some guys to set up
> > > the gate and fix all the issues ASAP in the O cycle)
> > > 2. Kolla should deprecate fedora support
> > >
> > > [0] http://lists.openstack.org/pipermail/openstack-dev/2016-June/098526.html
> > >
> >
> >
> > /me has no voting rights
> >
> > As RDO maintainer and Fedora developer, I support option 2, as it'd be
> > very time-consuming to maintain Fedora support.
> >
> >
> > >
> > > --
> > > Regards,
> > > Jeffrey Zhang
> > > Blog: http://xcodest.me
> > >
> >
> > Haikel,
> >
> > Quick Q – are you saying maintaining fedora in kolla is time consuming or
> > that maintaining rdo for fedora is time consuming (and something that is
> > being dropped)?
> >
>
> Both. In my experience maintaining RDO on Fedora, I encountered
> similar issues to Kolla's. It's doable, but a lot of work.
> One of the biggest problems is updates: you may have disruptive
> updates to python module packages quite frequently or, more rarely,
> see some updates reverted.
> So keeping Fedora in good shape would require a decent amount of effort.
>
> Regards,
> H.
>
>
>
> > Thanks for improving clarity on this situation.
> >
> > Regards
> > -steve
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vote][kolla] deprecation for debian distro support

2016-09-27 Thread Jeffrey Zhang
Voting time is up.

Then Debian will stay in Kolla and anyone is welcome to take over the
maintenance work.
But if Debian is unmaintained in the next cycle, we will try to remove it
again. :)

On Fri, Sep 23, 2016 at 3:53 PM, Christian Berendt <
bere...@betacloud-solutions.de> wrote:

> > On 22 Sep 2016, at 17:16, Ryan Hallisey  wrote:
> >
> > I agree with Michal and Martin. I was a little reluctant to respond here
> because the Debian additions are new, while Fedora has been around since
> the beginning and never got a ton of testing.
> >
> > Berendt what's your take here?
>
> It is fine for me to keep Debian if someone committed to continue working
> on it.
>
> Christian.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] install guide has moved

2016-09-27 Thread Andreas Jaeger
On 2016-09-27 17:02, Andreas Jaeger wrote:
> On 2016-09-27 16:54, Ruby Loo wrote:
>> Hi,
>>
>>  
>>
>> Thanks to the huge efforts put in by Mathieu Mitchell (mat128) and Jay
>> Faulkner (JayF), we've moved ironic's install guide from the developer
>> documentation to the official openstack site [1]. Isn't it a beauty? :D
>>
>>  
>>
>> Please update your bookmarks to point to the new location, and help us
>> improve the install guide by providing feedback and submitting patches.
>>
>>  
>>
>> --ruby
>>
>>  
>>
>> [1] http://docs.openstack.org/project-install-guide/baremetal/draft/
> 
> Be aware that this is the draft location - the version from master, so
> this will soon be the Ocata version.
> 
> Once newton is released, docs.openstack.org will point to the Newton
> version which is published from stable/newton branch already to:
> 
> http://docs.openstack.org/project-install-guide/baremetal/newton/

Oh, the draft version is really a beauty. Can you backport that to
newton, please?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Interface detach results in incorrect DHCP6 functioning on higher-index interfaces

2016-09-27 Thread Kevin Benton
Hi,

Sorry about the huge delay. Is this behavior still present? Did you file a
bug here? https://bugs.launchpad.net/neutron

Bugs reported via the mailing list tend to fall through the cracks.

Cheers,
Kevin Benton

On Tue, Mar 8, 2016 at 7:50 AM, Andrei Radulescu-Banu <
andrei.radulescu-b...@exfo.com> wrote:

> I'm using the latest Devstack installed as a standalone, and testing the
> interface detach functionality through the Horizon GUI. In my case, I have
> a special Linux image with DHCP and DHCPv6 enabled on all interfaces. Here
> is my config:
> - Two separate subnets, 'private', with DHCP enabled, and 'private6', with
> DHCP6 enabled
> - Interface eth0 on 'private', eth1 on 'private6', eth2 on 'private' and
> eth3 again on 'private6'
> - Initially, eth0 and eth2 acquire a DHCP address; eth1 and eth3 a DHCP6
> address. Note their MAC addresses in the display.
>
> [stack@paradise devstack]$ neutron net-show private
> +-+--+
> | Field   | Value|
> +-+--+
> | admin_state_up  | True |
> | availability_zone_hints |  |
> | availability_zones  | nova |
> | id  | e63dc15c-bc65-41ef-8aaf-ca047d8f208c |
> | ipv4_address_scope  |  |
> | ipv6_address_scope  |  |
> | mtu | 1450 |
> | name| private  |
> | port_security_enabled   | True |
> | router:external | False|
> | shared  | False|
> | status  | ACTIVE   |
> | subnets | 9b3df9c8-6de9-4373-a567-6b59b5312d8a |
> | tenant_id   | 2876a2eb470b4ff1a8a04c960820f317 |
> +-+--+
> [stack@paradise devstack]$ neutron net-show private6
> +-+--+
> | Field   | Value|
> +-+--+
> | admin_state_up  | True |
> | availability_zone_hints |  |
> | availability_zones  | nova |
> | id  | 67e7aa17-50e3-436a-99c9-1618683d2983 |
> | ipv4_address_scope  |  |
> | ipv6_address_scope  |  |
> | mtu | 1450 |
> | name| private6 |
> | port_security_enabled   | True |
> | router:external | False|
> | shared  | False|
> | status  | ACTIVE   |
> | subnets | a6e39a5b-7153-481c-acd0-72ac26bb6288 |
> | tenant_id   | 2876a2eb470b4ff1a8a04c960820f317 |
> +-+--+
> [stack@paradise devstack]$ neutron subnet-show private-subnet
> +---++
> | Field | Value  |
> +---++
> | allocation_pools  | {"start": "10.1.0.2", "end": "10.1.0.254"} |
> | cidr  | 10.1.0.0/24|
> | dns_nameservers   ||
> | enable_dhcp   | True   |
> | gateway_ip| 10.1.0.1   |
> | host_routes   ||
> | id| 9b3df9c8-6de9-4373-a567-6b59b5312d8a   |
> | ip_version| 4  |
> | ipv6_address_mode ||
> | ipv6_ra_mode  ||
> | name  | private-subnet |
> | network_id| e63dc15c-bc65-41ef-8aaf-ca047d8f208c   |
> | subnetpool_id ||
> | tenant_id | 2876a2eb470b4ff1a8a04c960820f317   |
> +---++
> [stack@paradise devstack]$ neutron subnet-show private-subnet6
> +---+--+
> | Field | Value|
> 

[openstack-dev] [refstack] Team meeting cancelled

2016-09-27 Thread Paul Van eck

Hello folks,

The Tuesday IRC meeting will be cancelled as Catherine will be unavailable
that day.

Thanks,

Paul
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Security] XML Attacks and DefusedXML on Global Requirements

2016-09-27 Thread Travis McPeak
There are several attacks (https://pypi.python.org/pypi/defusedxml#id3)
that can be performed when XML is parsed from untrusted input.  DefusedXML
offers safe alternatives to XML parsing libraries but is not currently part
of global requirements.

I propose adding DefusedXML to global requirements so that projects have an
option for safe XML parsing.  Does anybody have any thoughts or objections?
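
For anyone unfamiliar, it's essentially a drop-in swap; a sketch, with
untrusted_xml as a placeholder for your input:

    # Instead of: from xml.etree import ElementTree
    from defusedxml import ElementTree

    untrusted_xml = '<root><child/></root>'  # placeholder input
    # Raises e.g. defusedxml.EntitiesForbidden on entity-expansion payloads
    # (billion laughs, external entities) instead of parsing them.
    tree = ElementTree.fromstring(untrusted_xml)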

Thanks,
-Travis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Let's make libvirt's domain XML canonical

2016-09-27 Thread Daniel P. Berrange
On Tue, Sep 27, 2016 at 10:40:34AM -0600, Chris Friesen wrote:
> On 09/27/2016 10:17 AM, Matthew Booth wrote:
> 
> > I think we should be able to create a domain, but once created we should 
> > never
> > redefine a domain. We can do adding and removing devices dynamically using
> > libvirt's apis, secure in the knowledge that libvirt will persist this for 
> > us.
> > When we upgrade the host, libvirt can ensure we don't break guests which 
> > are on
> > it. Evacuate should be pretty much the only reason to start again.
> 
> Sounds interesting.  How would you handle live migration?
> 
> Currently we regenerate the XML file on the destination from the nova DB.  I
> guess in your proposal we'd need some way of copying the XML file from the
> source to the dest, and then modifying the appropriate segments to adjust
> things like CPU/NUMA pinning?

Use the flag VIR_MIGRATE_PERSIST_XML and libvirt will write out the
new persistent XML on the target host automatically.
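
(In the python bindings that corresponds roughly to the persistent-XML
migration parameter; a sketch, with the URIs and domain name as
placeholders:)

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    dest_xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE)
    # ...adjust things like CPU/NUMA pinning in dest_xml here...
    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST
    params = {libvirt.VIR_MIGRATE_PARAM_PERSIST_XML: dest_xml}
    dom.migrateToURI3('qemu+tcp://dest-host/system', params, flags)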

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] [requirements] XML Attacks and DefusedXML on Global Requirements

2016-09-27 Thread Jeremy Stanley
On 2016-09-27 10:24:02 -0700 (-0700), Travis McPeak wrote:
> There are several attacks (https://pypi.python.org/pypi/defusedxml#id3)
> that can be performed when XML is parsed from untrusted input.  DefusedXML
> offers safe alternatives to XML parsing libraries but is not currently part
> of global requirements.
> 
> I propose adding DefusedXML to global requirements so that projects have an
> option for safe XML parsing.  Does anybody have any thoughts or objections?

An addition to global requirements is generally accompanied by
direct use in at least one project getting requirements
synchronization. We have semi-regular efforts to find and "clean up"
requirements which are not used by any projects, to keep the list
to as sane a length as is reasonably possible and reduce its
testing/tracking surface area.

Getting defusedxml implemented by at least one project in the
projects.txt file of the requirements repo would be a good idea both
as a demonstration that it's a viable tool and also as a precaution
against its later removal due to lack of use.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] XML Attacks and DefusedXML on Global Requirements

2016-09-27 Thread Davanum Srinivas
We already debated this in https://review.openstack.org/#/c/311857/

All the lessons learned from DefusedXML were already incorporated in
various python packages. You can test this theory out by using the
test xml(s) in DefusedXML if you wish.

Also note that there have been no changes to the source code since
2013 (https://bitbucket.org/tiran/defusedxml/commits/branch/default)

Thanks,
Dims

On Tue, Sep 27, 2016 at 1:24 PM, Travis McPeak  wrote:
> There are several attacks (https://pypi.python.org/pypi/defusedxml#id3) that
> can be performed when XML is parsed from untrusted input.  DefusedXML offers
> safe alternatives to XML parsing libraries but is not currently part of
> global requirements.
>
> I propose adding DefusedXML to global requirements so that projects have an
> option for safe XML parsing.  Does anybody have any thoughts or objections?
>
> Thanks,
> -Travis
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-27 Thread Assaf Muller
On Tue, Sep 27, 2016 at 12:18 PM, Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Hi milan,
>
> we have measured the test coverage for OpenStack components with
> coverage.py tool [1]. It is a very easy tool and it allows measuring
> coverage by lines of code, etc. (several metrics are available).
>
> [1] https://coverage.readthedocs.io/en/coverage-4.2/
>

coverage also supports aggregating results from multiple runs, so you can
measure results from combinations such as:

1) Unit tests
2) Functional tests
3) Integration tests
4) 1 + 2
5) 1 + 2 + 3

To my eyes 3 and 4 make the most sense. Unit and functional tests are
supposed to give you low level coverage, keeping in mind that 'functional
tests' is an overloaded term and actually means something else in every
community. Integration tests aren't about code coverage, they're about user
facing flows, so it'd be interesting to measure coverage
from integration tests, then comparing coverage coming from integration
tests, and getting the set difference between the two: That's the area that
needs more unit and functional tests.
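
(Mechanically, the aggregation is straightforward with coverage.py's own API;
a sketch, assuming each run wrote its own data file:)

    import coverage

    cov = coverage.Coverage()
    # The data file names are assumptions; parallel runs typically write
    # .coverage.<suffix> files that can be combined like this.
    cov.combine(['.coverage.unit', '.coverage.functional'])
    cov.save()
    cov.report()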


>
> On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier <
> jordan.pitt...@scality.com> wrote:
>
>> Hi,
>>
>> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k  wrote:
>>
>>> Dear Stackers,
>>> I'd like to gather some overview on the $Sub: is there some
>>> infrastructure in place to gather such stats? Are there any groups
>>> interested in it? Any plans to establish such infrastructure?
>>>
>> I am working on such a tool with mixed results so far. Here's my approach
>> taking let's say Nova as an example:
>>
>> 1) Print all the routes known to nova (available as a python-routes
>> object:  nova.api.openstack.compute.APIRouterV21())
>> 2) "Normalize" the Nova routes
>> 3) Take the logs produced by Tempest during a tempest run (in
>> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
>> 8774)
>> 4) "Normalize" the tested-by-tempest Nova routes.
>> 5) Compare the two sets of routes
>> 6) 
>> 7) Profit !!
>>
>> So the hard part is obviously the normalizing of the URLs. I am currently
>> using a ton of regexes :) That's not fun.
>>
>> I'll let you guys know if I have something to show.
>>
>> I think there's real interest on the topic (it comes up every year or
>> so), but no definitive answer/tool.
>>
>> Cheers,
>> Jordan
>>
>>
>>
>>
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Timur,
> Senior QA Manager
> OpenStack Projects
> Mirantis Inc
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] Oslo specs review weeks[Sept 27 - Oct 7]

2016-09-27 Thread Joshua Harlow

ChangBo Guo wrote:

Hi ALL,

We will release Newton on Oct 6, and we have been working hard to fix
bugs and prepare the release. I think it's a good time to
review good ideas. As we discussed in the Oslo weekly meeting [1], we
would like to propose an Oslo specs review weeks event (Sept 27 - Oct 7).
We welcome oslo folks and others to review the Oslo specs during these two
weeks, and hope we can merge these specs [2].

I also created an etherpad link [3] to collect requirements from
consuming projects; please add an item if you have any questions or ideas.

[1]http://eavesdrop.openstack.org/meetings/oslo/2016/oslo.2016-09-26-16.00.log.html
[2]https://review.openstack.org/#/q/project:openstack/oslo-specs++%28status:open++OR+status:abandoned%29
[3] https://etherpad.openstack.org/p/ocata-oslo-ideas


Thanks

ChangBo Guo(gcb)



Thanks much gcb! :)

Would we want to have a few days dedicated to spec or
spec-like work/reviews? That may help get some activity and work around
them to happen (mainly the dedicated-days part sometimes helps people's
managers give them allocated time to do this, hehe).


Thoughts?

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-27 Thread Assaf Muller
On Tue, Sep 27, 2016 at 2:05 PM, Assaf Muller  wrote:

>
>
> On Tue, Sep 27, 2016 at 12:18 PM, Timur Nurlygayanov <
> tnurlygaya...@mirantis.com> wrote:
>
>> Hi milan,
>>
>> we have measured the test coverage for OpenStack components with
>> coverage.py tool [1]. It is a very easy tool and it allows measuring
>> coverage by lines of code, etc. (several metrics are available).
>>
>> [1] https://coverage.readthedocs.io/en/coverage-4.2/
>>
>
> coverage also supports aggregating results from multiple runs, so you can
> measure results from combinations such as:
>
> 1) Unit tests
> 2) Functional tests
> 3) Integration tests
> 4) 1 + 2
> 5) 1 + 2 + 3
>
> To my eyes 3 and 4 make the most sense. Unit and functional tests are
> supposed to give you low level coverage, keeping in mind that 'functional
> tests' is an overloaded term and actually means something else in every
> community. Integration tests aren't about code coverage, they're about user
> facing flows, so it'd be interesting to measure coverage
> from integration tests,
>

Sorry, replace integration with unit + functional.


> then comparing coverage coming from integration tests, and getting the set
> difference between the two: That's the area that needs more unit and
> functional tests.
>

To reiterate:

Run coverage from integration tests, let this be c
Run coverage from unit and functional tests, let this be c'

Let diff = c \ c'

'diff' is where you're missing unit and functional test coverage.
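
A minimal sketch of that set difference, assuming coverage 4.x's
CoverageData API (read_file/measured_files/lines); the data file names
here are made up for illustration:

from coverage import CoverageData

def covered_lines(path):
    """Map each measured file to its set of executed line numbers."""
    data = CoverageData()
    data.read_file(path)
    return {f: set(data.lines(f) or ()) for f in data.measured_files()}

c = covered_lines('.coverage.integration')            # integration run
c_prime = covered_lines('.coverage.unit_functional')  # unit + functional run

for filename in sorted(c):
    only_e2e = c[filename] - c_prime.get(filename, set())
    if only_e2e:
        # Lines exercised only end-to-end: candidates for new
        # unit/functional tests.
        print(filename, sorted(only_e2e))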


>
>
>>
>> On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier <
>> jordan.pitt...@scality.com> wrote:
>>
>>> Hi,
>>>
>>> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k 
>>> wrote:
>>>
 Dear Stackers,
 I'd like to gather some overview on the $Sub: is there some
 infrastructure in place to gather such stats? Are there any groups
 interested in it? Any plans to establish such infrastructure?

>>> I am working on such a tool with mixed results so far. Here's my
>>> approach taking let's say Nova as an example:
>>>
>>> 1) Print all the routes known to nova (available as a python-routes
>>> object:  nova.api.openstack.compute.APIRouterV21())
>>> 2) "Normalize" the Nova routes
>>> 3) Take the logs produced by Tempest during a tempest run (in
>>> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
>>> 8774)
>>> 4) "Normalize" the tested-by-tempest Nova routes.
>>> 5) Compare the two sets of routes
>>> 6) 
>>> 7) Profit !!
>>>
>>> So the hard part is obviously the normalizing of the URLs. I am
>>> currently using a ton of regexes :) That's not fun.
>>>
>>> I'll let you guys know if I have something to show.
>>>
>>> I think there's real interest in the topic (it comes up every year or
>>> so), but no definitive answer/tool.
>>>
>>> Cheers,
>>> Jordan
>>>
>>>
>>>
>>>
>>> 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>>
>> Timur,
>> Senior QA Manager
>> OpenStack Projects
>> Mirantis Inc
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] RFC: "next" min libvirt/qemu requirement for Pike release

2016-09-27 Thread Daniel P. Berrange
In the Newton release we increased the min required libvirt to 1.2.1
and the min QEMU to 1.5.3. We did not set any "next" versions for Ocata,
so Ocata will not be changing them.

I think we should consider increasing min versions in the Pike release
though to let us cut out more back-compatibility code for versions that
will be pretty obsolete by the time Pike is released.

I've put up this proposed change:

  https://review.openstack.org/#/c/377923/

Using this is as the guide:

   https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix

It proposes a min libvirt of 1.2.9 and a min QEMU of 2.1.0. These are the
versions present in Debian Jessie.
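
For illustration, a minimal sketch of the kind of version gate this
implies (not the actual nova.virt.libvirt code; the integer packing is
just the usual convention for comparable version tuples):

MIN_LIBVIRT_VERSION = (1, 2, 9)
MIN_QEMU_VERSION = (2, 1, 0)

def version_to_int(version):
    major, minor, micro = version
    return major * 1000000 + minor * 1000 + micro

def verify_minimums(libvirt_version, qemu_version):
    """Refuse to start the driver on hypervisors older than the minimums."""
    if version_to_int(libvirt_version) < version_to_int(MIN_LIBVIRT_VERSION):
        raise RuntimeError(
            'libvirt %d.%d.%d is older than the required %d.%d.%d'
            % (libvirt_version + MIN_LIBVIRT_VERSION))
    if version_to_int(qemu_version) < version_to_int(MIN_QEMU_VERSION):
        raise RuntimeError(
            'QEMU %d.%d.%d is older than the required %d.%d.%d'
            % (qemu_version + MIN_QEMU_VERSION))

verify_minimums((1, 2, 9), (2, 1, 0))  # ok on Debian Jessie's versions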

Out of the major distros currently supported by Ocata, this would eliminate
support for the following in Pike:

  - Ubuntu Trusty. Workaround: enable the "Cloud Archive" the addon
repository, or upgrade to Ubuntu Xenial
  - SLES 12. Workaround: upgrade to 12SP1
  - RHEL 7.1. Workaround: upgrade to 7.2 or newer

There is one extra complication in that a lot of upstream CI jobs currently
use Trusty VMs, although things are increasingly migrating to Xenial-based
images. Clearly if we drop Trusty support in Nova for Pike, then the CI jobs
for Nova have to be fully migrated to Xenial by that time.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][heat][octavia] Heat engine doesn't detect lbaas listener failures

2016-09-27 Thread Zane Bitter

On 27/09/16 15:11, Jiahao Liang wrote:

Hello all,

I am trying to use heat to launch lb resources with Octavia as backend.
The template I used is
from 
https://github.com/openstack/heat-templates/blob/master/hot/lbaasv2/lb_group.yaml.

Following are a few observations:

1. Even though the Listener was created with ERROR status, heat will still
go ahead and mark its creation complete. In the heat code, it only
checks whether the root Loadbalancer status changes from PENDING_UPDATE to
ACTIVE, and the Loadbalancer status will be changed to ACTIVE regardless
of the Listener's status.


That sounds like a clear bug.


2. As the heat engine doesn't know about the Listener's creation failure, it
will continue to create Pool/Member/Healthmonitor resources on top of a
Listener which doesn't actually exist. This causes a few undefined
behaviors. As a result, those LBaaS resources in ERROR state cannot be
cleaned up with either the normal neutron or heat API.


Is this a bug regarding LBaaS V2 for heat, or is it designed that way on
purpose?  In my opinion, it would be more natural if heat reported
CREATE_FAILED if any of the LBaaS resources fails.

Thanks,
Jiahao Liang
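
For illustration, a minimal sketch of the stricter completion check being
suggested (hedged: not Heat's actual resource code; assumes LBaaS v2
neutronclient calls and that the listener exposes provisioning_status,
which is true in Octavia's API but may vary in neutron-lbaas):

def check_create_complete(neutron, lb_id, listener_id):
    lb = neutron.show_loadbalancer(lb_id)['loadbalancer']
    listener = neutron.show_listener(listener_id)['listener']
    if 'ERROR' in (lb.get('provisioning_status'),
                   listener.get('provisioning_status')):
        # Fail the stack instead of silently marking creation complete.
        raise RuntimeError('LBaaS resource went into ERROR state')
    return lb.get('provisioning_status') == 'ACTIVE'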


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] dhcp 'Address already in use' errors when trying to start a dnsmasq

2016-09-27 Thread Ihar Hrachyshka

Kevin Benton  wrote:

There is no side effect other than log noise and a delayed reload? I  
don't see why a revert would be appropriate.


I looked at the logs and the issue seems to be that the process isn't  
tracked correctly the first time it starts.


grep for the following:

ea141299-ce07-4ff7-9a03-7a1b7a75a371', 'dnsmasq'

in
http://logs.openstack.org/26/377626/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b6953d4/logs/screen-q-dhcp.txt.gz

The first time dnsmasq is called it gives a 0 return code but the agent  
doesn't seem to get a pid for it. So the next time it is called it  
conflicts with the running proc.


If you mean those log messages:

2016-09-27 12:21:24.760
 13751 DEBUG neutron.agent.linux.utils 
[req-128c3e79-151a-4f57-9dbc-053ff0999679 - -] Unable to access 
/opt/stack/data/neutron/external/pids/ea141299-ce07-4ff7-9a03-7a1b7a75a371.pid 
get_value_from_file /opt/stack/new/neutron/neutron/agent/linux/utils.py:204

2016-09-27 12:21:24.760
 13751 DEBUG neutron.agent.linux.utils 
[req-128c3e79-151a-4f57-9dbc-053ff0999679 - -] Unable to access 
/opt/stack/data/neutron/external/pids/ea141299-ce07-4ff7-9a03-7a1b7a75a371.pid 
get_value_from_file /opt/stack/new/neutron/neutron/agent/linux/utils.py:204

2016-09-27 12:21:24.761
 13751 DEBUG neutron.agent.linux.external_process 
[req-128c3e79-151a-4f57-9dbc-053ff0999679 - -] No process started for 
ea141299-ce07-4ff7-9a03-7a1b7a75a371 disable 
/opt/stack/new/neutron/neutron/agent/linux/external_process.py:123

then I don’t think that’s a correct interpretation of the log messages.  
Notice that the pid file names there are not in the dnsmasq network dir, but  
in external/pids/<uuid>.pid. Those pid files are not dnsmasq ones but  
potentially belong to metadata proxies managed by the agent. The agent  
attempts to disable the proxy because it’s not needed (as per logic in  
configure_dhcp_for_network). Since the network does not have a proxy  
process running, it can’t find the pid file and hence cannot disable the  
proxy process. Then it completes the configuration process.


It should not influence the flow of the program.

To prove that dnsmasq is properly tracked, also see that later when we  
restart the process for the network, we correctly extract the PID from the  
file and use it for the kill -9 call:


http://logs.openstack.org/26/377626/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b6953d4/logs/screen-q-dhcp.txt.gz#_2016-09-27_12_21_24_878

You can check for yourself that the same PID was actually used by the  
dnsmasq process started the first time. It’s logged in syslog.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-loganalyze, project log parsing, or ...

2016-09-27 Thread Andrew Laski


On Tue, Sep 27, 2016, at 02:40 PM, Matthew Treinish wrote:
> On Tue, Sep 27, 2016 at 01:03:35PM -0400, Andrew Laski wrote:
> > 
> > 
> > On Tue, Sep 27, 2016, at 12:39 PM, Matthew Treinish wrote:
> > > On Tue, Sep 27, 2016 at 11:36:07AM -0400, Andrew Laski wrote:
> > > > Hello all,
> > > > 
> > > > Recently I noticed that people would look at logs from a Zuul née
> > > > Jenkins CI run and comment something like "there seem to be more
> > > > warnings in here than usual." And so I thought it might be nice to
> > > > quantify that sort of thing so we didn't have to rely on gut feelings.
> > > > 
> > > > So I threw together https://review.openstack.org/#/c/376531 which is a
> > > > script that lives in the Nova tree, gets called from a devstack-gate
> > > > post_test_hook, and outputs an n-stats.json file which can be seen at
> > > > http://logs.openstack.org/06/375106/8/check/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/e103612/logs/n-stats.json.
> > > > This provides just a simple way to compare two runs and spot large
> > > > changes between them. Perhaps later things could get fancy and these
> > > > stats could be tracked over time. I am also interested in adding stats
> > > > for things that are a bit project specific like how long (max, min, med)
> > > > it took to boot an instance, or what's probably better to track is how
> > > > many operations that took for some definition of an operation.
> > > > 
> > > > I received some initial feedback that this might be a better fit in the
> > > > os-loganalyze project so I took a look over there. So I cloned the
> > > > project to take a look and quickly noticed
> > > > http://git.openstack.org/cgit/openstack-infra/os-loganalyze/tree/README.rst#n13.
> > > > That makes me think it would not be a good fit there because what I'm
> > > > looking to do relies on parsing the full file, or potentially multiple
> > > > files, in order to get useful data.
> > > > 
> > > > So my questions: does this seem like a good fit for os-loganalyze? If
> > > > not is there another infra/QA project that this would be a good fit for?
> > > > Or would people be okay with a lone project like Nova implementing this
> > > > in tree for their own use?
> > > > 
> > > 
> > > I think having this in os-loganalyze makes sense since we use that for
> > > visualizing the logs already. It also means we get it for free on all the
> > > log
> > > files. But, if it's not a good fit for a technical reason then I think
> > > creating
> > > another small tool under QA or infra would be a good path forward. Since
> > > there
> > > really isn't anything nova specific in that.
> > 
> > There's nothing Nova specific atm because I went for low hanging fruit.
> > But if the plan is to have Nova specific, Cinder specific, Glance
> > specific, etc... things in there, do people still feel that a QA/infra
> > tool is the right path forward? That's my only hesitation here.
> 
> Well, I think that raises more questions: what do you envision the
> nova-specific bits would be? The only thing I could see would be something
> that looks for specific log messages or patterns in the logs, which feels
> like exactly what elastic-recheck does?

I'm thinking beyond single line things. An example could be a parser
that can calculate the timing between the first log message seen for a
request-id and the last, or could count the number of log lines
associated with each instance boot perhaps even broken down by log
level. Things that require both an understanding of how to correlate
groups of log lines with specific events(instance boot), and being able
to calculate stats for groups of log lines(debug log line count by
request-id).
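
A minimal sketch of that kind of stateful, multi-line stat (hedged: the
timestamp and request-id patterns are assumptions about the devstack log
format, not a finished parser):

import re
from datetime import datetime

LINE = re.compile(r'^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)'
                  r'.*?\[(?P<req>req-[0-9a-f-]+)')

def request_durations(lines):
    """Seconds between the first and last log line per request-id."""
    first, last = {}, {}
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group('ts'), '%Y-%m-%d %H:%M:%S.%f')
        first.setdefault(m.group('req'), ts)
        last[m.group('req')] = ts
    return {req: (last[req] - first[req]).total_seconds()
            for req in first}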

I have only a rudimentary familiarity with elastic-recheck but my
understanding is that doing anything that looks at multiple lines like
that is either complex or not really possible.


> 
> I definitely can see the value in having machine parsable log stats in
> our
> artifacts, but I'm not sure where project specific pieces would come
> from. But,
> given that hypothetical I would say as long as you made those pieces
> configurable (like a yaml syntax to search for patterns by log file or
> something) and kept a generic framework/tooling for parsing the log stats
> I
> think it's still a good fit for a QA or Infra project. Especially if you
> think
> whatever pattern you're planning to use is something other projects would
> want
> to reuse.

My concern here is that I want to go beyond simple pattern matching. I
want to be able to maintain state while parsing to associate log lines
with events that came before. The project specific bits I envision are
the logic to handle that, but I don't think yaml is expressive enough
for it. I came up with a quick example at
http://paste.openstack.org/show/583160/ . That's Nova specific and
beyond my capability to express in yaml or elastic-recheck.

-Andrew

> 
> -Matt Treinish
> 
> 
> > 
> > > 
> > > I would caution against doing it as a 

[openstack-dev] [QA] Request for design session ideas of Barcelona Summit

2016-09-27 Thread Ken'ichi Ohmichi
Hi,

We have a Design Summit next month, and now we are trying to get ideas
for QA sessions.
There is an etherpad for ideas, and it would be good if you could write your ideas there:

https://etherpad.openstack.org/p/ocata-qa-summit-topics

After getting ideas, we will arrange them into available slots for QA sessions.
Thanks in advance and see you in Barcelona :-)

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Mirror issues with Intel NFV CI?

2016-09-27 Thread Znoinski, Waldemar
Hi Matt,

The introduction of subnetpools in devstack [1] is causing issues like 
'Connection to proxy timed out.' in our setup. We are working on it and 
will update the ML soon. Thanks for pinging.

[1] https://review.openstack.org/#/c/356026/


 >-Original Message-
 >From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
 >Sent: Tuesday, September 27, 2016 2:34 PM
 >To: OpenStack Development Mailing List (not for usage questions)
 >
 >Subject: [openstack-dev] [nova] Mirror issues with Intel NFV CI?
 >
 >I'm seeing a pretty high failure rate with some of the Intel NFV CI jobs 
 >today,
 >the pattern looks like a pypi mirror issue getting packages to setup tempest:
 >
 >http://intel-openstack-ci-logs.ovh/94/375894/2/check/tempest-dsvm-intel-
 >nfv-xenial/a0bffb3/logs/devstacklog.txt.gz
 >
 >2016-09-27 02:11:52.127 | Collecting hacking<0.12,>=0.11.0 (from -r
 >/opt/stack/new/tempest/test-requirements.txt (line 4))
 >2016-09-27 02:12:07.144 |   Retrying (Retry(total=4, connect=None,
 >read=None, redirect=None)) after connection broken by
 >'ConnectTimeoutError(<VerifiedHTTPSConnection
 >object at 0x7faca5b7fd10>, 'Connection to proxy.ir.intel.com timed out.
 >(connect timeout=15)')': /simple/hacking/
 >2016-09-27 02:12:22.654 |   Retrying (Retry(total=3, connect=None,
 >read=None, redirect=None)) after connection broken by
 >'ConnectTimeoutError(<VerifiedHTTPSConnection
 >object at 0x7faca5b7fe10>, 'Connection to proxy.ir.intel.com timed out.
 >(connect timeout=15)')': /simple/hacking/
 >2016-09-27 02:12:38.657 |   Retrying (Retry(total=2, connect=None,
 >read=None, redirect=None)) after connection broken by
 >'ConnectTimeoutError(<VerifiedHTTPSConnection
 >object at 0x7faca5b7ff10>, 'Connection to proxy.ir.intel.com timed out.
 >(connect timeout=15)')': /simple/hacking/
 >2016-09-27 02:12:55.674 |   Retrying (Retry(total=1, connect=None,
 >read=None, redirect=None)) after connection broken by
 >'ConnectTimeoutError(<VerifiedHTTPSConnection
 >object at 0x7faca59e9050>, 'Connection to proxy.ir.intel.com timed out.
 >(connect timeout=15)')': /simple/hacking/
 >2016-09-27 02:13:14.682 |   Retrying (Retry(total=0, connect=None,
 >read=None, redirect=None)) after connection broken by
 >'ConnectTimeoutError(<VerifiedHTTPSConnection
 >object at 0x7faca59e9150>, 'Connection to proxy.ir.intel.com timed out.
 >(connect timeout=15)')': /simple/hacking/
 >2016-09-27 02:13:29.687 |   Could not find a version that satisfies the
 >requirement hacking<0.12,>=0.11.0 (from -r /opt/stack/new/tempest/test-
 >requirements.txt (line 4)) (from versions: )
 >2016-09-27 02:13:29.687 | No matching distribution found for
 >hacking<0.12,>=0.11.0 (from -r
 >/opt/stack/new/tempest/test-requirements.txt (line 4))
 >
 >Is this a known issue that the CI maintainers are fixing?
 >
 >--
 >
 >Thanks,
 >
 >Matt Riedemann
 >
 >
 >__
 >
 >OpenStack Development Mailing List (not for usage questions)
 >Unsubscribe: OpenStack-dev-
 >requ...@lists.openstack.org?subject:unsubscribe
 >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-27 Thread Flavio Percoco

On 27/09/16 00:41 +, Fox, Kevin M wrote:

I think some of the disconnect here is a potential misunderstanding about what 
kolla-kubernetes is.

Ultimately, to me, kolla-kubernetes is a database of architecture bits to 
successfully deploy and manage OpenStack on k8s. It's building blocks. Pretty 
much what you asked for.

There are a bunch of ways of building openstacks. There is no one true way. It 
really depends on what the operator wants the cloud to do. Is a daemonset or a 
petset the best way to deploy a cinder volume pod in k8s? The answer is, it 
depends. (We have an example where one or the other is better now.)

kolla-kubernetes is taking the building block approach. It takes a bit of 
information in from the operator or other tool, along with their main openstack 
configs, and generates k8s templates that are optimized for that case.

Who builds the configs, who tells it when to build what templates, and in what 
order they are started is a separate thing.

You should be able to do a 'kollakube template pod nova-api' and just see what 
it thinks is best.

If you want a nice set of documents, it should be easy to loop across them and 
dump them to html.

I think doing them in a machine-readable way rather than a document makes much 
more sense, as it can be reused in multiple projects such as tripleo, fuel, and 
others, and we all can share a common database. We're trying to build a 
community around this database.

Asking to basically make a new project that does just a human-only-readable 
version of the same database seems like a lot of work, with many fewer useful 
outcomes.


I just want to point out that I'm not asking anyone to make a new project and
that my intention is to collect info from other projects too, not just
kolla-kubernetes. This is a pure documentation effort. I understand you don't
think this is useful and I appreciate your feedback.

Flavio


Please help the community make a great machine and human readable reference 
architecture system by contributing to the kolla-kubernetes project. There are 
plenty of opportunities to help out.

Maybe making some tools to make the data contained in the database more human 
friendly would suit your interests? Maybe a nice web frontend that asks a few 
questions and renders templates out in nice human friendly ways?

Thanks,
Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Monday, September 26, 2016 9:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to 
deploy OpenStack on k8s

On 23/09/16 17:47 +, Steven Dake (stdake) wrote:

Flavio,

Forgive the top post and lack of responding inline – I am dealing with Outlook 
2016 which apparently has a bug here [0].

Your question:

I can contribute to kolla-kubernetes all you want but that won't give me what I
asked for in my original email and I'm pretty sure there are opinions about the
"recommended" way for running OpenStack on kubernetes. Questions like: Should I
run rabbit in a container? Should I put my database in there too? Now with
PetSets it might be possible. Can we be smarter on how we place the services in
the cluster? Or should we go with the traditional controller/compute/storage
architecture.

You may argue that I should just read the yaml files from kolla-kubernetes and
start from there. May be true but that's why I asked if there was something
written already.
Your question ^

My answer:
I think what you are really after is why kolla-kubernetes has made the choices 
we have made.  I would not argue that reading the code would answer that 
question because it does not.  Instead it answers how those choices were 
implemented.

You are mistaken in thinking that contributing to kolla-kubernetes won’t give 
you what you really want.  Participation in the Kolla community will answer for 
you *why* choices were made as they were.  Many choices are left unanswered as 
of yet and Red Hat can make a big impact in the future of the decision making 
about *why*.  You have to participate to have your voice heard.  If you are 
expecting the Kolla team to write a bunch of documentation to explain *why* we 
have made the choices we have, we frankly don’t have time for that.  Ryan and 
Michal may promise it with architecture diagrams and other forms of incomplete 
documentation, but that won’t provide you a holistic view of *why* and is 
wasted efforts on their part (no offense Michal and Ryan – I think it’s a 
worthy goal.  The timing for such a request is terrible and I don’t want to 
derail the team into endless discussions about the best way to do things).

The best way to do things is sorted out via the gerrit review process using the 
standard OpenStack workflow through an open development process.


Steve,

Thanks for getting back on this. Unfortunately, I think you keep missing my
point and my goal.

I'd like to document the architectural 

[openstack-dev] [karbor] [smaug] Weekly Meeting cancelled

2016-09-27 Thread xiangxinyong
Hello guys,

Today's Karbor IRC meeting at 09:00 (UTC) is cancelled.


Welcome to join #openstack-karbor.


Thanks very much.


Best Regards,
xiangxinyong
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-27 Thread Flavio Percoco

On 23/09/16 12:33 -0400, Ryan Hallisey wrote:

Thanks for starting the discussion Fabio.


As someone that started looking into this topic just recently, I'd love to see
our communities collaborate more wherever possible. For example, it'd be great
to see us working on a reference architecture for deploying OpenStack on
kubernetes, letting the implementation details aside for a bit. I'd assume some
folks have done this already and I bet we can all learn more from it if we work
on this together.


Agreed Flavio. Members of the kolla-kubernetes community have some ideas of
how this will look.  I can put together some diagrams over the weekend to depict
this and maybe others that have some ideas can comment and share theirs.


Sounds awesome! Thanks a bunch :)


So, let me go ahead and ask some further questions here, I might be missing some
history and/or context:
- Is there any public documentation that acts as a reference architecture for
 deploying OpenStack on kubernetes?


These specs [1][2] might be a good start.


I'll go through these, thanks.


- Is this something the architecture working group could help with? Or would it
 be better to hijack one of kolla meetings?


kolla-kubernetes has a booked slot in the weekly kolla meetings. This could be
discussed there.


++


So the issue is, I know of a few other openstacks on k8s and everyone does
it slightly differently. So far we lack proof points and real-world
data to determine the best approaches. This is still a not-too-well
researched field. Right now it's mostly opinions and assumptions.
We're not ready to make a document without having a flame war around
it ;) Not enough knowledge in our collective brains.



Awesome input, thanks.


Michal is right, there are a bunch of implementations that exist. The tricky
part is pulling together all the groups to figure out the best solution.

When the kolla-kubernetes project was created, my hope that this new repo would
be a place where anyone curious about the OpenStack and Kubernetes interaction
could come and express their opinion in code or conversation. The community 
still
remains open to any changes to its implementation, and the current
implementation is a reflection of who is participating.

I agree that it would be ideal for a single place to collaborate. It would be
awesome to bring together the community that is looking to solve this
problem around a single project. Doesn't matter what that project is, but I'd
like for more collaboration :).


As for Kolla-k8s, we are still deep in development, so we are free to
take the best course of action we know of. We don't have any technical
debt now. The current state of things represents what we think is the best
approach.



I wonder if we can start writing these assumptions down and update them as we
go. I don't expect you to do it; I'm happy to help with this. We could put it in
kolla-k8s docs if that makes sense to other kolla-k8s folks.


It's not that Kolla-k8s has tech debt, but rather the community is still 
testing the
waters with its implementation. For instance, the community is looking at a 
workflow
that will execute the deployment of OpenStack and hand off to Kubernetes to 
manage it.
This solution raises some questions: why do you need a workflow at all? Why not
use Kubernetes, a Container Orchestration Engine, to orchestrate the services?  
A lot
of these fundamental questions were outlined in this spec [1] and the answers 
to them
are still WIP [3].


Indeed! This and other fundamental questions are the ones I'd like us to answer
and document, perhaps as new things happen. I'll read [3] too. Thanks for the
pointer.


I'll probably start pinging you guys on IRC with questions so I can help writing
this down.


That would be fantastic! There's also room for collaboration at the summit.
Kolla-kubernetes will have a design session/fishbowl scheduled.


Awesome! I'll be there for sure :)


There is also the fact that k8s is constantly growing and it lacks certain
features, which created these issues in the first place; if k8s solves
them on their side, that will affect decisions on our side.



Thanks a lot, Michal. This is indeed the kind of info I was looking for and
where I'd love to start from.


Agreed Michal.  The community has been adapting on the fly based on features 
coming
out of Kubernetes.  Things like init containers and petsets were recent features
that have found their way into kolla-kubernetes.

The flow of work in kolla-kubernetes has been following the work items in the
spec [1], but in a different order.  The basic outline for putting OpenStack on
Kubernetes will follow a similar path: things like the templates will
be similar, but the orchestration method can vary. I think that's where the
biggest controversy lies.



Thanks a lot for all your comments, Ryan. This is useful content and I'll go
through it and ask questions there and/or on IRC.

Flavio


Thanks!
-Ryan

[1] - https://review.openstack.org/#/c/304182/
[2] - 

[openstack-dev] [openstack-de] [DNSaaS] [designate] jenkins is failing for all latest patches

2016-09-27 Thread Kelam, Koteswara Rao
Jenkins for openstack/designate is failing for the latest patches. The py27, 
py34 and py35 jobs are failing continuously:

gate-designate-python27-db-ubuntu-xenial: FAILURE in 3m 58s
gate-designate-python34-db: FAILURE in 5m 16s
gate-designate-python35-db: FAILURE in 3m 41s


Recent patches:
https://review.openstack.org/#/c/376436/
https://review.openstack.org/#/c/377050/
https://review.openstack.org/#/c/376170/
etc

Regards,
Koteswara

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cross Project Summit Sessions Planning

2016-09-27 Thread Thierry Carrez
John Garbutt wrote:
> [...]
> In Barcelona, between Tuesday 3:55pm (Oct 25) and Wednesday 2:55pm
> (Oct 26) we have some dedicated time to discuss and resolve some of
> the issues that span across our OpenStack Community.
> 
> As before, we will be doing proposals for this via etherpad.
> Please propose items into here:
> https://etherpad.openstack.org/p/ocata-cross-project-sessions
> 
> Session ideas will be open until October 1st, after which point the TC
> will do selection and scheduling.
> [...]

Quick reminder, last week to suggest topics for the cross-project
workshop slots in Barcelona.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] XML Attacks and DefusedXML on Global Requirements

2016-09-27 Thread Sean Dague
On 09/27/2016 01:24 PM, Travis McPeak wrote:
> There are several attacks (https://pypi.python.org/pypi/defusedxml#id3)
> that can be performed when XML is parsed from untrusted input. 
> DefusedXML offers safe alternatives to XML parsing libraries but is not
> currently part of global requirements. 
> 
> I propose adding DefusedXML to global requirements so that projects have
> an option for safe XML parsing.  Does anybody have any thoughts or
> objections?

Out of curiosity, are there specific areas of concern in existing
projects here? Most projects have dropped XML API support.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] dhcp 'Address already in use' errors when trying to start a dnsmasq

2016-09-27 Thread Ihar Hrachyshka

Hi all,

so we started getting ‘Address already in use’ when trying to start dnsmasq  
after the previous instance of the process is killed with kill -9. Armando  
spotted it today in logs for: https://review.openstack.org/#/c/377626/ but  
as per logstash it seems like an error we saw before (the earliest I see is  
9/20), f.e.:


http://logs.openstack.org/26/377626/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b6953d4/logs/screen-q-dhcp.txt.gz

Assuming I understand the flow of the failure, it runs as follows:

- sync_state starts dnsmasq per network;
- after agent lock is freed, some other notification event  
(port_update/subnet_update/...) triggers restart for one of the processes;
- the restart is done not via reload_allocations (-SIGHUP) but thru  
restart/disable (kill -9);
- once the old dnsmasq is killed with -9, we attempt to start a new process  
with new config files generated and fail with: “dnsmasq: failed to create  
listening socket for 10.1.15.242: Address already in use”
- surprisingly, after several failing attempts to start the process, it  
succeeds after a few seconds and runs fine.


It looks like once we kill the process with -9, it may hold the socket  
resource for some time and clash with the new process we try to spawn.  
It’s a bit weird because dnsmasq should have set SO_REUSEADDR on the socket,  
so a new process should have started just fine.
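
A quick way to check that theory is to probe the bind the way a restarted
dnsmasq would (a minimal sketch; the address is the one from the gate log,
binding the privileged port needs root, and it must run via ip netns exec
in the right DHCP namespace):

import errno
import socket

def dhcp_port_busy(address, port=67):
    """Return True if binding the DHCP server port fails with EADDRINUSE.

    dnsmasq also binds port 53 for its DNS side; adjust as needed.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        s.bind((address, port))
    except socket.error as err:
        return err.errno == errno.EADDRINUSE
    finally:
        s.close()
    return False

print(dhcp_port_busy('10.1.15.242'))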


Lately, we landed several patches that touched reload logic for DHCP agent  
on notifications. Among those suspicious in the context are:


- https://review.openstack.org/#/c/372595/ - note it requests ‘disable’  
(-9) where it was using ‘reload_allocations’ (-SIGHUP) before, and it also  
does not unplug the port on lease release (maybe after we rip off the  
device, the address clash with the old dnsmasq state is gone even though  
the ’new’ port will use the same address?).
- https://review.openstack.org/#/c/372236/6 - we were requesting  
reload_allocations in some cases before, and now we put the network into  
resync queue


There were other related changes lately, you can check history of Kevin’s  
changes for the branch, it should capture most of them.


I wonder whether we hit some long-standing restart issue with dnsmasq here  
that was just never triggered before because we were not calling kill -9 as  
eagerly as we do now.


Note: Jakub Libosvar validated that 'kill -9 && dnsmasq’ in loop does NOT  
result in the failure we see in gate logs.


We need to understand what’s going on with the failure, and come up with some  
plan for Newton. We either revert the suspected patches, as I believe Armando  
proposed before, though then it’s not clear to which point to revert; or we  
come up with some smart fix, which I don’t immediately grasp.


I will be on vacation tomorrow, though I will check the email thread to see  
if we have a plan to act on. I really hope folks give the issue a priority  
since it seems like we buried ourselves under a pile of interleaved patches  
and now we don’t have a clear view of how to get out of the pile.


Cheers,
Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] XML Attacks and DefusedXML on Global Requirements

2016-09-27 Thread Dave Walker
On 27 September 2016 at 19:19, Sean Dague  wrote:

> On 09/27/2016 01:24 PM, Travis McPeak wrote:
> > There are several attacks (https://pypi.python.org/pypi/defusedxml#id3)
> > that can be performed when XML is parsed from untrusted input.
> > DefusedXML offers safe alternatives to XML parsing libraries but is not
> > currently part of global requirements.
> >
> > I propose adding DefusedXML to global requirements so that projects have
> > an option for safe XML parsing.  Does anybody have any thoughts or
> > objections?
>
> Out of curiosity, are there specific areas of concern in existing
> projects here? Most projects have dropped XML API support.
>
>
Outbound XML datasources which are parsed are still used in at least nova's
vmware support and multiple cinder drivers.

openstack/ec2-api is still providing an xml api service?

--
Kind Regards,
Dave Walker
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] dhcp 'Address already in use' errors when trying to start a dnsmasq

2016-09-27 Thread Miguel Angel Ajo Pelayo
Ack, and thanks for the summary Ihar,

I will have a look on it tomorrow morning, please update this thread
with any progress.



On Tue, Sep 27, 2016 at 8:22 PM, Ihar Hrachyshka  wrote:
> Hi all,
>
> so we started getting ‘Address already in use’ when trying to start dnsmasq
> after the previous instance of the process is killed with kill -9. Armando
> spotted it today in logs for: https://review.openstack.org/#/c/377626/ but
> as per logstash it seems like an error we saw before (the earliest I see is
> 9/20), f.e.:
>
> http://logs.openstack.org/26/377626/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b6953d4/logs/screen-q-dhcp.txt.gz
>
> Assuming I understand the flow of the failure, it runs as follows:
>
> - sync_state starts dnsmasq per network;
> - after agent lock is freed, some other notification event
> (port_update/subnet_update/...) triggers restart for one of the processes;
> - the restart is done not via reload_allocations (-SIGHUP) but thru
> restart/disable (kill -9);
> - once the old dnsmasq is killed with -9, we attempt to start a new process
> with new config files generated and fail with: “dnsmasq: failed to create
> listening socket for 10.1.15.242: Address already in use”
> - surprisingly, after several failing attempts to start the process, it
> succeeds after a few seconds and runs fine.
>
> It looks like once we kill the process with -9, it may hold the socket
> resource for some time and clash with the new process we try to spawn.
> It’s a bit weird because dnsmasq should have set SO_REUSEADDR on the socket,
> so a new process should have started just fine.
>
> Lately, we landed several patches that touched reload logic for DHCP agent
> on notifications. Among those suspicious in the context are:
>
> - https://review.openstack.org/#/c/372595/ - note it requests ‘disable’ (-9)
> where it was using ‘reload_allocations’ (-SIGHUP) before, and it also does
> not unplug the port on lease release (maybe after we rip off the device, the
> address clash with the old dnsmasq state is gone even though the ’new’ port
> will use the same address?).
> - https://review.openstack.org/#/c/372236/6 - we were requesting
> reload_allocations in some cases before, and now we put the network into
> resync queue
>
> There were other related changes lately, you can check history of Kevin’s
> changes for the branch, it should capture most of them.
>
> I wonder whether we hit some long-standing restart issue with dnsmasq here
> that was just never triggered before because we were not calling kill -9 as
> eagerly as we do now.
>
> Note: Jakub Libosvar validated that 'kill -9 && dnsmasq’ in loop does NOT
> result in the failure we see in gate logs.
>
> We need to understand what’s going on with the failure, and come up with some
> plan for Newton. We either revert the suspected patches, as I believe Armando
> proposed before, though then it’s not clear to which point to revert; or we
> come up with some smart fix, which I don’t immediately grasp.
>
> I will be on vacation tomorrow, though I will check the email thread to see
> if we have a plan to act on. I really hope folks give the issue a priority
> since it seems like we buried ourselves under a pile of interleaved patches
> and now we don’t have a clear view of how to get out of the pile.
>
> Cheers,
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] XML Attacks and DefusedXML on Global Requirements

2016-09-27 Thread Travis McPeak
There is a private security bug about it right now too.  No, not all XML
libraries are immune now.
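
For anyone curious what the swap looks like in practice, a small hedged
example (defusedxml mirrors the stdlib ElementTree API but refuses to
expand entities):

import defusedxml.ElementTree as ET
from defusedxml import EntitiesForbidden

# A tiny version of the classic "billion laughs" entity-expansion payload.
BOMB = ('<!DOCTYPE x [<!ENTITY a "aaaa"><!ENTITY b "&a;&a;&a;&a;">]>'
        '<x>&b;</x>')

try:
    ET.fromstring(BOMB)
except EntitiesForbidden:
    # defusedxml rejects the entity definitions instead of expanding them.
    print('entity expansion blocked')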

On Tue, Sep 27, 2016 at 11:36 AM, Dave Walker  wrote:

>
>
> On 27 September 2016 at 19:19, Sean Dague  wrote:
>
>> On 09/27/2016 01:24 PM, Travis McPeak wrote:
>> > There are several attacks (https://pypi.python.org/pypi/defusedxml#id3)
>> > that can be performed when XML is parsed from untrusted input.
>> > DefusedXML offers safe alternatives to XML parsing libraries but is not
>> > currently part of global requirements.
>> >
>> > I propose adding DefusedXML to global requirements so that projects have
>> > an option for safe XML parsing.  Does anybody have any thoughts or
>> > objections?
>>
>> Out of curiosity, are there specific areas of concern in existing
>> projects here? Most projects have dropped XML API support.
>>
>>
> Outbound XML datasources which are parsed still used with at least nova
> vmware support and multiple cinder drivers.
>
> openstack/ec2-api is still providing an xml api service?
>
> --
> Kind Regards,
> Dave Walker
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-Travis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC nomination

2016-09-27 Thread Doug Hellmann
I am announcing my candidacy for a position on the OpenStack Technical
Committee.

I have served on the Technical Committee for the last three years and
as PTL of the Release Management team for the Mitaka and Newton
cycles.  I will be PTL for the Release Management team again for
Ocata.

Before the Newton cycle I worked on a wide variety of projects within
the community, including Ceilometer and python-openstackclient. I am
an Oslo team member, and served as PTL for the Icehouse, Juno, and
Kilo cycles.  I am also part of the team working on the Python 3
transition, and have contributed to several of the infrastructure
projects. In addition to my technical contributions, I helped to found
and still help to organize the OpenStack meetup group in Atlanta,
Georgia.

I started contributing to OpenStack in 2012, not long after joining
Dreamhost, and I am currently employed by Red Hat to work on OpenStack
with a focus on long-term project concerns.

Most of my work on OpenStack has been focused on enabling others in
the community. From the Oslo library hierarchy, to establishing the
team liaison system, to reno, to release automation, I have worked on
tools, processes, and patterns to make incremental improvements in our
ability to build OpenStack. I view serving on the TC as an extension
of that work.

My experience has led me to develop a perspective of OpenStack that is
strongly focused on cross-project concerns, and to reinforce for me
the importance of communication between project teams to smooth out
the integration points and remove friction, all key responsibilities
of the Technical Committee.

During Ocata we will be working on the first iteration of the new
community-wide goals process [1], defined during Newton. This
initiative is intended to increase the visibility of important
community-wide needs and encourage project teams to incorporate them
into their priority discussions. We have bootstrapped the process by
identifying several potential goals related to lowering technical
debt, one of which was approved for Ocata as a trial run [2].  I would
like to continue to serve on the TC to finish launching this new
initiative because this is the first time we have attempted anything
like this, and I expect us to find issues with the process and to need
to adjust it to incorporate feedback.

Having the TC identify community goals is an important step for us to
take.  It relies on a view that OpenStack *is* one community, with
shared values and a commitment to collaborate. This view is not
universally held among all of our contributors, and I find that
unfortunate. I believe that shared values, collaboration, and
consistency are compatible with innovation, experimentation, and
finding creative solutions to challenging problems. The diversity of
the projects that already exist in our big tent shows this to be true.

The fact that we are still debating the extent of our unity tells me
that it is important to document our community principles. As we have
grown, new community members have joined with an incomplete
understanding of our history. Even some folks who have been around for
a long time do not have the whole picture, or disagree with decisions
made early on.  We have started writing down some of the assumptions
we have in mind when we discuss topics within the TC [3], in order to
come to a shared understanding of where we all (not just TC members)
think we want to be going and how to get there. Without that shared
understanding for context, some of the TC's decisions may seem to not
make sense, which also means we are doing a poor job of communicating
outside of the TC with the rest of the community. I would like to
continue to serve on the TC as we work on those communication issues
and resolve the questions about our shared principles.

The OpenStack community is the most exciting and welcoming group I
have interacted with in more than 20 years of contributing to open
source projects. I look forward to continuing to be a part of the
community and serving the project.

Thank you,
Doug

Review history: https://review.openstack.org/#/q/reviewer:2472,n,z
Commit history: https://review.openstack.org/#/q/owner:2472,n,z
Stackalytics: http://stackalytics.com/?user_id=doug-hellmann
Foundation Profile: http://www.openstack.org/community/members/profile/359
Freenode: dhellmann
Website: https://doughellmann.com

[1] http://governance.openstack.org/goals/index.html
[2] http://governance.openstack.org/goals/ocata/remove-incubated-oslo-code.html
[3] https://review.openstack.org/#/c/357260/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] XML Attacks and DefusedXML on Global Requirements

2016-09-27 Thread Jeremy Stanley
On 2016-09-27 11:45:14 -0700 (-0700), Travis McPeak wrote:
> There is a private security bug about it right now too.  No, not all XML
> libraries are immune now.

https://launchpad.net/bugs/1625402 which I've just now declassified.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

