Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job - Last 4 days failures

2014-03-29 Thread Eugene Nikanorov
Bug 1294603 has its root cause in LBaaS, which should be fixed by
https://review.openstack.org/#/c/81537/

Thanks,
Eugene.


On Fri, Mar 28, 2014 at 7:29 PM, Matt Riedemann wrote:

>
>
> On 3/27/2014 8:00 AM, Salvatore Orlando wrote:
>
>>
>> On 26 March 2014 19:19, James E. Blair <jebl...@openstack.org> wrote:
>>
>> Salvatore Orlando <sorla...@nicira.com> writes:
>>
>>  > On another note, we noticed that the duplicated jobs currently
>>  > executed for redundancy in neutron actually seem to point all to the
>>  > same build id.
>>  > I'm not sure then if we're actually executing each job twice or just
>>  > duplicating lines in the jenkins report.
>>
>> Thanks for catching that, and I'm sorry that didn't work right.  Zuul is
>> in fact running the jobs twice, but it is only looking at one of them
>> when sending reports and (more importantly) deciding whether the change
>> has succeeded or failed.  Fixing this is possible, of course, but turns
>> out to be a rather complicated change.  Since we don't make heavy use of
>> this feature, I lean toward simply instantiating multiple instances of
>> identically configured jobs and invoking them (e.g. "neutron-pg-1",
>> "neutron-pg-2").
>>
>> Matthew Treinish has already worked up a patch to do that, and I've
>> written a patch to revert the incomplete feature from Zuul.
>>
>>
>> That makes sense to me. I think it is just a matter of how the results
>> are reported to gerrit, since from what I gather in logstash the jobs are
>> executed twice for each new patchset or recheck.
>>
>>
>> For the status of the full job, I took a look at the numbers reported by
>> Rossella.
>> All the bugs are already known; some of them are not even bugs; others
>> have been recently fixed (given the time span of Rossella's analysis, and
>> the fact that it also covers non-rebased patches, this kind of false
>> positive is possible).
>>
>> Of all full job failures, 44% should be discarded.
>> Bug 1291611 (12%) is definitely not a neutron bug... hopefully.
>> Bug 1281969 (12%) is really too generic.
>> It bears the hallmark of bug 1283522, and therefore the high number might
>> be due to the fact that trunk was plagued by that bug up to a few days
>> before the analysis.
>> However, it's worth noting that there is also another instance of "lock
>> timeout" which has caused 11 failures in the full job in the past week.
>> A new bug has been filed for this issue:
>> https://bugs.launchpad.net/neutron/+bug/1298355
>> Bug 1294603 was related to a test now skipped. It is still being debated
>> whether the problem lies in test design, neutron LBaaS or neutron L3.
>>
>> The following bugs seem not to be neutron bugs:
>> 1290642, 1291920, 1252971, 1257885
>>
>> Bug 1292242 appears to have been fixed while the analysis was going on.
>> Bug 1277439 instead is already known to affect neutron jobs occasionally.
>>
>> The actual state of the job is perhaps better than what the raw numbers
>> say. I would keep monitoring it, and then make it voting after the
>> Icehouse release is cut, so that we'll be able to deal with a possibly
>> higher failure rate in the "quiet" period of the release cycle.
>>
>>
>>
>> -Jim
>>
>>
> I reported this bug [1] yesterday.  This was hit in our internal Tempest
> runs on RHEL 6.5 x86_64 with the nova libvirt driver and the neutron
> openvswitch ML2 driver.  We're running without tenant isolation on python
> 2.6 (no testr yet) so the tests run serially.  We're running basically the
> full API/CLI/Scenarios tests though, no filtering on the smoke tag.
>
> Out of 1,971 tests run, we had 3 failures where a nova instance failed to
> spawn because networking callback events failed, i.e. neutron sends a
> server event request to nova and it's a bad URL, so nova API pukes and then
> the networking request in the neutron server fails.  As linked in the bug
> report I'm seeing the same neutron server log error showing up in logstash
> for community jobs but it's not 100% failure.  I haven't seen the n-api log
> error show up in logstash though.
>
> Just bringing this to people's attention in case anyone else sees it.
>
> [1] https://bugs.launchpad.net/nova/+bug/1298640
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>

Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job - Last 4 days failures

2014-03-28 Thread Matt Riedemann



On 3/27/2014 8:00 AM, Salvatore Orlando wrote:


On 26 March 2014 19:19, James E. Blair <jebl...@openstack.org> wrote:

Salvatore Orlando <sorla...@nicira.com> writes:

 > On another note, we noticed that the duplicated jobs currently executed
 > for redundancy in neutron actually seem to point all to the same build id.
 > I'm not sure then if we're actually executing each job twice or just
 > duplicating lines in the jenkins report.

Thanks for catching that, and I'm sorry that didn't work right.  Zuul is
in fact running the jobs twice, but it is only looking at one of them
when sending reports and (more importantly) deciding whether the change
has succeeded or failed.  Fixing this is possible, of course, but turns
out to be a rather complicated change.  Since we don't make heavy use of
this feature, I lean toward simply instantiating multiple instances of
identically configured jobs and invoking them (e.g. "neutron-pg-1",
"neutron-pg-2").

Matthew Treinish has already worked up a patch to do that, and I've
written a patch to revert the incomplete feature from Zuul.


That makes sense to me. I think it is just a matter of how the results
are reported to gerrit, since from what I gather in logstash the jobs are
executed twice for each new patchset or recheck.


For the status of the full job, I took a look at the numbers reported by
Rossella.
All the bugs are already known; some of them are not even bugs; others
have been recently fixed (given the time span of Rossella's analysis, and
the fact that it also covers non-rebased patches, this kind of false
positive is possible).

Of all full job failures, 44% should be discarded.
Bug 1291611 (12%) is definitely not a neutron bug... hopefully.
Bug 1281969 (12%) is really too generic.
It bears the hallmark of bug 1283522, and therefore the high number might
be due to the fact that trunk was plagued by that bug up to a few days
before the analysis.
However, it's worth noting that there is also another instance of "lock
timeout" which has caused 11 failures in the full job in the past week.
A new bug has been filed for this issue:
https://bugs.launchpad.net/neutron/+bug/1298355
Bug 1294603 was related to a test now skipped. It is still being debated
whether the problem lies in test design, neutron LBaaS or neutron L3.

The following bugs seem not to be neutron bugs:
1290642, 1291920, 1252971, 1257885

Bug 1292242 appears to have been fixed while the analysis was going on.
Bug 1277439 instead is already known to affect neutron jobs occasionally.

The actual state of the job is perhaps better than what the raw numbers
say. I would keep monitoring it, and then make it voting after the
Icehouse release is cut, so that we'll be able to deal with a possibly
higher failure rate in the "quiet" period of the release cycle.



-Jim




I reported this bug [1] yesterday.  This was hit in our internal Tempest
runs on RHEL 6.5 x86_64 with the nova libvirt driver and the neutron
openvswitch ML2 driver.  We're running without tenant isolation on python
2.6 (no testr yet) so the tests run serially.  We're running basically the
full API/CLI/Scenarios tests though, no filtering on the smoke tag.


Out of 1,971 tests run, we had 3 failures where a nova instance failed 
to spawn because networking callback events failed, i.e. neutron sends a 
server event request to nova and it's a bad URL, so nova API pukes and
then the networking request in the neutron server fails.  As linked in the
bug report I'm seeing the same neutron server log error showing up in 
logstash for community jobs but it's not 100% failure.  I haven't seen 
the n-api log error show up in logstash though.


Just bringing this to people's attention in case anyone else sees it.

[1] https://bugs.launchpad.net/nova/+bug/1298640

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job - Last 4 days failures

2014-03-27 Thread Salvatore Orlando
On 26 March 2014 19:19, James E. Blair  wrote:

> Salvatore Orlando  writes:
>
> > On another note, we noticed that the duplicated jobs currently executed
> for
> > redundancy in neutron actually seem to point all to the same build id.
> > I'm not sure then if we're actually executing each job twice or just
> > duplicating lines in the jenkins report.
>
> Thanks for catching that, and I'm sorry that didn't work right.  Zuul is
> in fact running the jobs twice, but it is only looking at one of them
> when sending reports and (more importantly) deciding whether the change
> has succeeded or failed.  Fixing this is possible, of course, but turns
> out to be a rather complicated change.  Since we don't make heavy use of
> this feature, I lean toward simply instantiating multiple instances of
> identically configured jobs and invoking them (e.g. "neutron-pg-1",
> "neutron-pg-2").
>
> Matthew Treinish has already worked up a patch to do that, and I've
> written a patch to revert the incomplete feature from Zuul.
>

That makes sense to me. I think it is just a matter of how the results are
reported to gerrit, since from what I gather in logstash the jobs are
executed twice for each new patchset or recheck.


For the status of the full job, I took a look at the numbers reported by
Rossella.
All the bugs are already known; some of them are not even bugs; others have
been recently fixed (given the time span of Rossella's analysis, and the fact
that it also covers non-rebased patches, this kind of false positive is
possible).

Of all full job failures, 44% should be discarded.
Bug 1291611 (12%) is definitely not a neutron bug... hopefully.
Bug 1281969 (12%) is really too generic.
It bears the hallmark of bug 1283522, and therefore the high number might be
due to the fact that trunk was plagued by that bug up to a few days before
the analysis.
However, it's worth noting that there is also another instance of "lock
timeout" which has caused 11 failures in the full job in the past week.
A new bug has been filed for this issue:
https://bugs.launchpad.net/neutron/+bug/1298355
Bug 1294603 was related to a test now skipped. It is still being debated
whether the problem lies in test design, neutron LBaaS or neutron L3.

The following bugs seem not to be neutron bugs:
1290642, 1291920, 1252971, 1257885

Bug 1292242 appears to have been fixed while the analysis was going on.
Bug 1277439 instead is already known to affect neutron jobs occasionally.

The actual state of the job is perhaps better than what the raw numbers
say. I would keep monitoring it, and then make it voting after the Icehouse
release is cut, so that we'll be able to deal with a possibly higher failure
rate in the "quiet" period of the release cycle.



> -Jim
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job - Last 4 days failures

2014-03-26 Thread James E. Blair
Salvatore Orlando  writes:

> On another note, we noticed that the duplicated jobs currently executed for
> redundancy in neutron actually seem to point all to the same build id.
> I'm not sure then if we're actually executing each job twice or just
> duplicating lines in the jenkins report.

Thanks for catching that, and I'm sorry that didn't work right.  Zuul is
in fact running the jobs twice, but it is only looking at one of them
when sending reports and (more importantly) deciding whether the change
has succeeded or failed.  Fixing this is possible, of course, but turns
out to be a rather complicated change.  Since we don't make heavy use of
this feature, I lean toward simply instantiating multiple instances of
identically configured jobs and invoking them (e.g. "neutron-pg-1",
"neutron-pg-2").

Matthew Treinish has already worked up a patch to do that, and I've
written a patch to revert the incomplete feature from Zuul.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job - Last 4 days failures

2014-03-25 Thread Salvatore Orlando
Inline

Salvatore


On 24 March 2014 23:01, Matthew Treinish  wrote:

> On Mon, Mar 24, 2014 at 09:56:09PM +0100, Salvatore Orlando wrote:
> > Thanks a lot!
> >
> > We now need to get on these bugs, and define with QA an acceptable failure
> > rate criterion for switching the full job to voting.
> > It would be good to have a chance to only run the tests against code which
> > is already in master.
> > To this aim we might push a dummy patch, and keep it spinning in the check
> > queue.
>
> Honestly, there isn't really a number. I had a thread trying to get
> consensus on that back when I first made tempest run in parallel. What I
> ended up doing back then and what we've done since for this kind of change
> is to just pick a slower week for the gate and just green light it, of
> course after checking to make sure that if it blows up we're not blocking
> anything critical.


Then I guess the ideal period would be after the RC2s are cut.
Also, we'd need to run at least a postgres flavour of the job as well,
meaning that the probability of a patch passing the gate is actually the
combined probability of two jobs completing successfully.
On another note, we noticed that the duplicated jobs currently executed for
redundancy in neutron actually seem to point all to the same build id.
I'm not sure then if we're actually executing each job twice or just
duplicating lines in the jenkins report.


> If it looks like it's passing at roughly the same rate as everything else
> and you guys think it's ready. 25% is definitely too high; for comparison,
> when I looked a couple of minutes ago at the numbers for the past 4 days on
> the equivalent job with nova-network, it only failed 4% of the time (12 out
> of 300). But that number does fluctuate quite a bit; for example, looking at
> the past week the number grows to 11.6% (171 out of 1480).


Even with 11.6% I would not enable it.
Running both the mysql and pg jobs would give us a combined success rate of
78.1%, which pretty much means the chance of successfully clearing a 5-deep
queue in the gate would be a mere 29%. My "gut" metric is that we should
achieve a pass rate which allows us to clear a 10-deep gate queue with a 50%
success rate. This translates to a 3.5% failure rate per job, which is
indeed in line with what's currently observed for nova-network.
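
The arithmetic above can be reproduced with a small Python sketch; it is a
back-of-the-envelope check only, and it assumes job failures are independent
and that a gate queue clears only if every change in it passes:

    # Back-of-the-envelope check of the gate failure-rate arithmetic.
    per_job_failure = 0.116                  # observed full-job failure rate (past week)
    per_job_success = 1 - per_job_failure
    combined_success = per_job_success ** 2  # a change must pass both the mysql and pg jobs

    queue_5 = combined_success ** 5          # chance of clearing a 5-deep gate queue

    # Per-job failure rate needed so a 10-deep queue clears 50% of the time.
    required_combined = 0.5 ** (1 / 10)
    required_per_job_failure = 1 - required_combined ** 0.5

    print(f"combined success per change: {combined_success:.1%}")          # ~78.1%
    print(f"5-deep queue clears:         {queue_5:.1%}")                   # ~29%
    print(f"target per-job failure rate: {required_per_job_failure:.1%}")  # ~3.4%, roughly 3.5%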

> Doing it this way doesn't seem like the best, but until it's gating, things
> really don't get the attention they deserve and more bugs will just slip in
> while you wait. There will most likely be initial pain after it merges, but
> it's the only real way to lock it down and make forward progress.
>

> -Matt Treinish
>
> >
> >
> > On 24 March 2014 21:45, Rossella Sblendido  wrote:
> >
> > > Hello all,
> > >
> > > here is an update regarding the Neutron full parallel job.
> > > I used the following Logstash query [1] that checks the failures of the
> > > last 4 days (the last bug fix related to the full job was merged 4 days
> > > ago).
> > > These are the results:
> > >
> > > 123 failures (25% of the total)
> > >
> > > I took a sample of 50 failures and I obtained the following:
> > >
> > > 22% legitimate failures (they are due to the code change introduced by
> > > the patch)
> > > 22% infra issues
> > > 12% https://bugs.launchpad.net/openstack-ci/+bug/1291611
> > > 12% https://bugs.launchpad.net/tempest/+bug/1281969
> > > 8% https://bugs.launchpad.net/tempest/+bug/1294603
> > > 3% https://bugs.launchpad.net/neutron/+bug/1283522
> > > 3% https://bugs.launchpad.net/neutron/+bug/1291920
> > > 3% https://bugs.launchpad.net/nova/+bug/1290642
> > > 3% https://bugs.launchpad.net/tempest/+bug/1252971
> > > 3% https://bugs.launchpad.net/horizon/+bug/1257885
> > > 3% https://bugs.launchpad.net/tempest/+bug/1292242
> > > 3% https://bugs.launchpad.net/neutron/+bug/1277439
> > > 3% https://bugs.launchpad.net/neutron/+bug/1283599
> > >
> > > cheers,
> > >
> > > Rossella
> > >
> > > [1] http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOi
> > > BcImNoZWNrLXRlbXBlc3QtZHN2bS1uZXV0cm9uLWZ1bGxcIiBBTkQgbWVzc2
> > > FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIgQU5EIHRhZ3M6Y29uc29sZSIsIm
> > > ZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiY3VzdG9tIiwiZ3
> > > JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7ImZyb20iOiIyMDE0LTAzLTIwVD
> > > EzOjU0OjI1KzAwOjAwIiwidG8iOiIyMDE0LTAzLTI0VDEzOjU0OjI1KzAwOj
> > > AwIiwidXNlcl9pbnRlcnZhbCI6IjAifSwibW9kZSI6IiIsImFuYWx5emVfZm
> > > llbGQiOiIiLCJzdGFtcCI6MTM5NTY3MDY2ODc0OX0=
> > >

Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job - Last 4 days failures

2014-03-24 Thread Matthew Treinish
On Mon, Mar 24, 2014 at 09:56:09PM +0100, Salvatore Orlando wrote:
> Thanks a lot!
> 
> We now need to get on these bugs, and define with QA an acceptable failure
> rate criterion for switching the full job to voting.
> It would be good to have a chance to only run the tests against code which
> is already in master.
> To this aim we might push a dummy patch, and keep it spinning in the check
> queue.

Honestly, there isn't really a number. I had a thread trying to get consensus on
that back when I first made tempest run in parallel. What I ended up doing back
then, and what we've done since for this kind of change, is to just pick a slower
week for the gate and green light it, of course after checking that if it blows
up we're not blocking anything critical, that it looks like it's passing at
roughly the same rate as everything else, and that you guys think it's ready.
25% is definitely too high; for comparison, when I looked a couple of minutes
ago at the numbers for the past 4 days on the equivalent job with nova-network,
it only failed 4% of the time (12 out of 300). But that number does fluctuate
quite a bit; for example, looking at the past week the number grows to 11.6%
(171 out of 1480).
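
Those rates follow directly from the quoted counts; as a tiny Python sketch
using only the numbers given in this message:

    # Failure rates for the nova-network equivalent job, from the counts above.
    def failure_rate(failed, total):
        return 100.0 * failed / total

    print(f"past 4 days: {failure_rate(12, 300):.1f}%")    # 4.0%
    print(f"past week:   {failure_rate(171, 1480):.1f}%")  # 11.6%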

Doing it this way doesn't seem like the best, but until it's gating, things
really don't get the attention they deserve and more bugs will just slip in
while you wait. There will most likely be initial pain after it merges, but it's
the only real way to lock it down and make forward progress.

-Matt Treinish

> 
> 
> On 24 March 2014 21:45, Rossella Sblendido  wrote:
> 
> > Hello all,
> >
> > here is an update regarding the Neutron full parallel job.
> > I used the following Logstash query [1] that checks the failures of the
> > last 4 days (the last bug fix related to the full job was merged 4 days ago).
> > These are the results:
> >
> > 123 failures (25% of the total)
> >
> > I took a sample of 50 failures and I obtained the following:
> >
> > 22% legitimate failures (they are due to the code change introduced by the
> > patch)
> > 22% infra issues
> > 12% https://bugs.launchpad.net/openstack-ci/+bug/1291611
> > 12% https://bugs.launchpad.net/tempest/+bug/1281969
> > 8% https://bugs.launchpad.net/tempest/+bug/1294603
> > 3% https://bugs.launchpad.net/neutron/+bug/1283522
> > 3% https://bugs.launchpad.net/neutron/+bug/1291920
> > 3% https://bugs.launchpad.net/nova/+bug/1290642
> > 3% https://bugs.launchpad.net/tempest/+bug/1252971
> > 3% https://bugs.launchpad.net/horizon/+bug/1257885
> > 3% https://bugs.launchpad.net/tempest/+bug/1292242
> > 3% https://bugs.launchpad.net/neutron/+bug/1277439
> > 3% https://bugs.launchpad.net/neutron/+bug/1283599
> >
> > cheers,
> >
> > Rossella
> >
> > [1] http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOi
> > BcImNoZWNrLXRlbXBlc3QtZHN2bS1uZXV0cm9uLWZ1bGxcIiBBTkQgbWVzc2
> > FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIgQU5EIHRhZ3M6Y29uc29sZSIsIm
> > ZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiY3VzdG9tIiwiZ3
> > JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7ImZyb20iOiIyMDE0LTAzLTIwVD
> > EzOjU0OjI1KzAwOjAwIiwidG8iOiIyMDE0LTAzLTI0VDEzOjU0OjI1KzAwOj
> > AwIiwidXNlcl9pbnRlcnZhbCI6IjAifSwibW9kZSI6IiIsImFuYWx5emVfZm
> > llbGQiOiIiLCJzdGFtcCI6MTM5NTY3MDY2ODc0OX0=
> >


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job - Last 4 days failures

2014-03-24 Thread Salvatore Orlando
Thanks a lot!

We now need to get on these bugs, and define with QA an acceptable failure
rate criterion for switching the full job to voting.
It would be good to have a chance to only run the tests against code which
is already in master.
To this aim we might push a dummy patch, and keep it spinning in the check
queue.

Salvatore


On 24 March 2014 21:45, Rossella Sblendido  wrote:

> Hello all,
>
> here is an update regarding the Neutron full parallel job.
> I used the following Logstash query [1] that checks the failures of the
> last 4 days (the last bug fix related to the full job was merged 4 days ago).
> These are the results:
>
> 123 failures (25% of the total)
>
> I took a sample of 50 failures and I obtained the following:
>
> 22% legitimate failures (they are due to the code change introduced by the
> patch)
> 22% infra issues
> 12% https://bugs.launchpad.net/openstack-ci/+bug/1291611
> 12% https://bugs.launchpad.net/tempest/+bug/1281969
> 8% https://bugs.launchpad.net/tempest/+bug/1294603
> 3% https://bugs.launchpad.net/neutron/+bug/1283522
> 3% https://bugs.launchpad.net/neutron/+bug/1291920
> 3% https://bugs.launchpad.net/nova/+bug/1290642
> 3% https://bugs.launchpad.net/tempest/+bug/1252971
> 3% https://bugs.launchpad.net/horizon/+bug/1257885
> 3% https://bugs.launchpad.net/tempest/+bug/1292242
> 3% https://bugs.launchpad.net/neutron/+bug/1277439
> 3% https://bugs.launchpad.net/neutron/+bug/1283599
>
> cheers,
>
> Rossella
>
> [1] http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOi
> BcImNoZWNrLXRlbXBlc3QtZHN2bS1uZXV0cm9uLWZ1bGxcIiBBTkQgbWVzc2
> FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIgQU5EIHRhZ3M6Y29uc29sZSIsIm
> ZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiY3VzdG9tIiwiZ3
> JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7ImZyb20iOiIyMDE0LTAzLTIwVD
> EzOjU0OjI1KzAwOjAwIiwidG8iOiIyMDE0LTAzLTI0VDEzOjU0OjI1KzAwOj
> AwIiwidXNlcl9pbnRlcnZhbCI6IjAifSwibW9kZSI6IiIsImFuYWx5emVfZm
> llbGQiOiIiLCJzdGFtcCI6MTM5NTY3MDY2ODc0OX0=
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
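
For reference, the Logstash link in Rossella's message ([1] above) carries the
whole query as a base64-encoded JSON fragment in the URL. A minimal decoding
sketch follows, assuming Python 3 and a full copy of the URL on the command
line; for this particular link the decoded object should contain a search
string along the lines of build_name: "check-tempest-dsvm-neutron-full" AND
message:"Finished: FAILURE" AND tags:console, matching the description in the
message.

    import base64
    import json
    import sys
    from urllib.parse import urlsplit

    def decode_logstash_fragment(url):
        """Decode the base64-encoded JSON carried in a logstash.openstack.org URL fragment."""
        fragment = urlsplit(url).fragment
        # Normalise URL-safe base64 and restore any stripped padding before decoding.
        fragment = fragment.replace('-', '+').replace('_', '/')
        fragment += '=' * (-len(fragment) % 4)
        return json.loads(base64.b64decode(fragment))

    if __name__ == '__main__':
        # Usage: python decode_logstash.py 'http://logstash.openstack.org/#eyJzZWFyY2gi...'
        query = decode_logstash_fragment(sys.argv[1])
        print(query.get('search'))
        print(query.get('time'))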