Re: [openstack-dev] Spain Visa for Indian contributors

2016-09-14 Thread John Villalovos
On that page they seem to list a contact email:

eventv...@openstack.org

Hopefully they can help with the issue.

John

On Wed, Sep 14, 2016 at 9:24 PM, Kekane, Abhishek
 wrote:
> Hi John,
>
> I have read the information given at
> https://www.openstack.org/summit/barcelona-2016/travel/#visa
> and got the invitation letter, but it's in English. The problem is that
> visa centers in India are demanding this invitation letter in both
> English and Spanish.
>
> Thank you,
>
> Abhishek Kekane
>
> -Original Message-
> From: John Villalovos [mailto:openstack@sodarock.com]
> Sent: Thursday, September 15, 2016 9:45 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Spain Visa for Indian contributors
>
> There is information on the Visa process at:
> https://www.openstack.org/summit/barcelona-2016/travel/#visa
>
> Not sure if you have already read that information.
>
> They talk about the steps needed to get an invitation letter.
>
> Good luck!
>
>
> On Wed, Sep 14, 2016 at 8:51 PM, Kekane, Abhishek 
>  wrote:
>> Hi Devs, Stackers,
>>
>>
>>
>> While applying for a visa from Pune (India), I came to know that the
>> invitation letter is required in Spanish and that this is mandatory.
>>
>> Has anyone from India faced a similar issue while applying for a visa?
>>
>> If not, please let me know from which city you applied for the visa.
>>
>>
>>
>>
>>
>> Thank you,
>>
>>
>>
>> Abhishek Kekane
>>
>>
>> __
>> Disclaimer: This email and any attachments are sent in strictest
>> confidence for the sole use of the addressee and may contain legally
>> privileged, confidential, and proprietary data. If you are not the
>> intended recipient, please advise the sender by replying promptly to
>> this email and then delete and destroy this email and any attachments
>> without any further use, copying or forwarding.
>>
>> __
>>  OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> Disclaimer: This email and any attachments are sent in strictest confidence
> for the sole use of the addressee and may contain legally privileged,
> confidential, and proprietary data. If you are not the intended recipient,
> please advise the sender by replying promptly to this email and then delete
> and destroy this email and any attachments without any further use, copying
> or forwarding.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ptl] Self non-nomination for Kolla PTL for Ocata cycle

2016-09-14 Thread Swapnil Kulkarni
On Mon, Sep 12, 2016 at 10:34 PM, Steven Dake (stdake)  wrote:
> To the OpenStack Community,
>
>
>
> Consider this email my self non-nomination for PTL of Kolla for
> the coming Ocata release.  I let the team know in our IRC team meeting
> several months ago that I was passing on the baton at the conclusion of
> Newton, but I thought the broader OpenStack community would appreciate
> the information.
>
> I am super proud of how our community has grown, from a tiny struggling
> group of 3 people starting 3 years ago to the strongly emergent system
> that is Kolla, with over 467 total contributors [1] since inception and
> closing in on 5,000 commits today.
>
> In my opinion, the Kolla community is well on its way to conquering the
> last great challenge OpenStack faces: making operational deployment
> management (ODM) of OpenStack cloud platforms straightforward, easy,
> and most importantly cost effective for the long-term management of
> OpenStack.
>
> The original objective the Kolla community set out to accomplish,
> deploying OpenStack in containers at 100 node scale, has been achieved,
> as proven by this review [2].  In these 12 scenarios, we were able to
> deploy with 3 controllers, 100 compute nodes, and 20 storage nodes
> using Ceph for all storage, and to run rally as well as tempest against
> the deployment.
>
> Kolla is _No_Longer_a_Toy_ and _has_not_been_ since Liberty 1.1.0.
>
> I have developed a strong leadership pipeline and expect several
> candidates to self-nominate.  I wish all of them the best in the future
> PTL elections.
>
> Finally, I would like to thank all of the folks that have supported
> Kolla's objectives.  If I listed the folks individually this email
> would be far too long, but you know who you are ☺ Thank you for placing
> trust in my judgement.
>
> It has been a pleasure to serve as your leader.
>
> Regards
>
> -steak
>
> [1] http://stackalytics.com/report/contribution/kolla-group/2000
>
> [2] https://review.openstack.org/#/c/352101/

Thank you, Steve, for your role in working with the team and for
creating a perfect track for Kolla to roll on.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Spain Visa for Indian contributors

2016-09-14 Thread Kekane, Abhishek
Hi John,

I have read the information given at
https://www.openstack.org/summit/barcelona-2016/travel/#visa
and got the invitation letter, but it's in English. The problem is that
visa centers in India are demanding this invitation letter in both
English and Spanish.

Thank you,

Abhishek Kekane

-Original Message-
From: John Villalovos [mailto:openstack@sodarock.com] 
Sent: Thursday, September 15, 2016 9:45 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Spain Visa for Indian contributors

There is information on the Visa process at:
https://www.openstack.org/summit/barcelona-2016/travel/#visa

Not sure if you have already read that information.

They talk about the steps needed to get an invitation letter.

Good luck!


On Wed, Sep 14, 2016 at 8:51 PM, Kekane, Abhishek  
wrote:
> Hi Devs, Stackers,
>
>
>
> While applying for a visa from Pune (India), I came to know that the
> invitation letter is required in Spanish and that this is mandatory.
>
> Has anyone from India faced a similar issue while applying for a visa?
>
> If not, please let me know from which city you applied for the visa.
>
>
>
>
>
> Thank you,
>
>
>
> Abhishek Kekane
>
>
> __
> Disclaimer: This email and any attachments are sent in strictest 
> confidence for the sole use of the addressee and may contain legally 
> privileged, confidential, and proprietary data. If you are not the 
> intended recipient, please advise the sender by replying promptly to 
> this email and then delete and destroy this email and any attachments 
> without any further use, copying or forwarding.
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Spain Visa for Indian contributors

2016-09-14 Thread John Villalovos
There is information on the Visa process at:
https://www.openstack.org/summit/barcelona-2016/travel/#visa

Not sure if you have already read that information.

They talk about the steps needed to get an invitation letter.

Good luck!


On Wed, Sep 14, 2016 at 8:51 PM, Kekane, Abhishek
 wrote:
> Hi Devs, Stackers,
>
>
>
> While applying for a visa from Pune (India), I came to know that the
> invitation letter is required in Spanish and that this is mandatory.
>
> Has anyone from India faced a similar issue while applying for a visa?
>
> If not, please let me know from which city you applied for the visa.
>
>
>
>
>
> Thank you,
>
>
>
> Abhishek Kekane
>
>
> __
> Disclaimer: This email and any attachments are sent in strictest confidence
> for the sole use of the addressee and may contain legally privileged,
> confidential, and proprietary data. If you are not the intended recipient,
> please advise the sender by replying promptly to this email and then delete
> and destroy this email and any attachments without any further use, copying
> or forwarding.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Mike Bayer



On 09/14/2016 11:05 PM, Mike Bayer wrote:


Are *these* errors also new as of version 4.13.3 of oslo.db?   Because
here I have more suspicion of one particular oslo.db change.


The version in question that has the changes to provisioning, and
anything really to do with this area, is 4.12.0.   So if you didn't see
any problem with 4.12, then oslo.db is almost definitely not the cause -
the code changes subsequent to 4.12 have no relationship to any system
used by the opportunistic test base.    I would hope at least that 4.12
is the version where we see things changing, because there were small
changes to the provisioning code.


But at the same time, I'm combing through the quite small adjustments to
the provisioning code as of 4.12.0 and I'm not seeing what could
introduce this issue.   That said, we really should never see the kind
of error we see with the "DROP DATABASE" failing because the database
remains in use; however, this can be a side effect of the test itself
having problems with the state of a different connection that is not
being closed, so that locks remain held.


That is, there are poor failure modes for sure here; I just can't see
anything in 4.13 or even 4.12 that would suddenly introduce them.


By all means, if these failures disappear when we go to 4.11 vs. 4.12,
that would be where we need to look next cycle.    From my POV, if the
failures do disappear, then that would be the best evidence that the
oslo.db version is the factor.












On 09/14/2016 10:48 PM, Mike Bayer wrote:



On 09/14/2016 07:04 PM, Alan Pevec wrote:

Oslo.db 4.13.3 did hit the scene about the time this showed up. So I
think we need to strongly consider blocking it and revisiting these
issues post newton.


So that means reverting all stable/newton changes; previous 4.13.x
releases have already been blocked: https://review.openstack.org/365565
How would we proceed? Do we need to revert all backports on
stable/newton?


In case my previous email wasn't clear, I don't *yet* see evidence that
the recent 4.13.3 release of oslo.db is the cause of this problem.
However, that is only based upon what I see in this stack trace, which
is that the test framework is acting predictably (though erroneously)
based on the timeout condition which is occurring.   I don't (yet) see a
reason that the same effect would not occur prior to 4.13.3 in the face
of a signal pre-empting the work of the pymysql driver mid-stream.
However, this assumes that the timeout condition itself is not a product
of the current oslo.db version and that is not known yet.

There's a list of questions that should all be answerable which could
assist in giving some hints towards this.

There are two parts to the error in the logs.  There's the "timeout"
condition, then there is the bad reaction of the PyMySQL driver and the
test framework as a result of the operation being interrupted within the
test.

* Prior to oslo.db 4.13.3, did we ever see this "timeout" condition
occur?   If so, was it also accompanied by the same "resource closed"
condition or did this second part of the condition only appear at 4.13.3?

* Did we see a similar "timeout" / "resource closed" combination prior
to 4.13.3, just with less frequency?

* Was the version of PyMySQL also recently upgraded (I'm assuming this
environment has been on PyMySQL for a long time at this point) ?   What
was the version change if so?  Especially if we previously saw "timeout"
but no "resource closed", perhaps an older version of PyMySQL didn't
react in this way?

* Was the version of MySQL running in the CI environment changed?   What
was the version change if so?    Were there any configuration changes
such as transaction isolation, memory or process settings?

* Have there been changes to the "timeout" logic itself in the test
suite, e.g. whatever it is that sets up fixtures.Timeout()?  Or some
change that alters how teardown of tests occurs when a test is
interrupted via this timeout?

* What is the magnitude of the "timeout" this fixture is using, is it on
the order of seconds, minutes, hours?

* If many minutes or hours, can the test suite be observed to be stuck
on this test?   Has someone tried to run a "SHOW PROCESSLIST" while this
condition is occurring to see what SQL is pausing?

* Has there been some change such that the migration tests are running
against non-empty tables or tables with much more data than was present
before?

* Is this failure only present within the Nova test suite or has it been
observed in the test suites of other projects?

* Is this failure present only on the "database migration" test suite or
is it present in other opportunistic tests, for Nova and others?

* Have there been new database migrations added to Nova which are being
exercised here and may be involved?

I'm not sure how much of an inconvenience it is to downgrade oslo.db. If
downgrading it is feasible, that would at least be a way to eliminate it
as a possibility if these same failures continue to occur, or a way to
confirm its involvement if they disappear.

[openstack-dev] Spain Visa for Indian contributors

2016-09-14 Thread Kekane, Abhishek
Hi Devs, Stackers,

While applying for a visa from Pune (India), I came to know that the invitation
letter is required in Spanish and that this is mandatory.
Has anyone from India faced a similar issue while applying for a visa?

If not, please let me know from which city you applied for the visa.


Thank you,

Abhishek Kekane

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Mike Bayer
There's a different set of logs attached to the launchpad issue; that's
not what I was looking at before.


These logs are at 
http://logs.openstack.org/90/369490/1/check/gate-nova-tox-db-functional-ubuntu-xenial/085ac3e/console.html#_2016-09-13_14_54_18_098031 
.    In these logs, I see something *very* different: not just the MySQL
tests but also the Postgresql tests are definitely hitting conflicts
against the randomly generated database.
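
For context, each opportunistic test run provisions a scratch database under a short random name like "dbzrtmgbxv". A hypothetical generator for such names (an illustration of the scheme only, not oslo.db's actual code) might look like:

```python
import random
import string

def random_db_name(length=10):
    """Build a throwaway database name such as 'dbzrtmgbxv': a short,
    all-lowercase token so that concurrent test runs get distinct
    scratch databases and do not collide."""
    suffix = "".join(random.choice(string.ascii_lowercase)
                     for _ in range(length - 2))
    return "db" + suffix

name = random_db_name()
print(name)
```

The random name is exactly why a leftover session is so damaging: nothing else should ever be attached to that database, so any second session is a leak from the test run itself.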


This set of traces, e.g.:

sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) database 
"dbzrtmgbxv" is being accessed by other users
2016-09-13 14:54:18.093723 | DETAIL:  There is 1 other session using 
the database.

2016-09-13 14:54:18.093736 |  [SQL: 'DROP DATABASE dbzrtmgbxv']

and

File 
"/home/jenkins/workspace/gate-nova-tox-db-functional-ubuntu-xenial/.tox/functional/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", 
line 668, in _rollback_impl
2016-09-13 14:54:18.095470 | 
self.engine.dialect.do_rollback(self.connection)
2016-09-13 14:54:18.095513 |   File 
"/home/jenkins/workspace/gate-nova-tox-db-functional-ubuntu-xenial/.tox/functional/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", 
line 420, in do_rollback

2016-09-13 14:54:18.095526 | dbapi_connection.rollback()
2016-09-13 14:54:18.095548 | sqlalchemy.exc.InterfaceError: 
(psycopg2.InterfaceError) connection already closed


are a very different animal. For one thing, they're on Postgresql, where
the driver and DB act extremely rationally.   For another, there's no
timeout exception here, and not all the conflicts are within the teardown.
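
The "being accessed by other users" failure mode is easy to reproduce in miniature. The sketch below is a toy SQLite analogue (not Nova's Postgres setup and not oslo.db code): a session that is never closed keeps a lock, the destructive teardown step fails, and releasing the leaked session clears it.

```python
import os
import sqlite3
import tempfile

# Two sessions against the same scratch database file.
path = os.path.join(tempfile.mkdtemp(), "scratch.db")
admin = sqlite3.connect(path, timeout=0.1)
leaked = sqlite3.connect(path, timeout=0.1)

admin.execute("CREATE TABLE t (x INTEGER)")
admin.commit()

# A "leaked" session from a previous test opens a write transaction
# and never commits, rolls back, or closes.
leaked.execute("INSERT INTO t VALUES (1)")

# Teardown now fails because the database is still "in use" -- the
# SQLite analogue of Postgres refusing DROP DATABASE while another
# session is connected.
err = None
try:
    admin.execute("DROP TABLE t")
except sqlite3.OperationalError as exc:
    err = str(exc)
print(err)

# Rolling back (or closing) the leaked session releases the lock,
# after which the destructive operation succeeds.
leaked.rollback()
admin.execute("DROP TABLE t")
```

The takeaway matches the diagnosis above: the DROP failure is a symptom; the bug is whichever earlier test left a connection open.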


Are *these* errors also new as of version 4.13.3 of oslo.db?   Because
here I have more suspicion of one particular oslo.db change.









On 09/14/2016 10:48 PM, Mike Bayer wrote:



On 09/14/2016 07:04 PM, Alan Pevec wrote:

Oslo.db 4.13.3 did hit the scene about the time this showed up. So I
think we need to strongly consider blocking it and revisiting these
issues post newton.


So that means reverting all stable/newton changes; previous 4.13.x
releases have already been blocked: https://review.openstack.org/365565
How would we proceed? Do we need to revert all backports on stable/newton?


In case my previous email wasn't clear, I don't *yet* see evidence that
the recent 4.13.3 release of oslo.db is the cause of this problem.
However, that is only based upon what I see in this stack trace, which
is that the test framework is acting predictably (though erroneously)
based on the timeout condition which is occurring.   I don't (yet) see a
reason that the same effect would not occur prior to 4.13.3 in the face
of a signal pre-empting the work of the pymysql driver mid-stream.
However, this assumes that the timeout condition itself is not a product
of the current oslo.db version and that is not known yet.

There's a list of questions that should all be answerable which could
assist in giving some hints towards this.

There are two parts to the error in the logs.  There's the "timeout"
condition, then there is the bad reaction of the PyMySQL driver and the
test framework as a result of the operation being interrupted within the
test.

* Prior to oslo.db 4.13.3, did we ever see this "timeout" condition
occur?   If so, was it also accompanied by the same "resource closed"
condition or did this second part of the condition only appear at 4.13.3?

* Did we see a similar "timeout" / "resource closed" combination prior
to 4.13.3, just with less frequency?

* Was the version of PyMySQL also recently upgraded (I'm assuming this
environment has been on PyMySQL for a long time at this point) ?   What
was the version change if so?  Especially if we previously saw "timeout"
but no "resource closed", perhaps an older version of PyMySQL didn't
react in this way?

* Was the version of MySQL running in the CI environment changed?   What
was the version change if so?    Were there any configuration changes
such as transaction isolation, memory or process settings?

* Have there been changes to the "timeout" logic itself in the test
suite, e.g. whatever it is that sets up fixtures.Timeout()?  Or some
change that alters how teardown of tests occurs when a test is
interrupted via this timeout?

* What is the magnitude of the "timeout" this fixture is using, is it on
the order of seconds, minutes, hours?

* If many minutes or hours, can the test suite be observed to be stuck
on this test?   Has someone tried to run a "SHOW PROCESSLIST" while this
condition is occurring to see what SQL is pausing?

* Has there been some change such that the migration tests are running
against non-empty tables or tables with much more data than was present
before?

* Is this failure only present within the Nova test suite or has it been
observed in the test suites of other projects?

* Is this failure present only on the "database migration" test suite or
is it present in other opportunistic tests, for Nova and others?

Re: [openstack-dev] [kolla][ptl] Self non-nomination for Kolla PTL for Ocata cycle

2016-09-14 Thread duon...@vn.fujitsu.com
Hi sdake (or steak? whatever)

Thank you very much for your effort driving Kolla into good shape and
building a leadership pipeline.

I hope you will keep working with the Kolla team for as long as possible.

From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: Tuesday, September 13, 2016 12:05 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [kolla][ptl] Self non-nomination for Kolla PTL for 
Ocata cycle

To the OpenStack Community,

Consider this email my self non-nomination for PTL of Kolla for
the coming Ocata release.  I let the team know in our IRC team meeting
several months ago that I was passing on the baton at the conclusion of Newton,
but I thought the broader OpenStack community would appreciate the information.

I am super proud of how our community has grown, from a tiny struggling group
of 3 people starting 3 years ago to the strongly emergent system that is Kolla,
with over 467 total contributors [1] since inception and closing in on 5,000
commits today.

In my opinion, the Kolla community is well on its way to conquering the last
great challenge OpenStack faces: making operational deployment management (ODM)
of OpenStack cloud platforms straightforward, easy, and most importantly
cost effective for the long-term management of OpenStack.

The original objective the Kolla community set out to accomplish, deploying
OpenStack in containers at 100 node scale, has been achieved, as proven by this
review [2].  In these 12 scenarios, we were able to deploy with 3
controllers, 100 compute nodes, and 20 storage nodes using Ceph for all
storage and run rally as well as tempest against the deployment.

Kolla is _No_Longer_a_Toy_ and _has_not_been_ since Liberty 1.1.0.

I have developed a strong leadership pipeline and expect several candidates
to self-nominate.  I wish all of them the best in the future PTL elections.

Finally, I would like to thank all of the folks that have supported Kolla’s
objectives.  If I listed the folks individually this email would be far too
long, but you know who you are ☺ Thank you for placing trust in my judgement.

It has been a pleasure to serve as your leader.

Regards
-steak

[1] http://stackalytics.com/report/contribution/kolla-group/2000
[2] https://review.openstack.org/#/c/352101/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Mike Bayer



On 09/14/2016 07:04 PM, Alan Pevec wrote:

Oslo.db 4.13.3 did hit the scene about the time this showed up. So I
think we need to strongly consider blocking it and revisiting these
issues post newton.


So that means reverting all stable/newton changes; previous 4.13.x
releases have already been blocked: https://review.openstack.org/365565
How would we proceed? Do we need to revert all backports on stable/newton?


In case my previous email wasn't clear, I don't *yet* see evidence that 
the recent 4.13.3 release of oslo.db is the cause of this problem. 
However, that is only based upon what I see in this stack trace, which 
is that the test framework is acting predictably (though erroneously) 
based on the timeout condition which is occurring.   I don't (yet) see a 
reason that the same effect would not occur prior to 4.13.3 in the face 
of a signal pre-empting the work of the pymysql driver mid-stream. 
However, this assumes that the timeout condition itself is not a product 
of the current oslo.db version and that is not known yet.


There's a list of questions that should all be answerable which could 
assist in giving some hints towards this.


There are two parts to the error in the logs.  There's the "timeout"
condition, then there is the bad reaction of the PyMySQL driver and the 
test framework as a result of the operation being interrupted within the 
test.


* Prior to oslo.db 4.13.3, did we ever see this "timeout" condition 
occur?   If so, was it also accompanied by the same "resource closed" 
condition or did this second part of the condition only appear at 4.13.3?


* Did we see a similar "timeout" / "resource closed" combination prior 
to 4.13.3, just with less frequency?


* Was the version of PyMySQL also recently upgraded (I'm assuming this 
environment has been on PyMySQL for a long time at this point) ?   What 
was the version change if so?  Especially if we previously saw "timeout" 
but no "resource closed", perhaps an older version of PyMySQL didn't
react in this way?


* Was the version of MySQL running in the CI environment changed?   What 
was the version change if so?    Were there any configuration changes
such as transaction isolation, memory or process settings?


* Have there been changes to the "timeout" logic itself in the test 
suite, e.g. whatever it is that sets up fixtures.Timeout()?  Or some 
change that alters how teardown of tests occurs when a test is 
interrupted via this timeout?


* What is the magnitude of the "timeout" this fixture is using, is it on 
the order of seconds, minutes, hours?


* If many minutes or hours, can the test suite be observed to be stuck 
on this test?   Has someone tried to run a "SHOW PROCESSLIST" while this 
condition is occurring to see what SQL is pausing?


* Has there been some change such that the migration tests are running 
against non-empty tables or tables with much more data than was present 
before?


* Is this failure only present within the Nova test suite or has it been 
observed in the test suites of other projects?


* Is this failure present only on the "database migration" test suite or 
is it present in other opportunistic tests, for Nova and others?


* Have there been new database migrations added to Nova which are being 
exercised here and may be involved?


I'm not sure how much of an inconvenience it is to downgrade oslo.db. 
If downgrading it is feasible, that would at least be a way to eliminate 
it as a possibility if these same failures continue to occur, or a way 
to confirm its involvement if they disappear.   But if downgrading is 
disruptive then there are other things to look at in order to have a 
better chance at predicting its involvement.
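
As background for the fixtures.Timeout questions above, the pre-emption mechanism can be sketched in a few lines. This is a simplified, Unix-only stand-in (assuming a SIGALRM-based guard, which is how such test fixtures typically work), not the fixtures library itself; the point is that the exception fires at an arbitrary point in the running code, possibly while a DB driver is mid-way through a socket read, which is exactly the state that later surfaces as "resource closed".

```python
import signal

class Timeout:
    """Toy stand-in for a fixtures.Timeout-style guard: arm SIGALRM on
    entry, raise from the signal handler when the budget is exhausted,
    disarm on exit.  The raise can interrupt any Python statement."""

    def __init__(self, seconds):
        self.seconds = seconds

    def __enter__(self):
        signal.signal(signal.SIGALRM, self._handle)
        signal.alarm(self.seconds)
        return self

    def __exit__(self, *exc):
        signal.alarm(0)  # disarm on the way out
        return False

    def _handle(self, signum, frame):
        raise TimeoutError("test exceeded its time budget")

caught = None
try:
    with Timeout(1):
        while True:  # stand-in for a stuck SQL statement
            pass
except TimeoutError as exc:
    caught = exc
print("pre-empted:", caught)
```

If the interrupted statement was a driver-level read, the connection's protocol state is now undefined, and the subsequent teardown/rollback is what actually reports the error.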






Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI is currently down: 2 blockers

2016-09-14 Thread Emilien Macchi
Hi,

Just a heads-up before end of day:

1) The multinode job is failing 80% of the time. James and I made some
attempts to revert or fix things, but we have been unlucky so far.
Everything is documented here: https://bugs.launchpad.net/tripleo/+bug/1623606

2) ovb jobs are timing out during NetworkDeployment because
99-refresh-completed is not signaling Heat, due to instance-id being
detected as null by os-apply-config.
James proposed a revert: https://review.openstack.org/#/c/370250/
But the patch can't be merged because of 1).

I'll continue to work on it tomorrow, but if you're able to jump in and
make progress on it, please do - this downtime is very critical at this
stage of the cycle.

Any help is highly welcome.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Question about fixing missing soft deleted rows

2016-09-14 Thread Matt Riedemann

I'm looking for other input on a question I have in this change:

https://review.openstack.org/#/c/345191/4/nova/db/sqlalchemy/api.py

We've had a few patches like this where we don't (soft) delete entries 
related to an instance when that instance record is (soft) deleted. 
These then cause the archive command to fail because of the referential 
constraint.


Then we go in and add a new entry in the instance_destroy method so we
start (soft) deleting *new* things, but we don't clean up anything old.

In the change above this is working around the fact we might have 
lingering consoles entries for an instance that's being archived.


One suggestion I made was adding a database migration that soft deletes
any console entries where the related instance is deleted (deleted !=
0). Is that a bad idea? It's not a schema migration; it's data cleanup
so that archive works. We could do the same thing with a nova-manage
command, but we don't know that someone has run it, the way we do with
the DB migrations.
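
For illustration, that data-cleanup migration might look roughly like the following. This is a sketch against a hypothetical miniature schema (the table and column names are stand-ins, not Nova's real schema, and a real migration would go through the migration framework rather than raw SQLite); it uses Nova's soft-delete convention of setting the deleted column to the row's id.

```python
import sqlite3

# Miniature stand-in for the instances/consoles relationship.
# deleted == 0 means live; deleted == id means soft-deleted.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instances (id INTEGER PRIMARY KEY, deleted INTEGER DEFAULT 0);
    CREATE TABLE consoles (
        id INTEGER PRIMARY KEY,
        instance_id INTEGER REFERENCES instances(id),
        deleted INTEGER DEFAULT 0
    );
    INSERT INTO instances (id, deleted) VALUES (1, 0), (2, 2);
    INSERT INTO consoles (id, instance_id, deleted) VALUES (10, 1, 0), (20, 2, 0);
""")

# The proposed cleanup: soft delete any console entry whose parent
# instance is already soft deleted, so that archiving the instance no
# longer trips the referential constraint.
conn.execute("""
    UPDATE consoles SET deleted = id
    WHERE deleted = 0
      AND instance_id IN (SELECT id FROM instances WHERE deleted != 0)
""")
rows = conn.execute("SELECT id, deleted FROM consoles ORDER BY id").fetchall()
print(rows)
```

Console 20 (whose parent instance 2 is soft-deleted) gets soft-deleted by the cleanup, while console 10 (live parent) is left untouched; only orphaned child rows are affected.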


Another idea is doing it in the nova-manage db online_data_migrations 
command which should be run on upgrade. If we landed something like that 
in say Ocata, then we could remove the TODO in the archive code in Pike.


Other thoughts?

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Docs][PTL] PTL Candidacy for Docs

2016-09-14 Thread Lana Brindley
Hi everyone,

I am seeking re-election as Documentation PTL for Ocata. I have been the docs
PTL since Liberty, and together we've converted the entire docs suite to RST,
decommissioned the old DocBook infrastructure, and fundamentally changed the
way projects document their installation procedures. Ocata is a short cycle,
so I want to take this opportunity to focus inwards on our community. I would
love your support to do this in Ocata.

In Mitaka we embarked on an ambitious plan to convert our entire documentation
suite to RST, and in Newton we put the finishing touches to that project by
decommissioning the old DocBook infrastructure. With that long-term project
complete, we have gone on to develop a new publication model that allows
projects to write documentation in their own repos, and publish seamlessly to
the docs.openstack.org front page. This project goes live for the Installation
Guide and API docs with Newton, and means our documentation efforts are
continuing to scale out to represent a much greater proportion of products,
contributors, operators, and users. I would like to continue this work in
Ocata by evaluating the success of the project, and identifying more efficient
and productive pathways for users to find and use the information provided by
the docs team. I also want to explore more innovative ways for the
documentation team to work together, including a review of the speciality team
and release processes. I want to continue to foster collaboration between
developers and writers to provide the best possible documentation for the
entire OpenStack community.

During Newton, in addition to the Installation Guide changes, we focussed on
streamlining our processes, clarifying the way we operate, consolidating
guides, and general editing and tidying up. I want to continue this work in
Ocata to provide a better documentation experience for everyone: writers,
developers, operators, and users.

In the Liberty release cycle, we closed just over 600 bugs, and in Mitaka we
closed 645. With the rate of new bug creation rapidly dropping thanks to
procedural changes we made in Mitaka, our consistent effort has really paid
down a lot of our technical debt. For Newton so far, we've closed well over
400 bugs, and we have only 327 bugs in the queue as I write this, so this
positive trend continues.

I'll be making the trek to Barcelona soon for the Ocata summit, to catch up
with old friends and hopefully meet some of our newest contributors. Please be
sure to stop for a chat if you see me around. I'm also excited about the new
Project Team Gathering coming up in February; I think this will be a great
opportunity for cross-project teams like docs.

I'd love to have your support for the PTL role for Ocata. I'm looking forward
to continuing to grow the documentation team and to keep working on these
incremental improvements to content delivery.

Thanks,
Lana (@loquacities)

https://review.openstack.org/#/c/370477/

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-14 Thread Clint Byrum
Excerpts from Henry Nash's message of 2016-09-15 00:29:44 +0100:
> Jay,
> 
> I agree with your distinction - and when I am referring to rolling upgrades 
> for keystone I am referring to when you are running a cluster of keystones 
> (for performance and/or redundancy), and you want to roll the upgrade across 
> the cluster without creating downtime of the overall keystone service. Such a 
> keystone cluster deployment will be common in large clouds - and prior to 
> Newton, keystone did not support such a rolling upgrade (you had to take all 
> the nodes down, upgrade the DB and then boot them all back up). In order to 
> support such a rolling upgrade you either need to have code that can work on 
> different DB versions (either explicitly or via versioned objects), or you 
> hide the schema changes by “data synchronisation via Triggers”, which is 
> where this whole thread came from.
> 

It doesn't always need to be explicit or through versioned objects. One
can often manipulate the schema and even migrate data without disturbing
old code.
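
Clint's point can be made concrete with a purely additive change: old code
selects columns by name and never notices a new nullable column that is added
and backfilled underneath it. The snippet below is a minimal sketch under
assumed, made-up table and column names (users, email), with SQLite standing
in for a real server.

```python
# Hedged sketch: an additive schema change plus online backfill that old
# code never notices. Names here are illustrative, not any real schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

def old_code_read(conn):
    # Old release: selects only the columns it knows about.
    return conn.execute("SELECT id, name FROM users").fetchall()

# Expand phase: additive and nullable, so no table rewrite is required
# on most engines and old writers keep working.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")
# Online backfill, done in the background while old code keeps running.
conn.execute("UPDATE users SET email = name || '@example.com' "
             "WHERE email IS NULL")

print(old_code_read(conn))  # old code still works: [(1, 'alice')]
print(conn.execute("SELECT email FROM users").fetchone()[0])
```

Only once every node runs new code would a contract phase (dropping or
renaming the old column) be safe.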

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][elections][ptl] Kolla PTL Candidacy for Jeffrey Zhang

2016-09-14 Thread Hui Kang
+1 for Jeffrey Zhang
- Jeffrey is full time on Kolla
- Steven is doing an excellent job, but I would like to see some change.

Thanks.

- Hui

On Mon, Sep 12, 2016 at 3:18 AM, Jeffrey Zhang  wrote:
> Hi Everyone,
>
> I'm excited to announce my candidacy for PTL for Kolla team during the Ocata
> cycle.
>
> Kolla is a fantastic project. It brings fresh new blood to OpenStack 
> Deployment
> Management. Simplifying the lives of Operators when managing OpenStack is
> essential to OpenStack’s success and I personally believe Kolla as of Liberty
> 1.1.0 delivers on that promise. I have also deployed several Kolla production
> environments for customers without any problem. I've been a core contributor to
> Kolla since Mitaka. I am full time on the Kolla project and have been heavily
> involved with all of my waking hours[0][1].
>
> Over the Newton development cycle, we have implemented numerous new features
> and improved stability and usability. The top features are:
>
> * Introduced Bifrost and kolla-host to manage bare metal provisioning and
>   initialize the nodes before deploying OpenStack.
> * The introduction of far more services including Aodh, Bifrost, Ceilometer,
>   Cloudkitty, Multipath, Rally, Sahara, Tempest, Watcher.
> * Implemented Dockerfile customization.
> * Launched the kolla-kubernetes project.
> * More robust CI jobs
>
> As a team, the Kolla Community tested 123 nodes for Kolla using osic[2]. The
> results validate Kolla works and scales to a majority of use cases today. The
> major code paths that enable this scalability have been in place since Liberty
> which gives a good indicator of Liberty and Mitaka scalability. As a Kolla
> core reviewer and PTL self-nominee, I find this result to be highly 
> satisfying.
> One of Kolla’s many goals has been achieved: deploying OpenStack at 100+ node
> count quickly and easily.
>
> The kolla community is diversely affiliated, with a fantastic crew of
> contributors and excellent leadership within the core reviewer team.
>
> For Ocata, I'd like the project focused on these objectives:
>
> * Focus on the needs of the Kolla team.
> * Optimize the speed of reconfiguration and upgrade.
> * Implement and integrate with more driver plugins for neutron and cinder.
> * Deliver 1.0.0 of kolla-kubernetes.
> * Implement different CI jobs to test diverse scenarios. I’d like to start out
>   with some really hard CI problems such as testing real upgrades and
>   validating ceph in the CI jobs per commit.
> * Support the implementation of a monitoring stack.
>
> Finally, I know it is important that PTL time is spent on the non-technical
> problem-solving such as mentoring potential core reviewers, scheduling the
> project progress, interacting with other OpenStack projects and many other
> activities a PTL undertakes.  I’d like to work this cycle on scaling those
> activities across the core reviewer team.  I will use my personal strengths 
> and
> rely on the core reviewer team’s personal strengths to make Kolla Ocata the
> best release yet.
>
> Thank you for considering me to serve as your Kolla PTL.
>
> Regards,
> Jeffrey Zhang
>
> [0] http://stackalytics.com/?release=all=kolla=commits
> [1] http://stackalytics.com/?release=all=kolla=marks
> [2] https://review.openstack.org/352101
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-14 Thread Henry Nash
Jay,

I agree with your distinction - and when I am referring to rolling upgrades for 
keystone I am referring to when you are running a cluster of keystones (for 
performance and/or redundancy), and you want to roll the upgrade across the 
cluster without creating downtime of the overall keystone service. Such a 
keystone cluster deployment will be common in large clouds - and prior to 
Newton, keystone did not support such a rolling upgrade (you had to take all 
the nodes down, upgrade the DB and then boot them all back up). In order to 
support such a rolling upgrade you either need to have code that can work on 
different DB versions (either explicitly or via versioned objects), or you hide 
the schema changes by “data synchronisation via Triggers”, which is where this 
whole thread came from.
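
The "data synchronisation via Triggers" approach can be sketched in a few
lines: during the upgrade window the schema carries both the old and the new
column, and a trigger copies writes from the old column into the new one so
that old and new code can run side by side. The example below is purely
illustrative: SQLite stands in for MySQL/PostgreSQL, and the table/column
names (local_user, password, password_hash) are assumptions, not keystone's
actual schema.

```python
# Hedged illustration of trigger-based data sync during a schema change.
# SQLite keeps the demo self-contained; real deployments would use the
# trigger syntax of their actual database engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- old column: password; new column: password_hash (migration in flight)
    CREATE TABLE local_user (id INTEGER PRIMARY KEY,
                             password TEXT, password_hash TEXT);
    CREATE TRIGGER sync_pw AFTER INSERT ON local_user
    WHEN NEW.password_hash IS NULL
    BEGIN
        UPDATE local_user SET password_hash = NEW.password
        WHERE id = NEW.id;
    END;
""")
# Old code writes only the old column...
conn.execute("INSERT INTO local_user (password) VALUES ('x1')")
# ...but new code reading the new column still sees the value.
row = conn.execute(
    "SELECT password, password_hash FROM local_user").fetchone()
print(row)  # both columns hold the same value
```

Once every node runs the new code, the trigger and the old column can be
dropped in a later, purely contracting migration.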

Henry
> On 14 Sep 2016, at 23:08, Jay Pipes  wrote:
> 
> On 09/01/2016 05:29 AM, Henry Nash wrote:
>> So as the person who drove the rolling upgrade requirements into
>> keystone in this cycle (because we have real customers that need it),
>> and having first written the keystone upgrade process to be
>> “versioned object ready” (because I assumed we would do this the same
>> as everyone else), and subsequently re-written it to be “DB Trigger
>> ready”…and written migration scripts for both these cases for the (in
>> fact very minor) DB changes that keystone has in Newton…I guess I
>> should also weigh in here :-)
> 
> Sorry for delayed response. PTO and all... I'd just like to make a 
> clarification here. Henry, you are not referring to *rolling upgrades* but 
> rather *online database migrations*. There's an important distinction between 
> the two concepts.
> 
> Online schema migrations, as discussed in this thread, are all about 
> minimizing the time that a database server is locked or otherwise busy 
> performing the tasks of changing SQL schemas and moving the underlying stored 
> data from their old location/name to their new location/name. As noted in 
> this thread, there's numerous ways of reducing the downtime experienced 
> during these data and schema migrations.
> 
> Rolling upgrades are not the same thing, however. What rolling upgrades refer 
> to is the ability of a *distributed system* to have its distributed component 
> services running different versions of the software and still be able to 
> communicate with the other components of the system. This time period during 
> which the components of the distributed system may run different versions of 
> the software may be quite lengthy (days or weeks long). The "rolling" part of 
> "rolling upgrade" refers to the fact that in a distributed system of 
> thousands of components or nodes, the upgraded software must be "rolled out" 
> to those thousands of nodes over a period of time.
> 
> Glance and Keystone do not participate in a rolling upgrade, because Keystone 
> and Glance do not have a distributed component architecture. Online data 
> migrations will reduce total downtime experienced during an *overall upgrade 
> procedure* for an OpenStack cloud, but Nova, Neutron and Cinder are the only 
> parts of OpenStack that are going to participate in a rolling upgrade because 
> they are the services that are distributed across all the many compute nodes.
> 
> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-14 Thread Bashmakov, Alexander
> Glance and Keystone do not participate in a rolling upgrade, because
> Keystone and Glance do not have a distributed component architecture.
> Online data migrations will reduce total downtime experienced during an
> *overall upgrade procedure* for an OpenStack cloud, but Nova, Neutron and
> Cinder are the only parts of OpenStack that are going to participate in a 
> rolling
> upgrade because they are the services that are distributed across all the
> many compute nodes.

Hi Jay,
I'd like to better understand why your definition of rolling upgrades excludes 
Glance and Keystone. Granted, they don't run multiple disparate components over 
distributed systems; however, they can still run the same service on multiple 
distributed nodes. So a rolling upgrade can still be applied on a large cloud 
that has, for instance, 50 Glance nodes. In this case, different versions of the 
same service will run on different nodes simultaneously.
Regards,
Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-sfc] TAP functionality

2016-09-14 Thread Cathy Zhang
Hi Subrahmanyam,

Yes, we can add this support. Would you like to join the next project IRC meeting 
to discuss and sync up with the team on the requirement and the tapaas 
integration with networking-sfc?

Thanks,
Cathy
From: Subrahmanyam Ongole [mailto:song...@oneconvergence.com]
Sent: Wednesday, September 14, 2016 4:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][networking-sfc] TAP functionality

Hi Cathy

Is the sfc project going to address TAP/Monitor type of packet redirection, where a 
copy of the packet is made and sent to the specified port? Is it targeted for 
any specific release - N, O etc?

If it is not available in sfc, does tapaas solution work with sfc? Any thoughts 
you could share? Thanks

--

Thanks
(Subrahmanyam Ongole)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][networking-sfc] TAP functionality

2016-09-14 Thread Subrahmanyam Ongole
Hi Cathy

Is the sfc project going to address TAP/Monitor type of packet redirection,
where a copy of the packet is made and sent to the specified port? Is it
targeted for any specific release - N, O etc?

If it is not available in sfc, does tapaas solution work with sfc? Any
thoughts you could share? Thanks

-- 

Thanks
(Subrahmanyam Ongole)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano][ptl] Announcing my candidacy for PTL for Murano for the Ocata cycle

2016-09-14 Thread Kirill Zaitsev
Hello everyone, here is my announcement to run for Murano PTL for Ocata.

It has been my pleasure to serve as Murano PTL in Newton and I would like to 
offer my services to the team and the community once more. Looking back at all 
the challenges and achievements of Newton cycle I feel very proud to be part of 
Murano. We’ve made a great step forward in integrating with glare and have set 
up (now green) glare-integration jobs for nearly every murano component. Work 
on both multi-region apps and app development framework took a leap forward and 
they’re nearly finished. We’ve started work on supporting and documenting 
installation of murano from RDO and UCA. And those are just the tip of the 
iceberg.

Here are few challenges and changes I’d like to implement in Ocata

* Switch murano to ‘release-with-intermediary’ model. Probably the biggest 
change on the list. The rationale behind this change is to make releases more 
frequent and smaller in size, to give our users access to new features faster. 
Release automation through openstack/releases repository has dramatically 
reduced the cost of making a release and would allow us to concentrate more on 
features, release often and get user feedback sooner. A secondary activity here 
would be to allow current murano to support installations of a previous version 
of OpenStack. This is already the case API-wise, but might pose a certain 
challenge code-wise.

* Continue work and support of murano packages for major distributions. We have 
awesome packages for Debian (thanks zigo!), but we also need to have up-to-date 
UCA and RDO packages. This will be twice as important with the new release 
model. Furthermore we’d need to incorporate these installation guides into our 
documentation. The final goal here is to make them the default way to 
install murano.

* Documentation. As a continuation of #2 — there’s quite some work we need to 
do regarding our docs. App Framework, Garbage Collection, Multi-region apps, 
Installation from packages, YAQL library (even though it’s not part of murano) 
— all these features need some documentation love shed onto them.

* Continue with the App Catalog and Glare integrations. We’ve had some progress 
in both directions, but there is still a lot to be done. Hopefully, with glare 
moving to a separate repository and the AppCatalog-Glare integration underway, 
we’ll be able to finally switch to the v1 Glare API in Ocata.

* We’re halfway there with the OSC integration and I’d really love to see it 
finished during Ocata and finally release the 1.0.0 client as soon as that is 
done. =)

Regardless of the election result — I’m looking forward to Ocata and would try 
my best to help Murano and the community around it thrive.

Cheers, Kirill (kzaitsev_ws)

-- 
Kirill Zaitsev
Murano Project Tech Lead
Software Engineer at
Mirantis, Inc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] App Catalog IRC meeting Thursday September 15th

2016-09-14 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for September 15th
at 17:00UTC in #openstack-meeting-3

The agenda can be found here, and please add to it if you want to discuss
something with the Community App Catalog team:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Tomorrow we will be talking more about the next steps we will be
taking in implementing GLARE as a back-end for the Community App
Catalog.  We are very close to merging a bunch of patches that will
make this possible, which will be a big benefit to the App Catalog.

Hope to see you there tomorrow!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cue][qa] Status of OpenStack CI jobs

2016-09-14 Thread Ken'ichi Ohmichi
Hi Cue-team,

As shown at http://status.openstack.org/openstack-health/#/ , the cue gate jobs
continue failing 100% of the time.
What is the current status of the development?
Hopefully the job will become stable for smooth development.

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Clark Boylan
On Wed, Sep 14, 2016, at 10:01 AM, Sean Dague wrote:
> On 09/14/2016 12:06 PM, Roman Podoliaka wrote:
> > Hmm, looks like we now run more testr workers in parallel (8 instead of 4):
> > 
> > http://logs.openstack.org/76/335676/7/check/gate-nova-python34-db/6841fce/console.html.gz
> > http://logs.openstack.org/62/369862/3/check/gate-nova-python27-db-ubuntu-xenial/2784de9/console.html
> > 
> > On my machine running Nova migration tests against MySQL is much
> > slower with 8 workers than with 4 due to disk IO (it's HDD). When they
> > time out (after 320s) I see the very same TimeoutException and
> > IndexError (probably something messes up with TimeoutException up the
> > stack).
> 
> Yes, by default testr runs with the number of workers matching the # of
> cpus on the target. I think all our cloud providers are now 8 cpu
> guests. So unit / functional tests are running 8 way. That's been true
> for quite a while IIRC.

OSIC is 4vcpu and the others have 8vcpu.

Clark
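
For hosts where 8-way parallelism saturates the disk, the worker count does
not have to be auto-detected; testr accepts an explicit concurrency override
on the command line. The default-vs-override behaviour described in this
thread is roughly the following (an illustrative sketch, not testrepository's
actual source):

```python
# Hedged sketch of a test runner's default worker-count selection: match
# the host CPU count unless an explicit concurrency override is given.
import multiprocessing

def pick_worker_count(concurrency=0):
    """0 means 'auto': one worker per CPU, like testr's default."""
    if concurrency > 0:
        return concurrency
    try:
        return multiprocessing.cpu_count()
    except NotImplementedError:
        return 1  # conservative fallback on exotic platforms

print(pick_worker_count(4))      # explicit cap, e.g. for slow-disk hosts
print(pick_worker_count() >= 1)  # auto-detected, always at least one
```

Pinning the count (e.g. to 4) on an HDD-backed machine would avoid the IO
thrash Roman describes, at the cost of slower runs on bigger boxes.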

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Alan Pevec
> Oslo.db 4.13.3 did hit the scene about the time this showed up. So I
> think we need to strongly consider blocking it and revisiting these
> issues post newton.

So that means reverting all stable/newton changes; previous 4.13.x
releases have already been blocked: https://review.openstack.org/365565
How would we proceed, do we need to revert all backports on stable/newton?

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Announcing my candidacy for PTL of the Ocata cycle

2016-09-14 Thread Diana Clarke
Fantastic news! I had hoped you'd run again.

While I haven't been around Nova all that long, I've really enjoyed
the approachable & jovial tone you bring to #openstack-nova.

And your work ethic speaks for itself. Thanks Matt!

--diana

On Wed, Sep 14, 2016 at 5:38 PM, Matt Riedemann
 wrote:
>
> Hi everyone,
>
> This is my self-nomination to continue running as Nova PTL for the Ocata 
> cycle.
>
> Looking back at Newton, the Nova team accomplished a lot. Random things that
> stick out for me:
>
> * Cells v2 in a single-cell deployment plus CI jobs.
> * Placement API for tracking quantitative resources on compute nodes.
> * Dedicated voting live migration CI job.
> * os-vif integration for libvirt (linuxbridge and OVS backends).
> * Glance v2 integration.
> * nova-network deprecation (for real this time!)
> * Server tags support (v2.26).
> * Virtual device tags for libvirt and hyper-v (v2.32).
> * Proxy API deprecation (v2.36).
> * Get me a network (v2.37).
> * API policy defaults in code.
> * api-ref docs in tree and massively cleaned up.
> * Continued cleanup and documentation of the configuration options.
> * Ironic multiple compute host support.
> * Ironic multiple tenant networking support.
> * Libvirt live migration post-copy mode.
> * Vendordata (version 2) API for the metadata service.
> * Several notifications are now versioned.
> * os-brick + oslo.privsep support.
> * Improved third party CI for NFV (Intel and Mellanox).
> * Deferred fixed IP allocation for Neutron ports.
> * gate-tempest-dsvm-lvm job in the experimental queue.
> * plus a lot more (100 approved blueprints, 64 complete or partially complete)
>
> I think we have some momentum going into Ocata, which is a shorter release, to
> continue some of the work from Newton. My personal priorities for Ocata are:
>
> * Continue work on cells v2, specifically around multi-cell support and making
>   cells v2 required in Nova deployments for the Ocata release.
> * Continue work on the placement API, including making it required for Nova
>   deployments by the Ocata release. There will also be work on modeling
>   capabilities for the placement service.
> * The libvirt imagebackend refactor which is a prerequisite for implementing
>   libvirt storage pools is going to need focus in Ocata. This was the one
>   priority from Newton that did not really "make it" and I want to come up 
> with
>   a plan to push that forward, which probably includes dedicated review focus,
>   improved test coverage (non-Tempest integration testing), and milestones to
>   make sure we are staying on track.
> * Discoverable API support which builds off the policy in code and 
> capabilities
>   modeling work.
> * There were several blueprints which were close in Newton but missed the cut
>   due to the non-priority feature freeze and I think we can get those in early
>   in Ocata.
> * Python 3 support since that's a cross-project OpenStack effort for Ocata.
> * We will need to start taking a deeper look at improving workflows between
>   Nova/Cinder and Nova/Neutron. For Cinder this means the simplified API stack
>   that John Griffith has been POCing which is a prerequisite for volume
>   multiattach support. For Neutron this means improved coordination of tasks
>   like during live migration. John Garbutt has already started working with 
> the
>   Neutron team on this.
>
> Like in Newton, I would like to restrict spec reviews mainly to re-approvals
> early so that we can close out things that are nearly already ready to go
> before accepting new work, especially with the shorter cycle.
>
> I will continue to foster the subteams that are working and holding meetings 
> in
> Nova. I think that has been going well and helps focus efforts which do not
> have a dedicated core involved on a daily basis. I want to also continue doing
> regular status checks in the mailing list and recaps of events, be those
> summit sessions or even hangout sessions, so that the entire team is on the
> same page and we have clear communication.
>
> This is definitely not a one person show and I owe a lot to the people working
> in Nova every day. As I said in my Newton nomination I want to be PTL to help
> manage the project forward in Ocata and keep the development team focused on
> getting work done, but I definitely can not and will not do that alone.
>
> Thanks for your consideration.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-14 Thread Jay Pipes

On 09/01/2016 05:29 AM, Henry Nash wrote:

So as the person who drove the rolling upgrade requirements into
keystone in this cycle (because we have real customers that need it),
and having first written the keystone upgrade process to be
“versioned object ready” (because I assumed we would do this the same
as everyone else), and subsequently re-written it to be “DB Trigger
ready”…and written migration scripts for both these cases for the (in
fact very minor) DB changes that keystone has in Newton…I guess I
should also weigh in here :-)


Sorry for delayed response. PTO and all... I'd just like to make a 
clarification here. Henry, you are not referring to *rolling upgrades* 
but rather *online database migrations*. There's an important 
distinction between the two concepts.


Online schema migrations, as discussed in this thread, are all about 
minimizing the time that a database server is locked or otherwise busy 
performing the tasks of changing SQL schemas and moving the underlying 
stored data from their old location/name to their new location/name. As 
noted in this thread, there's numerous ways of reducing the downtime 
experienced during these data and schema migrations.


Rolling upgrades are not the same thing, however. What rolling upgrades 
refer to is the ability of a *distributed system* to have its 
distributed component services running different versions of the 
software and still be able to communicate with the other components of 
the system. This time period during which the components of the 
distributed system may run different versions of the software may be 
quite lengthy (days or weeks long). The "rolling" part of "rolling 
upgrade" refers to the fact that in a distributed system of thousands of 
components or nodes, the upgraded software must be "rolled out" to those 
thousands of nodes over a period of time.


Glance and Keystone do not participate in a rolling upgrade, because 
Keystone and Glance do not have a distributed component architecture. 
Online data migrations will reduce total downtime experienced during an 
*overall upgrade procedure* for an OpenStack cloud, but Nova, Neutron 
and Cinder are the only parts of OpenStack that are going to participate 
in a rolling upgrade because they are the services that are distributed 
across all the many compute nodes.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Announcing my candidacy for PTL of the Ocata cycle

2016-09-14 Thread Matt Riedemann

Hi everyone,

This is my self-nomination to continue running as Nova PTL for the Ocata 
cycle.


Looking back at Newton, the Nova team accomplished a lot. Random things that
stick out for me:

* Cells v2 in a single-cell deployment plus CI jobs.
* Placement API for tracking quantitative resources on compute nodes.
* Dedicated voting live migration CI job.
* os-vif integration for libvirt (linuxbridge and OVS backends).
* Glance v2 integration.
* nova-network deprecation (for real this time!)
* Server tags support (v2.26).
* Virtual device tags for libvirt and hyper-v (v2.32).
* Proxy API deprecation (v2.36).
* Get me a network (v2.37).
* API policy defaults in code.
* api-ref docs in tree and massively cleaned up.
* Continued cleanup and documentation of the configuration options.
* Ironic multiple compute host support.
* Ironic multiple tenant networking support.
* Libvirt live migration post-copy mode.
* Vendordata (version 2) API for the metadata service.
* Several notifications are now versioned.
* os-brick + oslo.privsep support.
* Improved third party CI for NFV (Intel and Mellanox).
* Deferred fixed IP allocation for Neutron ports.
* gate-tempest-dsvm-lvm job in the experimental queue.
* plus a lot more (100 approved blueprints, 64 complete or partially 
complete)


I think we have some momentum going into Ocata, which is a shorter release, to
continue some of the work from Newton. My personal priorities for Ocata are:

* Continue work on cells v2, specifically around multi-cell support and making
  cells v2 required in Nova deployments for the Ocata release.
* Continue work on the placement API, including making it required for Nova
  deployments by the Ocata release. There will also be work on modeling
  capabilities for the placement service.
* The libvirt imagebackend refactor which is a prerequisite for implementing
  libvirt storage pools is going to need focus in Ocata. This was the one
  priority from Newton that did not really "make it" and I want to come up with
  a plan to push that forward, which probably includes dedicated review focus,
  improved test coverage (non-Tempest integration testing), and milestones to
  make sure we are staying on track.
* Discoverable API support which builds off the policy in code and capabilities
  modeling work.
* There were several blueprints which were close in Newton but missed the cut
  due to the non-priority feature freeze and I think we can get those in early
  in Ocata.
* Python 3 support since that's a cross-project OpenStack effort for Ocata.
* We will need to start taking a deeper look at improving workflows between
  Nova/Cinder and Nova/Neutron. For Cinder this means the simplified API stack
  that John Griffith has been POCing which is a prerequisite for volume
  multiattach support. For Neutron this means improved coordination of tasks
  like during live migration. John Garbutt has already started working with the
  Neutron team on this.

Like in Newton, I would like to restrict spec reviews mainly to re-approvals
early so that we can close out things that are nearly already ready to go
before accepting new work, especially with the shorter cycle.

I will continue to foster the subteams that are working and holding meetings
in Nova. I think that has been going well and helps focus efforts which do
not have a dedicated core involved on a daily basis. I want to also continue
doing regular status checks in the mailing list and recaps of events, be
those summit sessions or even hangout sessions, so that the entire team is
on the same page and we have clear communication.

This is definitely not a one person show and I owe a lot to the people
working in Nova every day. As I said in my Newton nomination I want to be
PTL to help manage the project forward in Ocata and keep the development
team focused on getting work done, but I definitely can not and will not do
that alone.

Thanks for your consideration.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Bug smash day: Wed, Sep 14

2016-09-14 Thread Jim Rollenhagen
On Wed, Sep 14, 2016 at 5:52 AM, Dmitry Tantsur  wrote:
> On 09/12/2016 08:00 PM, Dmitry Tantsur wrote:
>>
>> Hi all!
>>
>> On Wednesday, Sep 14, 2016 we will try to clean up our bug list, triage,
>> separate RFEs from real bugs, etc. Then we'll find out which bugs must
>> be fixed for Newton, and, well, fix them :)
>>
>> We are starting as soon as the first person joins and stop as soon as
>> the last person drops. We will coordinate our efforts on the
>> #openstack-ironic channel on Freenode. Please feel free to join at your
>> convenience.
>>
>> See you there!
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> We're starting!
>
> Let's use this etherpad to coordinate what we are doing:
> https://etherpad.openstack.org/p/ironic-bug-smash

Most people have dropped off by now. This was a smashing success! (heh)

We went from 255 to 185 bugs in ironic, mostly by cleaning up a bunch of things.

I also reviewed all RFEs again and got an idea of where we're at with those. We
approved some, but most need a spec or have a spec in review. Maybe we can
do a spec review jam after Newton is out the door. :)

Thanks to everyone that helped!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Stack trace returned to end user bug

2016-09-14 Thread Aimee Ukasick
Continuing my investigation:

1) I used an "older" installation of Congress on OpenStack.
The oslo.messaging version is 4.6.2.dev9, whereas the version on my
brand new DevStack installation is 5.10.0.
I ran the same CLI policy commands and no traceback was returned.

2) With a new devstack installation and the latest Congress code, I
confirmed that API and CLI calls to datasource and schema do not return a
traceback when invalid IDs are passed. "datasource" appears to be the only
one of the three displaying user-friendly error information as expected.

3) The error/not found handling bug seems to be across all the Policy
CLI and API calls.

I put more info and examples into the Launchpad bug entry:
https://bugs.launchpad.net/congress/+bug/1620868

(This is a bug where I would really love to be able to remotely debug
using PyCharm instead
of the pdb command line debugger.)

I don't know oslo.messaging or the Congress code base well enough to
know exactly why the Policy error
response is different from datasource and schema. I will continue to
dig through the code and hopefully
find a solution.
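For what it's worth, the response shaping Tim describes below maps onto the
pattern oslo.messaging exposes through its expected_exceptions decorator:
the endpoint marks certain failures as anticipated so the server can skip
the traceback and hand only the original error back to the caller. The
sketch below is a simplified stand-alone illustration of that pattern, not
the actual oslo.messaging implementation, and PolicyNotFound/get_policy are
hypothetical names:

```python
# Simplified sketch of the "expected exceptions" pattern; the real
# decorator lives in oslo.messaging as oslo_messaging.expected_exceptions.
import functools


class ExpectedException(Exception):
    """Wrapper telling the RPC server this failure is anticipated."""
    def __init__(self, original):
        super().__init__(str(original))
        self.original = original


def expected_exceptions(*exc_types):
    """Wrap the listed exception types so the dispatcher can unwrap them."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exc_types as exc:
                raise ExpectedException(exc) from exc
        return wrapper
    return decorator


class PolicyNotFound(Exception):
    pass


@expected_exceptions(PolicyNotFound)
def get_policy(name):
    # Hypothetical endpoint method on the policy node.
    raise PolicyNotFound("no policy named %r" % name)


# The dispatcher would catch ExpectedException, skip the traceback log,
# and return only the original error to the API node.
try:
    get_policy("alpha")
except ExpectedException as exc:
    print(type(exc.original).__name__)  # -> PolicyNotFound
```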

aimee

On Sun, Sep 11, 2016 at 2:30 PM, Tim Hinrichs  wrote:
> The Api node on Oslo-messaging sends a request to the policy node asking for
> say policy alpha. The policy node raises an exception, which Oslo returns to
> the API node. I'd imagine there is some way to have the API node extract the
> original exception or tell Oslo to return only the original exception.
>
> Tim
> On Thu, Sep 8, 2016 at 2:04 PM Aimee Ukasick 
> wrote:
>>
>> All --
>>
>> https://bugs.launchpad.net/congress/+bug/1620868
>>
>> I stepped through the code with the pdb. I can't find anything wrong
>> in the CongressException
>> code.
>>
>> The traceback is being added by oslo_messaging/rpc/server.py in
>> _process_incoming. The call that is throwing the exception is:
>> res = self.dispatcher.dispatch(message), but I haven't
>> determined why.
>>
>> https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/rpc/server.py#L134
>>
>> https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/rpc/server.py#L142
>>
>> console output here:  http://paste.openstack.org/show/569334/
>>
>>
>> I can't figure out why oslo.messaging is throwing an exception, so I
>> assume I should go into the API and CLI code and prevent the traceback
>> from being displayed.
>>
>> Thoughts? Suggestions? Epiphanies?
>>
>> aimee
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Ocata Design Summit - Proposed slot allocation

2016-09-14 Thread Emilien Macchi
On Wed, Sep 14, 2016 at 1:51 PM,   wrote:
>> Also if you don't plan to use all of your
>> allocated slots, let us know so that we can propose them to other teams.
>
> Just so that we are not forgotten (in case there is some space left), the
> storlets dev team would greatly appreciate 2fb and 2-3wr.
> Many thanks in advance!
> Eran

In PuppetOpenStack we have:
1fb 2wr

And I'm really not sure we'll use them all:
https://etherpad.openstack.org/p/ocata-puppet

As you can see, we don't have much topics now, so I'm quite sure we
could give one wr to you.
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] focus and Newton release

2016-09-14 Thread Emilien Macchi
Folks,

We're currently dealing with multiple CI issues, but almost all
blockers are under control.
Until the release, the whole team should focus on what is targeted here:
https://launchpad.net/tripleo/+milestone/newton-rc1
https://launchpad.net/tripleo/+milestone/newton-rc2

1) Finish landing patches that are in Newton blueprints: Custom roles,
Ops tools and Manila/CephFS are the top priorities at this time.
2) Work on Newton bugs, critical and high. Note: everything that
concerns Upgrades is critical or high, so it's very important to
finish this work.

Because of remaining work, we won't tag RC1 this week but will wait until
the week of September 26. That week, we'll tag RC1 and stable/newton will
be created. Hopefully by then we'll have solved most of our blockers and
merged the last patches related to Newton features.
From there, we'll have to backport bugfixes (and upgrade work) from
master to stable/newton.

See more details about release schedule:
https://releases.openstack.org/newton/schedule.html#n-finalrc

Any feedback or question is more than welcome,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Extending python-dracclient to fetch System/iDrac resources

2016-09-14 Thread Anish Bhatt
That would be pretty easy to do. I did not pick InstanceId simply because
the code wasn't using it right now. Whatever the choice, the chosen key is
simply used as a key in the results dict so all of this would be
transparent to the users anyways.
-Anish

On Wed, Sep 14, 2016 at 11:33 AM,  wrote:

> I vote for parsing everything the same way.
>
>
>
> Note that the unique identifier for these attributes is InstanceID.  See
> https://www.vmware.com/support/developer/cim-sdk/
> smash/u2/ga/apirefdoc/CIM_BIOSString.html for example.  I think it’s fine
> to include GroupID and whatever attributes are needed, but InstanceID
> should be used as the key.  We shouldn’t be making up other keys to use
> since we already have a perfectly good one that’s being supplied.
>
>
>
> Chris
>
> -Original Message-
> From: Miles Gould [mailto:mgo...@redhat.com]
> Sent: Wednesday, September 14, 2016 7:46 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [ironic] Extending python-dracclient to fetch
> System/iDrac resources
>
> On 13/09/16 20:30, Anish Bhatt wrote:
> > Is parsing iDrac/System attributes differently from BIOS attributes
> > the correct approach here (this will also make it match racadm
> > output), or should I be changing all Attributes to be parsed the same
> way ?
>
> "Parse everything the same way" sounds like the simpler and less brittle
> option; is there a good reason *not* to consider GroupID for BIOS
> attributes?
>
> Miles
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
One socket to bind them all
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Extending python-dracclient to fetch System/iDrac resources

2016-09-14 Thread Christopher.Dearborn
I vote for parsing everything the same way.

Note that the unique identifier for these attributes is InstanceID.  See 
https://www.vmware.com/support/developer/cim-sdk/smash/u2/ga/apirefdoc/CIM_BIOSString.html
 for example.  I think it's fine to include GroupID and whatever attributes are 
needed, but InstanceID should be used as the key.  We shouldn't be making up 
other keys to use since we already have a perfectly good one that's being 
supplied.
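As a sketch of the point above: whichever identifier is picked only decides
the key of the results dict, so the choice stays transparent to callers. The
attribute data and helper name below are illustrative, not actual
python-dracclient code:

```python
# Hypothetical parsed attributes, keyed by the unique InstanceID field.
raw_attributes = [
    {"InstanceID": "BIOS.Setup.1-1:ProcVirtualization",
     "GroupID": "ProcSettings", "CurrentValue": "Enabled"},
    {"InstanceID": "BIOS.Setup.1-1:BootMode",
     "GroupID": "BiosBootSettings", "CurrentValue": "Uefi"},
]


def key_attributes(attributes, key="InstanceID"):
    """Index parsed attributes by a unique identifier field."""
    return {attr[key]: attr for attr in attributes}


by_instance_id = key_attributes(raw_attributes)
print(by_instance_id["BIOS.Setup.1-1:BootMode"]["CurrentValue"])  # -> Uefi
```

GroupID can still ride along inside each value; only the lookup key changes.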

Chris

-Original Message-
From: Miles Gould [mailto:mgo...@redhat.com]
Sent: Wednesday, September 14, 2016 7:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ironic] Extending python-dracclient to fetch 
System/iDrac resources

On 13/09/16 20:30, Anish Bhatt wrote:
> Is parsing iDrac/System attributes differently from BIOS attributes
> the correct approach here (this will also make it match racadm
> output), or should I be changing all Attributes to be parsed the same way ?

"Parse everything the same way" sounds like the simpler and less brittle 
option; is there a good reason *not* to consider GroupID for BIOS attributes?

Miles

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]tempest test case for force detach volume

2016-09-14 Thread Ken'ichi Ohmichi
Hi Chaoyi,

That is a nice point.
Now Tempest has tests for some volume v2 action APIs, but they don't
include os-force_detach.
The available action APIs in Tempest are just two, os-set_image_metadata
and os-unset_image_metadata:
https://github.com/openstack/tempest/blob/master/tempest/services/volume/v2/json/volumes_client.py#L27
That is less coverage than I expected after comparing with the API reference.

Patches adding the corresponding API tests are welcome if you're interested :-)
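For anyone picking this up, here is a hedged sketch of the request body such
a test client method would need to build for POST volumes/{id}/action,
based on the v2 API reference linked above; the helper name and the
attachment id value are assumptions:

```python
import json


def build_force_detach_body(attachment_id, connector=None):
    """Build the volume action body for the os-force_detach action."""
    return json.dumps({
        "os-force_detach": {
            "attachment_id": attachment_id,
            "connector": connector,
        }
    })


body = build_force_detach_body("attachment-uuid")  # illustrative id
print(json.loads(body)["os-force_detach"]["attachment_id"])
```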

Thanks
Ken Ohmichi

---


2016-09-13 17:58 GMT-07:00 joehuang :
> Hello,
>
> Is there any tempest test case for the "os-force_detach" action to force
> detach a volume? I didn't find such a test case either in the repository
> https://github.com/openstack/cinder/tree/master/cinder/tests/tempest
> or in https://github.com/openstack/tempest
>
> The API link is:
> http://developer.openstack.org/api-ref-blockstorage-v2.html#forcedetachVolume
>
> Best Regards
> Chaoyi Huang(joehuang)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Ocata Design Summit - Proposed slot allocation

2016-09-14 Thread eran

> Also if you don't plan to use all of your
> allocated slots, let us know so that we can propose them to other teams.

Just so that we are not forgotten (in case there is some space left),  
the storlets dev team would greatly appreciate 2fb and 2-3wr.

Many thanks in advance!
Eran


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Sean Dague
On 09/14/2016 12:06 PM, Roman Podoliaka wrote:
> Hmm, looks like we now run more testr workers in parallel (8 instead of 4):
> 
> http://logs.openstack.org/76/335676/7/check/gate-nova-python34-db/6841fce/console.html.gz
> http://logs.openstack.org/62/369862/3/check/gate-nova-python27-db-ubuntu-xenial/2784de9/console.html
> 
> On my machine running Nova migration tests against MySQL is much
> slower with 8 workers than with 4 due to disk IO (it's HDD). When they
> time out (after 320s) I see the very same TimeoutException and
> IndexError (probably something messes up with TimeoutException up the
> stack).

Yes, by default testr runs with the number of workers matching the # of
cpus on the target. I think all our cloud providers are now 8 cpu
guests. So unit / functional tests are running 8 way. That's been true
for quite a while IIRC.
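Sean's point can be sketched like this: default concurrency follows the CPU
count, and an IO-bound suite (like Roman's HDD-backed migration tests) might
want to cap it. This is illustrative only, not how testr itself is
configured:

```python
import os


def pick_workers(cap=4):
    """Default to the CPU count, but never exceed the IO-friendly cap."""
    return min(os.cpu_count() or 1, cap)


# On an 8-cpu cloud guest this returns 4 instead of 8.
print(pick_workers())
```

In practice the equivalent knob is testr's `--concurrency` option.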

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ptl] Self non-nomination for Kolla PTL for Ocata cycle

2016-09-14 Thread Antoni Segura Puimedon
On Mon, Sep 12, 2016 at 7:04 PM, Steven Dake (stdake)  wrote:
> To the OpenStack Community,
>
>
>
> Consider this email my self non-nomination for PTL of Kolla for
> the coming Ocata release.  I let the team know in our IRC team meeting
> several months ago I was passing the baton on at the conclusion of Newton,
> but I thought the broader OpenStack community would appreciate the
> information.
>
> I am super proud of what our tiny struggling community produced starting
> 3 years ago with only 3 people to the strongly emergent system that is
> Kolla, with over 467 total contributors [1] since inception and closing in
> on 5,000 commits today.
>
> In my opinion, the Kolla community is well on its way to conquering the
> last great challenge OpenStack faces: Making operational deployment
> management (ODM) of OpenStack cloud platforms straight-forward, easy, and
> most importantly cost effective for the long term management of OpenStack.
>
> The original objective the Kolla community set out to accomplish, deploying
> OpenStack in containers at 100 node scale, has been achieved as proven by
> this review [2].  In these 12 scenarios, we were able to deploy with 3
> controllers, 100 compute nodes, and 20 storage nodes using Ceph for all
> storage and run rally as well as tempest against the deployment.
>
> Kolla is _No_Longer_a_Toy_ and _has_not_been_ since Liberty 1.1.0.
>
> I have developed a strong leadership pipeline and expect several candidates
> to self-nominate.  I wish all of them the best in the future PTL elections.
>
> Finally, I would like to thank all of the folks that have supported Kolla’s
> objectives.  If I listed the folks individually this email would be far too
> long, but you know who you are :) Thank you for placing trust in my
> judgement.

Thank you Steven! You and the Kolla people have always been around when
Kuryr needed help and/or guidance.

We appreciate it a lot.
>
> It has been a pleasure to serve as your leader.
>
> Regards
>
> -steak
>
> [1] http://stackalytics.com/report/contribution/kolla-group/2000
>
> [2] https://review.openstack.org/#/c/352101/
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Sean Dague
On 09/14/2016 11:23 AM, Thomas Goirand wrote:
> On 09/14/2016 03:15 PM, Sean Dague wrote:
>> I noticed the following issues happening quite often now in the
>> opportunistic db tests for nova -
>> http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22
>>
>>
>> It looks like some race has been introduced where the various db
>> connections are not fully isolated from each other like they used to be.
>> The testing magic for this is buried pretty deep in oslo.db.
>>
>> Olso.db 4.13.3 did hit the scene about the time this showed up. So I
>> think we need to strongly consider blocking it and revisiting these
>> issues post newton.
>>
>>  -Sean
> 
> Blocking a version of oslo.db just because your 6th sense tells you
> it may have introduced some issues isn't the best way to decide. We need
> a better investigation than just this, to find the root cause.
> 
> Cheers,
> 
> Thomas Goirand (zigo)

Sure, but investigations all start with hypothesis, which is then proved
/ disproved, and we move on. Given that we're in RC week, and we've had
library releases after the freeze point, it's reasonable to see if
rolling back to previous versions mitigate things.

Root cause analysis may take more time, and need different folks, than
the RC period allows. So given imperfect information we need to consider
what the safest path forward is.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][python3] use of six.iteritems()

2016-09-14 Thread Terry Wilson
On Sep 13, 2016 10:42 PM, "Kevin Benton"  wrote:
>
> >All performance matters. All memory consumption matters. Being wasteful
over a purely aesthetic few extra characters of code is silly.
>
> Isn't the logical conclusion of this to write everything in a different
language? :)

I'm up for it if you are. :D
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] *ExtraConfig, backwards compatibility & deprecation

2016-09-14 Thread Giulio Fidente

On 09/14/2016 05:59 PM, Giulio Fidente wrote:

On 09/14/2016 02:31 PM, Steven Hardy wrote:

Related to this is the future of all of the per-role customization
interfaces.  I'm thinking these don't really make sense to maintain
long-term now we have the new composable services architecture, and it
would be better if we can deprecate them and move folks towards the
composable services templates instead?


my experience is that the ExtraConfig interfaces have been useful to
provide arbitrary hiera and class includes

I wonder if we could ship by default some roles parsing those parameters?


thinking more about it, the *ExtraConfig interfaces also offer a simple 
mechanism to *override* any hiera setting we push via the templates ... 
which isn't easy to achieve with roles


a simple short-term solution could be to merge ExtraConfig in the $role 
mapped_data, thoughts?


while to move toward a more container-aware setup we could probably have 
some $serviceExtraConfig param mapped into each service?

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] PTL candidacy

2016-09-14 Thread Alex Schultz
I would like to nominate myself for the PTL role in the Puppet OpenStack
team for the Ocata release cycle.

I have been involved in the puppet team since Liberty as both a contributor
and a downstream consumer.  It has been a pleasure working with the
community to improve and make the Puppet OpenStack projects one of the most
mature ways to deploy OpenStack.

For the Ocata cycle, I believe we have a few places to continue to focus.

- Continuing the work that Emilien has promoted, we need to continue to
  focus on CI integration both upstream and downstream.
- We need to ensure that the new modules in the project continue to mature
  and are included in CI.
- Improving documentation around established patterns and best practices.
- Ensuring the modules are updated for Ocata changes.

I look forward to working with all of you. I'm open to any suggestions or
ideas for additional places to improve our processes and modules.



Thanks,

Alex Schultz

irc: mwhahaha

https://review.openstack.org/370279
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] warning about PBR issue for kolla operators

2016-09-14 Thread Doug Hellmann
Excerpts from Steven Dake (stdake)'s message of 2016-09-13 23:33:04 +:
> Hey folks,
> 
> The quickstart guide was modified as a result of a lot of painful debugging 
> over the last cycle approximately a month ago.  The only solution available 
> to us was to split the workflow into an operator workflow (working on stable 
> branches) and a developer workflow (working on master).  We recognize 
> operators are developers and the docs indicate as much.  Many times operators 
> want to work with master as they are evaluating Newton and planning to place 
> it into production.
> 
> I’d invite folks using master with the pip install ./ method to have a 
> re-read of the quickstart documentation. The documentation was changed in 
> subtle ways (with warning and info boxes) but folks that have been using 
> Kolla prior to the quckstart change may be using kolla in the same way the 
> quickstart previously recommended.  Folks tend to get jammed up on this issue 
> – we have helped 70-100 people work past this problem before we finally 
> sorted out a workable solution (via documentation).
> 
> The real issue lies in how PBR operates and pip interacts with Kolla and is 
> explained in the quickstart.  From consulting with Doug Hellman and others in 
> the release team, it appears the issue that impacts Kolla is not really 
> solveable within PBR itself.  (I don’t mean to put words in Doug’s mouth, but 
> that is how I parsed our four+ hour discussion) on the topic.
> 
> The documentation is located here:
> http://docs.openstack.org/developer/kolla

It has been a while since that conversation, but IIRC the issue was
with installing kolla from packages and then also trying to use the
git version (or trying to use the git version instead) without
updating the package metadata. For most projects this doesn't matter,
but kolla uses its version number for some internal logic (I don't
remember those details), which makes it more sensitive to having
the wrong version than other projects might be.

PBR has a 2 phase lookup for versions. It first uses setuptools to try
to get the value from the metadata. If that fails, it then tries to get
the value from a local git repository, using the tags. So, after the
first kolla package is installed there is always metadata available,
even if it's wrong.

It's not clear that you need to tell people not to pip install from
source, so much as that they need to uninstall one version before
installing a new one, so that the metadata is correct. But if the
instructions you have now are working, that's the true test.
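A rough sketch of the two-phase lookup Doug describes — installed metadata
first, git tags second — which also shows why stale metadata always wins
once any version has been installed. This simplifies what pbr actually
does:

```python
import subprocess
from importlib import metadata


def lookup_version(package, repo_dir="."):
    """Two-phase version lookup: package metadata, then git tags."""
    # Phase 1: installed metadata. Once any version of the package is
    # installed this always succeeds, even if the metadata is stale --
    # which is exactly the kolla failure mode described above.
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        pass
    # Phase 2: fall back to the local git repository's tags.
    try:
        out = subprocess.run(
            ["git", "describe", "--tags"], cwd=repo_dir,
            capture_output=True, text=True, check=True)
        return out.stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return "0.0.0"  # no metadata and no usable git tags
```

Uninstalling the package first removes the phase-1 answer, which is why
that step makes the git-derived version visible again.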

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [daisycloud-core] Cancelled: IRC meeting 0800UTC Sep. 16 2016

2016-09-14 Thread jason
Because of holidays in China, it's probably safer to cancel the meeting.

-- 
Yours,
Jason

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] governance proposal worth a visit: Write down OpenStack principles

2016-09-14 Thread Thierry Carrez
Thierry Carrez wrote:
> Doug Hellmann wrote:
>> Excerpts from Jay Pipes's message of 2016-09-09 14:30:29 -0400:
>>>> To me, this statement
>>>> about One OpenStack is about emphasizing those commonalities and
>>>> working together to increase them, with the combined goals of
>>>> improving the user and operator experience of using OpenStack and
>>>> improving our own experience of making it.
>>>
>>> +1000 to the above, and I don't believe anything about my stance that 
>>> OpenStack should be a cloud toolkit goes against that.
>>>
>>> The wording/philosophy that I disagree with is the "one product" thing :)
>>
>> Tomato, tomato.
>>
>> We're all, I think, looking at this "One OpenStack" principle from
>> different perspectives.  You say "a toolkit". I say "a project".
>> Thierry said "a product". The important word in all of those phrases
>> is "a" -- as in singular.
> 
> FWIW I agree with Jay that the wording "a product" is definitely
> outdated and does not represent the current reality. "Product"
> presupposes a level of integration that we never achieved, and which is,
> in my opinion, not desirable at this stage. I think that saying "a
> framework" would be more accurate today. Something like "OpenStack is
> one community with one common mission, producing one framework of
> collaborating components" would capture my thinking.

I just pushed a new revision that incorporates many suggested wording
changes, including this one.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Roman Podoliaka
Hmm, looks like we now run more testr workers in parallel (8 instead of 4):

http://logs.openstack.org/76/335676/7/check/gate-nova-python34-db/6841fce/console.html.gz
http://logs.openstack.org/62/369862/3/check/gate-nova-python27-db-ubuntu-xenial/2784de9/console.html

On my machine running Nova migration tests against MySQL is much
slower with 8 workers than with 4 due to disk IO (it's HDD). When they
time out (after 320s) I see the very same TimeoutException and
IndexError (probably something messes up with TimeoutException up the
stack).

On Wed, Sep 14, 2016 at 6:44 PM, Mike Bayer  wrote:
>
>
> On 09/14/2016 11:08 AM, Mike Bayer wrote:
>>
>>
>>
>> On 09/14/2016 09:15 AM, Sean Dague wrote:
>>>
>>> I noticed the following issues happening quite often now in the
>>> opportunistic db tests for nova -
>>>
>>> http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22
>>>
>>>
>>>
>>> It looks like some race has been introduced where the various db
>>> connections are not fully isolated from each other like they used to be.
>>> The testing magic for this is buried pretty deep in oslo.db.
>>
>>
>> that error message occurs when a connection that is intended against a
>> SELECT statement fails to provide a cursor.description attribute.  It is
>> typically a driver-level bug in the MySQL world and corresponds to
>> mis-handled failure modes from the MySQL connection.
>>
>> By "various DB connections are not fully isolated from each other" are
>> you suggesting that a single in-Python connection object itself is being

Re: [openstack-dev] [TripleO] *ExtraConfig, backwards compatibility & deprecation

2016-09-14 Thread Giulio Fidente

On 09/14/2016 02:31 PM, Steven Hardy wrote:

Related to this is the future of all of the per-role customization
interfaces.  I'm thinking these don't really make sense to maintain
long-term now that we have the new composable services architecture, and it
would be better if we can deprecate them and move folks towards the
composable services templates instead?


My experience is that the ExtraConfig interfaces have been useful for
providing arbitrary hiera and class includes.


I wonder if we could ship, by default, some roles that parse those parameters?
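For readers following along, the kind of ExtraConfig usage described above typically looks like this in an environment file; the hiera keys below are made-up placeholders, not keys any real service requires:

```yaml
# Hypothetical environment file passed with `openstack overcloud deploy -e ...`.
# ExtraConfig merges arbitrary hieradata onto all nodes; the per-role variants
# (e.g. ControllerExtraConfig) scope it to a single role.
parameter_defaults:
  ExtraConfig:
    some::module::tunable: 42            # placeholder hiera key
  ControllerExtraConfig:
    another::class::param: 'value'       # placeholder hiera key
```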
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Mike Bayer



On 09/14/2016 11:08 AM, Mike Bayer wrote:



On 09/14/2016 09:15 AM, Sean Dague wrote:

I noticed the following issues happening quite often now in the
opportunistic db tests for nova -
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22



It looks like some race has been introduced where the various db
connections are not fully isolated from each other like they used to be.
The testing magic for this is buried pretty deep in oslo.db.


that error message occurs when a connection runs a statement that is
expected to return rows (such as a SELECT) but fails to provide a
cursor.description attribute.  It is typically a driver-level bug in the
MySQL world and corresponds to mis-handled failure modes on the MySQL
connection.

By "various DB connections are not fully isolated from each other", are
you suggesting that a single in-Python connection object itself is being
shared among multiple greenlets?   I'm not aware of a change in oslo.db
that would have any relationship to such an effect.


So, I think by "fully isolated from each other" what you really mean is
"operations upon a connection are not fully isolated from the subsequent
use of that connection", since that's what I see in the logs.  A
connection is being used during teardown to drop tables, but it is in an
essentially broken state from a PyMySQL perspective, which indicates
something went wrong with this (pooled) connection in the preceding test
that could not be detected or reverted once the connection was returned
to the pool.


From Roman's observation, it looks like a likely source of this 
corruption is a timeout that is interrupting the state of the PyMySQL 
connection.   In the preceding stack trace, PyMySQL is encountering a 
raise as it attempts to call "self._sock.recv_into(b)", and it seems 
like some combination of eventlet's response to signals and the 
fixtures.Timeout() fixture is the cause of this interruption.   As an 
additional wart, something else is getting involved and turning it into 
an IndexError, I'm not sure what that part is yet though I can imagine 
that might be SQLAlchemy mis-interpreting what it expects to be a 
PyMySQL exception class, since we normally look inside of 
exception.args[0] to get the MySQL error code.   With a blank exception 
like fixtures.TimeoutException, .args is the empty tuple.
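A minimal illustration of that last point (this is a sketch of the failure shape, not SQLAlchemy's actual code path):

```python
# A stand-in for fixtures.TimeoutException, which is raised with no args.
class TimeoutException(Exception):
    pass

def mysql_error_code(exc):
    # MySQL drivers raise e.g. OperationalError(2013, 'Lost connection...'),
    # so error-handling code commonly reads the code out of exc.args[0].
    return exc.args[0]

try:
    mysql_error_code(TimeoutException())
except IndexError as err:
    # .args is the empty tuple, so indexing it fails exactly as in the trace:
    print("IndexError:", err)  # IndexError: tuple index out of range
```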


The PyMySQL connection is now in an invalid state and unable to perform 
a SELECT statement correctly, but the connection is not invalidated and 
is instead returned to the connection pool in a broken state.  So the 
subsequent teardown, if it uses this same connection (which is likely), 
fails because the connection has been interrupted in the middle of its 
work and not given the chance to clean up.


It seems like the use of the fixtures.Timeout() fixture here is not organized
to work with a database operation in progress, especially an 
eventlet-monkeypatched PyMySQL.   Ideally, if something like a timeout 
due to a signal handler occurs, the entire connection pool should be 
disposed (quickest way, engine.dispose()), or at the very least (and 
much more targeted), the connection that's involved should be 
invalidated from the pool, e.g. connection.invalidate().
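In SQLAlchemy terms the two options look roughly like this (sketched against a throwaway in-memory SQLite engine rather than oslo.db's real MySQL wiring):

```python
from sqlalchemy import create_engine

# Stand-in engine; in oslo.db this would be the MySQL-backed engine.
engine = create_engine("sqlite://")

conn = engine.connect()
# ... imagine a timeout fires mid-operation, leaving `conn` in an unknown state ...

# Targeted fix: mark just this connection as unusable.  The pool opens a
# fresh DBAPI connection on the next checkout instead of reusing this one.
conn.invalidate()
assert conn.invalidated

# Blanket fix: discard every pooled connection outright.
engine.dispose()
```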


The change to the environment here would be that this timeout is 
happening at all - the reason for that is not yet known.   If oslo.db's 
version were involved in this error, I would guess that it would be 
related to this timeout condition being caused, and not anything to do 
with the connection provisioning.

Oslo.db 4.13.3 did hit the scene about the time this showed up. So I
think we need to strongly consider blocking it and revisiting these
issues post-Newton.

-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Thomas Goirand
On 09/14/2016 03:15 PM, Sean Dague wrote:
> I noticed the following issues happening quite often now in the
> opportunistic db tests for nova -
> http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22
> 
> 
> It looks like some race has been introduced where the various db
> connections are not fully isolated from each other like they used to be.
> The testing magic for this is buried pretty deep in oslo.db.
> 
> Oslo.db 4.13.3 did hit the scene about the time this showed up. So I
> think we need to strongly consider blocking it and revisiting these
> issues post-Newton.
> 
>   -Sean

Blocking a version of oslo.db just because your sixth sense tells you
it may have introduced some issues isn't the best way to decide. We need
a better investigation than this to find the root cause.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] FFE request for Ceph RGW integration

2016-09-14 Thread Andrew Woodward
Great to hear, nice work all.

On Wed, Sep 14, 2016 at 7:56 AM Giulio Fidente  wrote:

> On 08/30/2016 06:40 PM, Giulio Fidente wrote:
> > Together with Keith we're working on some patches to integrate (via
> > puppet-ceph) the deployment of Ceph RGW in TripleO as a composable
> > service which can optionally replace SwiftProxy
> >
> >
> > Changes are tracked via blueprint at:
> >
> > https://blueprints.launchpad.net/tripleo/+spec/ceph-rgw-integration
> >
> > They should be tagged with the appropriate topic branch, so can be found
> > with:
> >
> > https://review.openstack.org/#/q/topic:bp/ceph-rgw-integration,n,z
> >
> >
> > There is also a [NO MERGE] change which we use to test the above in
> > upstream CI:
> >
> > https://review.openstack.org/#/c/357182/
> >
> >
> > We'd like to formally request an FFE for this feature.
> >
> > Thanks for consideration, feedback, help and reviews :)
>
> A quick update: the last submission needed for this feature has been
> merged today, thanks to all who helped.
>
> From the RC release it will be possible to use Ceph RGW as a Swift
> drop-in replacement for those deploying Ceph.
> --
> Giulio Fidente
> GPG KEY: 08D733BA | IRC: gfidente
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
--
Andrew Woodward
Mirantis
Fuel Community Ambassador
Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Mike Bayer



On 09/14/2016 09:15 AM, Sean Dague wrote:

I noticed the following issues happening quite often now in the
opportunistic db tests for nova -
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22


It looks like some race has been introduced where the various db
connections are not fully isolated from each other like they used to be.
The testing magic for this is buried pretty deep in oslo.db.


that error message occurs when a connection runs a statement that is
expected to return rows (such as a SELECT) but fails to provide a
cursor.description attribute.  It is typically a driver-level bug in the
MySQL world and corresponds to mis-handled failure modes on the MySQL
connection.


By "various DB connections are not fully isolated from each other", are
you suggesting that a single in-Python connection object itself is being
shared among multiple greenlets?   I'm not aware of a change in oslo.db
that would have any relationship to such an effect.






Oslo.db 4.13.3 did hit the scene about the time this showed up. So I
think we need to strongly consider blocking it and revisiting these
issues post-Newton.

-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] cinder volume lxc and iscsi

2016-09-14 Thread Fabrice Grelaud
Hi,

I need recommendations for setting up block storage with the Dell Storage
Center iSCSI driver.

As seen in the docs
(http://docs.openstack.org/developer/openstack-ansible/mitaka/install-guide/configure-cinder.html),
iSCSI block storage does not need a separate host. So I modified
env.d/cinder.yml to remove "is_metal: true" and configured
openstack_user_config.yml per the driver reference
(http://docs.openstack.org/mitaka/config-reference/block-storage/drivers/dell-storagecenter-driver.html):

storage_hosts:
  p-osinfra01:
    ip: 172.29.236.11
    container_vars:
      cinder_storage_availability_zone: Dell_SC
      cinder_default_availability_zone: Dell_SC
      cinder_default_volume_type: delliscsi
      cinder_backends:
        limit_container_types: cinder_volume
        delliscsi:
          volume_driver: cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
          volume_backend_name: dell_iscsi
          san_ip: 172.x.y.z
          san_login: admin
          san_password: 
          iscsi_ip_address: 10.a.b.c
          dell_sc_ssn: 46247
          dell_sc_api_port: 3033
          dell_sc_server_folder: Openstack
          dell_sc_volume_folder: Openstack
          iscsi_port: 3260

Same for p-osinfra02 and p-osinfra03.

I launched the os-cinder-install.yml playbook and now have a
cinder-volume container on each of my three infra hosts.
Everything is OK.

In Horizon, I can create a volume (visible on the Storage Center) and
attach it to an instance. Perfect!

But now, if I launch an instance with "Boot from image (create a new
volume)", I get an error from nova: "Block Device Mapping is Invalid".
I checked my cinder-volume.log and see:
ERROR cinder.volume.flows.manager.create_volume
FailedISCSITargetPortalLogin: Could not login to any iSCSI portal
ERROR cinder.volume.manager ImageCopyFailure: Failed to copy image to
volume: Could not login to any iSCSI portal.

I tested the iSCSI connection from inside one container:
root@p-osinfra03-cinder-volumes-container-2408e151:~# iscsiadm -m
discovery -t sendtargets -p 10.a.b.c
10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a724
10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a728
10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a723
10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a727

But when logging in, I got:
root@p-osinfra03-cinder-volumes-container-2408e151:~# iscsiadm -m node
-T iqn.2002-03.com.compellent:5000d31000b4a724 --login
Logging in to [iface: default, target:
iqn.2002-03.com.compellent:5000d31000b4a724, portal: 10.a.b.c,3260]
(multiple)
iscsiadm: got read error (0/0), daemon died?
iscsiadm: Could not login to [iface: default, target:
iqn.2002-03.com.compellent:5000d31000b4a724, portal: 10.a.b.c,3260].
iscsiadm: initiator reported error (18 - could not communicate to iscsid)
iscsiadm: Could not log into all portals

I found via Google a bug about using open-iscsi inside an LXC container
(https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855), which Kevin
Carter (openstack-ansible core team) called a "blocking issue" in May 2015.

Is that bug still relevant?
Should I instead deploy cinder-volume on the compute hosts (on metal) to
solve my problem?
Or do you have other suggestions?
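If deploying on metal turns out to be the answer, the env.d override is small; this fragment follows the Mitaka-era openstack-ansible container_skel layout, and the key names should be double-checked against your tree:

```yaml
# env.d/cinder.yml (fragment) -- run cinder-volume on the host instead of
# inside an LXC container, sidestepping the open-iscsi-in-container issue.
container_skel:
  cinder_volumes_container:
    properties:
      is_metal: true
```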

Thanks.
Regards,

-- 
Fabrice Grelaud
Université de Bordeaux


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] No cellsv2 meeting today

2016-09-14 Thread Andrew Laski
Since we're closing in on RC1, which needs people's attention, and
nothing cells-related affects it, we're going to skip the cells meeting today.
The next planned meeting is September 28th at 1700UTC. Thanks.

-Andrew

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] PTL candidacy

2016-09-14 Thread Dmitry Tantsur

Hi all!

I am announcing my candidacy for PTL for the Ironic team for the Ocata release
cycle. In case you don't know me, I'm dtantsur on IRC. I started working on
Ironic around late spring or summer 2014, and I'm probably best known as a
founder of ironic-inspector sub-project.

The Newton cycle was a huge breakthrough for the Bare Metal project. Testing
upgrades in CI, multi-tenant networking, and support for multiple compute
services out of the box are just a few great and highly anticipated things
that have become reality during this cycle.

As for Ocata, I would like to concentrate on unfinished tasks, CI and
technical debt. To be more precise, if you happen to choose me as your new PTL
for the Ocata cycle, I would like to shift focus to and be the driving force
behind these fields of improvement:

* Driver improvements and unification

  First of all, this comprises the driver composition reform - my long-term
  commitment which I finally plan to fulfill this cycle. However, I would also
  like us to think more about unification between drivers. Several non-core
  things vary from driver to driver: UEFI support, RAID support, and
  capabilities discovery immediately come to mind as examples.

  Finally, this involves parting with drivers that do not have 3rd party CI.
  I am ready to share responsibility for this unpopular move :)

* CI improvements

  The number of our jobs keeps growing, but we still don't cover a lot of
  features that Ironic provides. We need to figure out the way to increase
  coverage without increasing load on infra and number of transient failures.
  A few approaches come to my mind. Using projects like TripleO as 3rdparty CI
  may cover a few more things specific to its use case. Puppet experience with
  "scenarios" may also come in handy to replace several jobs with one.

  No matter which approach we take, we will need to make sure that it's still
  clear (especially to newcomers) what caused a particular CI failure.

* Finishing Newton goals

  We've done a great job landing features in Newton, but a few important things
  are still missing. For example, booting from volume, rescue mode, port
  groups, rolling upgrades, dealing with automatic maintenance in a more robust
  way, and some more.

Cheers,
Dmitry

P.S.
Submitted at https://review.openstack.org/370147

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] FFE request for Ceph RGW integration

2016-09-14 Thread Giulio Fidente

On 08/30/2016 06:40 PM, Giulio Fidente wrote:

Together with Keith we're working on some patches to integrate (via
puppet-ceph) the deployment of Ceph RGW in TripleO as a composable
service which can optionally replace SwiftProxy


Changes are tracked via blueprint at:

https://blueprints.launchpad.net/tripleo/+spec/ceph-rgw-integration

They should be tagged with the appropriate topic branch, so can be found
with:

https://review.openstack.org/#/q/topic:bp/ceph-rgw-integration,n,z


There is also a [NO MERGE] change which we use to test the above in
upstream CI:

https://review.openstack.org/#/c/357182/


We'd like to formally request an FFE for this feature.

Thanks for consideration, feedback, help and reviews :)


A quick update: the last submission needed for this feature has been
merged today, thanks to all who helped.

From the RC release it will be possible to use Ceph RGW as a Swift
drop-in replacement for those deploying Ceph.

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ptl] Self non-nomination for Kolla PTL for Ocata cycle

2016-09-14 Thread Steven Dake (stdake)
Jeffrey,

The broad Kolla team made Kolla come true.  Thank you for the kind words; they
mean a lot coming from you. ☺

Regards
-steve

On 9/13/16, 8:41 PM, "Jeffrey Zhang"  wrote:

Thank Steve for all you do to make Kolla come true.

On Wed, Sep 14, 2016 at 10:19 AM, Vikram Hosakote (vhosakot)
 wrote:
> Thanks a lot Steve for being a great PTL, leader and a mentor!
>
> Regards,
> Vikram Hosakote
> IRC:  vhosakot
>
> From: "Steven Dake (stdake)" 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Monday, September 12, 2016 at 1:04 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: [openstack-dev] [kolla][ptl] Self non-nomination for Kolla PTL for
> Ocata cycle
>
> To the OpenStack Community,
>
>
>
> Consider this email my self non-nomination for PTL of Kolla for
>
> the coming Ocata release.  I let the team know in our IRC team meeting
>
> several months ago I was passing the on baton at the conclusion of Newton,
>
> but I thought the broader OpenStack community would appreciate the
> information.
>
>
>
> I am super proud of what our tiny struggling community produced starting
>
> 3 years ago with only 3 people to the strongly emergent system that is Kolla
>
> with over 467 total contributors [1] since inception and closing in on 5,000
>
> commits today.
>
>
>
> In my opinion, the Kolla community is well on its way to conquering the last
>
> great challenge OpenStack faces: Making operational deployment management
> (ODM)
>
> of OpenStack cloud platforms straight-forward, easy, and most importantly
>
> cost effective for the long term management of OpenStack.
>
>
>
> The original objective the Kolla community set out to accomplish, deploying
>
> OpenStack in containers at 100 node scale has been achieved as proven by
> this
>
> review [2].  In these 12 scenarios, we were able to deploy with 3
>
> controllers, 100 compute nodes, and 20 storage nodes using Ceph for all
>
> storage and run rally as well as tempest against the deployment.
>
>
>
> Kolla is _No_Longer_a_Toy_ and _has_not_been_ since Liberty 1.1.0.
>
>
>
> I have developed a strong leadership pipeline and expect several candidates
>
> to self-nominate.  I wish all of them the best in the future PTL elections.
>
>
>
> Finally, I would like to thank all of the folks that have supported Kolla’s
>
> objectives.  If I listed the folks individually this email would be far too
>
> long, but you know who you are :) Thank you for placing trust in my
> judgement.
>
>
>
> It has been a pleasure to serve as your leader.
>
>
>
> Regards
>
> -steak
>
>
>
> [1] http://stackalytics.com/report/contribution/kolla-group/2000
>
> [2] https://review.openstack.org/#/c/352101/
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Roman Podoliaka
Sean,

I'll take a closer look, but test execution times and errors look suspicious:

ironic.tests.unit.db.sqlalchemy.test_migrations.TestMigrationsPostgreSQL.test_walk_versions
60.002

2016-09-14 14:21:38.756421 |   File
"/home/jenkins/workspace/gate-ironic-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/epolls.py",
line 62, in do_poll
2016-09-14 14:21:38.756435 | return self.poll.poll(seconds)
2016-09-14 14:21:38.756481 |   File
"/home/jenkins/workspace/gate-ironic-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py",
line 52, in signal_handler
2016-09-14 14:21:38.756494 | raise TimeoutException()
2016-09-14 14:21:38.756508 | IndexError: tuple index out of range

It looks as if the test case was forcibly stopped when the timeout fired.
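That matches how fixtures.Timeout behaves: it arms SIGALRM and raises from the signal handler into whatever frame is executing. A stripped-down sketch of the mechanism on a Unix host (plain signal module, not the fixture itself):

```python
import signal
import time

class TimeoutException(Exception):
    pass

def handler(signum, frame):
    # fixtures.Timeout(timeout, gentle=True) installs a handler much like
    # this one; the raise lands wherever the test happens to be -- e.g.
    # blocked inside PyMySQL's self._sock.recv_into(b).
    raise TimeoutException()

signal.signal(signal.SIGALRM, handler)
signal.alarm(1)          # fire in 1 second
try:
    time.sleep(5)        # stands in for a long-running DB operation
except TimeoutException:
    print("operation interrupted mid-flight")
finally:
    signal.alarm(0)      # disarm
```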

Thanks,
Roman

On Wed, Sep 14, 2016 at 4:15 PM, Sean Dague  wrote:
> I noticed the following issues happening quite often now in the
> opportunistic db tests for nova -
> http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22
>
>
> It looks like some race has been introduced where the various db
> connections are not fully isolated from each other like they used to be.
> The testing magic for this is buried pretty deep in oslo.db.
>
> Oslo.db 4.13.3 did hit the scene about the time this showed up. So I
> think we need to strongly consider blocking it and revisiting these
> issues post-Newton.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Sean Dague
I noticed the following issues happening quite often now in the
opportunistic db tests for nova -
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22


It looks like some race has been introduced where the various db
connections are not fully isolated from each other like they used to be.
The testing magic for this is buried pretty deep in oslo.db.

Oslo.db 4.13.3 did hit the scene about the time this showed up. So I
think we need to strongly consider blocking it and revisiting these
issues post-Newton.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] *ExtraConfig, backwards compatibility & deprecation

2016-09-14 Thread Steven Hardy
Hi all,

I wanted to draw attention to this patch:

https://review.openstack.org/#/c/367295/

As part of the custom-roles work, I had to break backwards compatibility
for the OS::TripleO::AllNodesExtraConfig resource.

I'm not happy about that, but I couldn't find any way to avoid it if we
want to allow existing roles to be optional (such as removing the *Storage
role resources from the deployment completely).

The adjustments for any out-of-tree users should be simple, and I'm
planning to write a script to help folks migrate, but we'll need to document
this in the release notes/docs (I'll write these).

Related to this is the future of all of the per-role customization
interfaces.  I'm thinking these don't really make sense to maintain
long-term now that we have the new composable services architecture, and it
would be better if we can deprecate them and move folks towards the
composable services templates instead?

In particular, when moving to a fully containerized deployment using an
atomic host image, configuration of the host directly via these interfaces
will no longer be possible, so it will be necessary to get folks onto the
composable services interfaces ahead of such a move (as these will fit much
better with a container based deployment):

https://review.openstack.org/#/c/330659/

What do folks think about this?  I suspect there's going to be some work
required to achieve it, but a first step would be to convert all the
in-tree ExtraConfig examples to the new format & update the docs to show how
customizations via composable services would work.

Then later we can update the docs & mark these interfaces deprecated
(during Ocata).

Any thoughts appreciated, thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][FFE][keystone][release] block keystonemiddleware 4.0.0

2016-09-14 Thread Steve Martinelli
We're more than happy with that outcome, and may have already started the
patch ;) https://review.openstack.org/#/c/370011/
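The shape of that patch is the usual import fallback; this sketch is illustrative (with an extra guard so it also runs where keystonemiddleware isn't installed at all), not the merged change:

```python
# Prefer the public class (keystonemiddleware >= 4.1.0), fall back to the
# private name that the 4.0.x series shipped.
try:
    from keystonemiddleware.auth_token import BaseAuthProtocol
except ImportError:
    try:
        from keystonemiddleware.auth_token import (
            _BaseAuthProtocol as BaseAuthProtocol,
        )
    except ImportError:
        # keystonemiddleware not installed at all (illustration guard only).
        BaseAuthProtocol = None
```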

On Wed, Sep 14, 2016 at 5:10 AM, Ihar Hrachyshka 
wrote:

> Steve Martinelli  wrote:
>
> A bug was recently filed against keystone [1]. As of the Newton release we
>> depend on a class being public -- BaseAuthProtocol instead of
>> _BaseAuthProtocol [2], which was introduced in 4.1.0 [3].
>>
>> The current requirement for keystonemiddleware is:
>>   keystonemiddleware>=4.0.0,!=4.1.0,!=4.5.0
>>
>> Blocking 4.0.0 would logically make it:
>>   keystonemiddleware>=4.2.0,!=4.5.0
>>
>> I've pushed a patch to the requirements repo for this change [4]. I'd
>> like to know if blocking the lower value makes sense, I realize it's
>> advertised, but we're up to 4.9.0 now.
>>
>> Unfortunately, many projects depend on keystonemiddleware, but (luckily
>> ?) this should only be server side projects [5], most of which are going
>> through their RC period now.
>>
>
> I suggest instead that keystone closes the gap on its side, by falling back
> to the _BaseAuthProtocol class if the public one is not present. No
> requirement updates, no delay in rc1, just some time for keystone folks to
> be aware that the private class in the 4.0.x series is to be considered
> effectively public for their own usage.
>
> Ihar
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Adding routes based on source address in neutron router

2016-09-14 Thread Abhilash Goyal
I am trying to add routes in a neutron router based on source address,
but I am not able to find a valid approach.
Any input on how to proceed?
Thanks in advance.

-- 
Abhilash Goyal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Extending python-dracclient to fetch System/iDrac resources

2016-09-14 Thread Miles Gould

On 13/09/16 20:30, Anish Bhatt wrote:

Is parsing iDrac/System attributes differently from BIOS attributes the
correct approach here (this will also make it match racadm output), or
should I be changing all Attributes to be parsed the same way ?


"Parse everything the same way" sounds like the simpler and less brittle 
option; is there a good reason *not* to consider GroupID for BIOS 
attributes?


Miles

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][nova] python-novaclient 3.3.2 release (mitaka)

2016-09-14 Thread no-reply
We are eager to announce the release of:

python-novaclient 3.3.2: Client library for OpenStack Compute API

This release is part of the mitaka stable release series.

With source available at:

https://git.openstack.org/cgit/openstack/python-novaclient

With package available at:

https://pypi.python.org/pypi/python-novaclient

Please report issues through launchpad:

https://bugs.launchpad.net/python-novaclient

For more details, please see below.

Changes in python-novaclient 3.3.1..3.3.2
-

76240a8 List system dependencies for running common tests
019d138 Fix nova host-evacuate for v2.14
451e7f5 Updated from global requirements


Diffstat (except docs and test files)
-

novaclient/v2/contrib/host_evacuate.py | 16 +---
other-requirements.txt | 23 +++
requirements.txt   |  2 +-
test-requirements.txt  |  4 ++--
tox.ini|  8 
6 files changed, 60 insertions(+), 6 deletions(-)


Requirements updates


diff --git a/other-requirements.txt b/other-requirements.txt
new file mode 100644
index 000..da3ab7f
--- /dev/null
+++ b/other-requirements.txt
@@ -0,0 +1,23 @@
+# This is a cross-platform list tracking distribution packages needed by tests;
+# see http://docs.openstack.org/infra/bindep/ for additional information.
+
+build-essential [platform:dpkg]
+dbus-devel [platform:rpm]
+dbus-glib-devel [platform:rpm]
+gettext
+language-pack-en [platform:ubuntu]
+libdbus-1-dev [platform:dpkg]
+libdbus-glib-1-dev [platform:dpkg]
+libffi-dev [platform:dpkg]
+libffi-devel [platform:rpm]
+libuuid-devel [platform:rpm]
+locales [platform:debian]
+python-dev [platform:dpkg]
+python-devel [platform:rpm]
+python3-all-dev [platform:ubuntu !platform:ubuntu-precise]
+python3-dev [platform:dpkg]
+python3-devel [platform:fedora]
+python3.4 [platform:ubuntu-trusty]
+python3.5 [platform:ubuntu-xenial]
+python34-devel [platform:centos]
+uuid-dev [platform:dpkg]
diff --git a/requirements.txt b/requirements.txt
index ae6170c..510ca43 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -14 +14 @@ six>=1.9.0 # MIT
-Babel>=1.3 # BSD
+Babel!=2.3.0,!=2.3.1,!=2.3.2,!=2.3.3,>=1.3 # BSD
diff --git a/test-requirements.txt b/test-requirements.txt
index 6a301d0..ba3fa33 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +8 @@ discover # BSD
-fixtures>=1.3.1 # Apache-2.0/BSD
+fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
@@ -11 +11 @@ mock>=1.2 # BSD
-python-keystoneclient!=1.8.0,!=2.1.0,>=1.6.0 # Apache-2.0
+python-keystoneclient!=1.8.0,!=2.1.0,<3.0.0,>=1.6.0 # Apache-2.0





Re: [openstack-dev] [ironic] Bug smash day: Wed, Sep 14

2016-09-14 Thread Dmitry Tantsur

On 09/12/2016 08:00 PM, Dmitry Tantsur wrote:

Hi all!

On Wednesday, Sep 14, 2016 we will try to clean up our bug list, triage,
separate RFEs from real bugs, etc. Then we'll find out which bugs must
be fixed for Newton, and, well, fix them :)

We are starting as soon as the first person joins and stop as soon as
the last person drops. We will coordinate our efforts on the
#openstack-ironic channel on Freenode. Please feel free to join at your
convenience.

See you there!



We're starting!

Let's use this etherpad to coordinate what we are doing: 
https://etherpad.openstack.org/p/ironic-bug-smash




Re: [openstack-dev] [charms]Running two haproxy-using units on same machine?

2016-09-14 Thread James Page
I can't find the provider/colocation document I wrote a while back (it's
disappeared from the canonical wiki).

I'll re-write it in the charm-guide soon.

On Wed, 14 Sep 2016 at 10:03 Neil Jerram  wrote:

> Thanks James for this quick and clear answer!
>
> Neil
>
>
> On Tue, Sep 13, 2016 at 8:46 PM, James Page  wrote:
>
>> Hi Neil
>>
>> On Tue, 13 Sep 2016 at 20:43 Neil Jerram  wrote:
>>
>>> Should it be possible to run two OpenStack charm units, that both use
>>> haproxy to load balance their APIs, on the same machine?  Or is there some
>>> doc somewhere that says that a case like that should use separate machines?
>>>
>>> (I'm asking in connection with the bug report at
>>> https://bugs.launchpad.net/openstack-charm-testing/+bug/1622697.)
>>>
>>
>> No - that's not currently possible.  For example, if you try to place
>> both nova-cloud-controller and cinder units on the same machine, they both
>> assume sole control over haproxy.cfg and will happily trample each other's
>> changes.
>>
>> There is a doc somewhere - I'll dig it out and add to the charm-guide on
>> docs.openstack.org.
>>
>> Solution: use a LXC or LXD container for each service, ensuring sole
>> control of the filesystem for each charm, avoiding said conflict.
>>
>> Cheers
>>
>> James
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [requirements][FFE][keystone][release] block keystonemiddleware 4.0.0

2016-09-14 Thread Ihar Hrachyshka

Steve Martinelli  wrote:

A bug was recently filed against keystone [1]. As of the Newton release
we depend on a class being public -- BaseAuthProtocol instead of
_BaseAuthProtocol [2] -- which was introduced in 4.1.0 [3].


The current requirement for keystonemiddleware is:
  keystonemiddleware>=4.0.0,!=4.1.0,!=4.5.0

Blocking 4.0.0 would logically make it:
  keystonemiddleware>=4.2.0,!=4.5.0

I've pushed a patch to the requirements repo for this change [4]. I'd
like to know if blocking the lower value makes sense; I realize it's
advertised, but we're up to 4.9.0 now.
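The effect of the proposed constraint change can be checked mechanically. Below is a minimal stdlib-only sketch (real resolution is done by pip/setuptools per PEP 440; this helper only handles the `>=` and `!=` operators used in the requirement lines above):

```python
def _parse(version):
    """Parse a simple X.Y.Z version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version, spec):
    """Check a version against a comma-separated pip-style specifier.

    Only the >= and != operators appearing in the keystonemiddleware
    requirement lines are supported; this is an illustration, not a
    replacement for pip's resolver.
    """
    for clause in spec.split(","):
        clause = clause.strip()
        if clause.startswith(">="):
            if _parse(version) < _parse(clause[2:]):
                return False
        elif clause.startswith("!="):
            if _parse(version) == _parse(clause[2:]):
                return False
    return True

OLD = ">=4.0.0,!=4.1.0,!=4.5.0"   # current requirement
NEW = ">=4.2.0,!=4.5.0"           # proposed requirement

assert satisfies("4.0.0", OLD) and not satisfies("4.0.0", NEW)  # 4.0.0 newly blocked
assert not satisfies("4.1.0", OLD) and not satisfies("4.1.0", NEW)
assert satisfies("4.9.0", OLD) and satisfies("4.9.0", NEW)      # latest still allowed
```

In both specifiers 4.9.0 remains installable, so deployments tracking upper releases are unaffected; only the 4.0.x floor moves.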


Unfortunately, many projects depend on keystonemiddleware, but (luckily  
?) this should only be server side projects [5], most of which are going  
through their RC period now.


I suggest instead that keystone closes the gap on their side, by falling
back to the _BaseAuthProtocol class if the public one is not present. No
requirement updates, no delay in rc1, just some time for keystone folks to
be aware that the private class in the 4.0.x series is to be considered
kinda public for their own usage.
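Ihar's suggested fallback can be sketched as a small compatibility shim. The two class names come from the thread; keystonemiddleware itself is not imported here, so the module objects below are stand-ins, not the library's actual API surface:

```python
import types

def load_auth_protocol(auth_token_module):
    """Return BaseAuthProtocol, falling back to the pre-4.1.0 private name."""
    cls = getattr(auth_token_module, "BaseAuthProtocol", None)
    if cls is None:  # keystonemiddleware < 4.1.0 only has _BaseAuthProtocol
        cls = getattr(auth_token_module, "_BaseAuthProtocol")
    return cls

# Simulate a 4.0.x module that only exposes the private class.
old_mod = types.SimpleNamespace(
    _BaseAuthProtocol=type("_BaseAuthProtocol", (), {}))
assert load_auth_protocol(old_mod).__name__ == "_BaseAuthProtocol"

# Simulate a >= 4.1.0 module exposing the public name.
Pub = type("BaseAuthProtocol", (), {})
new_mod = types.SimpleNamespace(BaseAuthProtocol=Pub, _BaseAuthProtocol=Pub)
assert load_auth_protocol(new_mod) is Pub
```

In real code the same effect is usually achieved with a try/except ImportError around the two import forms, done once at module import time.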


Ihar



Re: [openstack-dev] [charms]Running two haproxy-using units on same machine?

2016-09-14 Thread Neil Jerram
Thanks James for this quick and clear answer!

Neil


On Tue, Sep 13, 2016 at 8:46 PM, James Page  wrote:

> Hi Neil
>
> On Tue, 13 Sep 2016 at 20:43 Neil Jerram  wrote:
>
>> Should it be possible to run two OpenStack charm units, that both use
>> haproxy to load balance their APIs, on the same machine?  Or is there some
>> doc somewhere that says that a case like that should use separate machines?
>>
>> (I'm asking in connection with the bug report at
>> https://bugs.launchpad.net/openstack-charm-testing/+bug/1622697.)
>>
>
> No - that's not currently possible.  For example, if you try to place both
> nova-cloud-controller and cinder units on the same machine, they both
> assume sole control over haproxy.cfg and will happily trample each other's
> changes.
>
> There is a doc somewhere - I'll dig it out and add to the charm-guide on
> docs.openstack.org.
>
> Solution: use a LXC or LXD container for each service, ensuring sole
> control of the filesystem for each charm, avoiding said conflict.
>
> Cheers
>
> James
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [tricircle]agenda of weekly meeting Sept.14

2016-09-14 Thread joehuang
Hello, team,



Agenda for the Sept. 14 weekly meeting; let's continue with the topics:


# PTL election

# progress review and concerns on the features like micro versions, policy 
control, dynamic pod binding, cross pod L2 networking

#  Tricircle splitting: https://etherpad.openstack.org/p/TricircleSplitting

# open discussion


How to join:

#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting 
every Wednesday starting at 13:00 UTC.


If you have other topics to be discussed in the weekly meeting, please reply 
to this mail.

Best Regards
Chaoyi Huang ( joehuang )


Re: [openstack-dev] [Keystone] Why not OAuth 2.0 provider?

2016-09-14 Thread Alexander Makarov
Sorry - lost some links :)

Unified delegation spec:
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/unified-delegation.html
About OAuth2:
https://hueniverse.com/2012/07/26/oauth-2-0-and-the-road-to-hell/

On Wed, Sep 14, 2016 at 10:58 AM, Alexander Makarov 
wrote:

> Actually OAuth support is my next step in "unified delegations" effort
> [0], so it's a good time to think about what version of it should be
> supported.
>
> Along with that I have some concerns about OAuth v2, as IIRC authors
> themselves abandoned the spec. I'll check if something changed since that
> time.
>
> On 13.09.2016 00:43, Steve Martinelli wrote:
>
> 
>
>>
>> Would you please shed some light on how to configure Keystone for OAuth1?
>> Thank you very much.
>>
>
> There is some documentation in the API but nothing formally written out:
> http://developer.openstack.org/api-ref/identity/v3-ext/index.html
>
>
>>
>> I am trying to develop OAuth 2 client for Keystone. We will contribute
>> our OAuth 2 client source code to the community if we can use
>> Google/Facebook to log in to OpenStack through OAuth 2 client.
>>
>>
> Currently you can setup keystone to work with Google / Facebook and other
> social logins. If you've setup keystone to use Shibboleth (which you did, I
> snipped that part of the message), then you can set it up to use these
> social logins as well. See documentation here: http://docs.openstack.
> org/developer/keystone/federation/federated_identity.html#id4
>
>
>> Thanks.
>>
>> Best regards,
>>
>> Winston Hong
>> Ottawa, Ontario
>> Canada
>>
>>
>> Steve Martinelli  Jun 27, 2016, 10:57 PM
>>
>> > So, the os-oauth routes you mention in the documentation do not make
>> > keystone a proper oauth provider. We simply perform delegation (one
>> user
>> > handing some level of permission on a project to another entity) with
>> the
>> > standard flow established in the oauth1.0a specification.
>> >
>> > Historically we chose oauth1.0 because one of the implementers was very
>> > much against a flow based on oauth2.0 (though the names are similar,
>> these
>> > can be treated as two very different beasts, you can read about it here
>> > [1]). Even amongst popular service providers the choice is split down
>> the
>> > middle, some providing support for both [2]
>> >
>> > We haven't bothered to implement support for oauth2.0 since there has
>> been
>> > no feedback or desire from operators to do so. Mostly, we don't want
>> > yet-another-delegation mechanism in keystone, we have trusts and
>> oauth1.0;
>> > should an enticing use case arise to include another, then we can
>> revisit
>> > the discussion.
>> >
>> > [1] https://hueniverse.com/2012/07/26/oauth-2-0-and-the-road-to-hell/
>> > [2] https://en.wikipedia.org/wiki/List_of_OAuth_providers
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>


-- 
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60

Skype: MAKAPOB.AJIEKCAHDP


Re: [openstack-dev] [Keystone] Why not OAuth 2.0 provider?

2016-09-14 Thread Alexander Makarov
Actually OAuth support is my next step in "unified delegations" effort 
[0], so it's a good time to think about what version of it should be 
supported.


Along with that I have some concerns about OAuth v2, as IIRC authors 
themselves abandoned the spec. I'll check if something changed since 
that time.



On 13.09.2016 00:43, Steve Martinelli wrote:




Would you please shed some light on how to configure Keystone for
OAuth1? Thank you very much.


There is some documentation in the API but nothing formally written 
out: http://developer.openstack.org/api-ref/identity/v3-ext/index.html



I am trying to develop OAuth 2 client for Keystone. We will
contribute our OAuth 2 client source code to the community if we
can use Google/Facebook to log in to OpenStack through OAuth 2 client.


Currently you can setup keystone to work with Google / Facebook and 
other social logins. If you've setup keystone to use Shibboleth (which 
you did, I snipped that part of the message), then you can set it up 
to use these social logins as well. See documentation here: 
http://docs.openstack.org/developer/keystone/federation/federated_identity.html#id4


Thanks.

Best regards,

Winston Hong
Ottawa, Ontario
Canada


Steve Martinelli  Jun 27, 2016, 10:57 PM

> So, the os-oauth routes you mention in the documentation do not
make
> keystone a proper oauth provider. We simply perform delegation
(one user
> handing some level of permission on a project to another entity)
with the
> standard flow established in the oauth1.0a specification.
>
> Historically we chose oauth1.0 because one of the implementers
was very
> much against a flow based on oauth2.0 (though the names are
similar, these
> can be treated as two very different beasts, you can read about
it here
> [1]). Even amongst popular service providers the choice is split
down the
> middle, some providing support for both [2]
>
> We haven't bothered to implement support for oauth2.0 since
there has been
> no feedback or desire from operators to do so. Mostly, we don't
want
> yet-another-delegation mechanism in keystone, we have trusts and
oauth1.0;
> should an enticing use case arise to include another, then we
can revisit
> the discussion.
>
> [1]
https://hueniverse.com/2012/07/26/oauth-2-0-and-the-road-to-hell/

> [2] https://en.wikipedia.org/wiki/List_of_OAuth_providers


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







[openstack-dev] [OSSN-0075] Deleted Glance image IDs may be reassigned

2016-09-14 Thread Luke Hinds
Deleted Glance image IDs may be reassigned
---

### Summary ###
It is possible for image IDs from deleted images to be reassigned to
other images.  This creates the possibility that:

 - Alice creates a VM that boots from image ID X which has been shared
 with her by a trusted party, Bob.
 - Bob (image X's owner) deletes the image.  As per design, Alice
 receives no notification this happened.
 - Mallory creates a new image and specifies that the ID should be X.
 - Mallory shares image X with Alice.  Again, per design, Alice is not
 notified of this change.
 - Alice boots her VM without realizing that the image has changed.

It's worth noting that in this scenario Mallory needs to know Alice's
project ID to share the new image with Alice.  This isn't enough to
mitigate the issue as project IDs weren't designed to be confidential.

Also, if the environment allows non-administrators to publish images,
Mallory doesn't have to explicitly share with Alice or know her project
ID to perform this attack.

### Affected Services / Software ###
Glance, Liberty, Mitaka, Newton

### Discussion ###
Glance's image table doesn't maintain a list of previously used image
IDs.  Previously assigned image IDs will be listed in the image table
as deleted, but these records may be removed (for performance reasons)
with the `glance-manage db purge` utility or manually by an
administrator.

If these records are removed a malicious user may intentionally upload
a new image using the same ID (Glance allows an image creator to
optionally specify the image ID).  This would cause any victim
instances referencing the ID to use an attacker supplied image.

### Recommended Actions ###
The combination of purged Glance database entries and non-admin image
upload is dangerous.  In environments where normal users are permitted
to upload images, the `images` table should not be purged.  It is
however safe to delete rows from `image_properties`, `image_tags`,
`image_members`, and `image_locations` tables.
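The recommended action above can be sketched as a targeted purge that leaves the `images` table, and hence the record of previously assigned IDs, intact. This is an illustrative stdlib-only sketch run against a toy SQLite schema; the real Glance schema, column names, and the `glance-manage db purge` implementation differ:

```python
import sqlite3

# Tables the OSSN says are safe to purge; `images` is deliberately absent
# so that previously assigned image IDs remain recorded as deleted.
SAFE_TABLES = ("image_properties", "image_tags",
               "image_members", "image_locations")

def purge_safe_tables(conn):
    """Remove soft-deleted rows only from tables that are safe to purge."""
    for table in SAFE_TABLES:
        conn.execute("DELETE FROM %s WHERE deleted = 1" % table)
    conn.commit()

# Demonstration against a toy schema (column names are assumptions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id TEXT, deleted INTEGER)")
for t in SAFE_TABLES:
    conn.execute("CREATE TABLE %s (image_id TEXT, deleted INTEGER)" % t)
conn.execute("INSERT INTO images VALUES ('X', 1)")           # ID stays reserved
conn.execute("INSERT INTO image_properties VALUES ('X', 1)") # row gets purged

purge_safe_tables(conn)

remaining_images = conn.execute("SELECT COUNT(*) FROM images").fetchone()[0]
remaining_props = conn.execute(
    "SELECT COUNT(*) FROM image_properties").fetchone()[0]
assert remaining_images == 1  # deleted image ID X cannot be reassigned
assert remaining_props == 0   # ancillary data is reclaimed
```

Because the soft-deleted `images` row for ID X survives the purge, a later attempt to create an image with that ID would collide with the existing record, which is exactly the protection the OSSN relies on.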

### Contacts / References ###
Author: Travis McPeak, IBM
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0075
Original LaunchPad Bug : https://bugs.launchpad.net/glance/+bug/1593799/
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg






Re: [openstack-dev] [heat] resigning from heat-cores

2016-09-14 Thread Rico Lin
Thank you, Pavlo, for everything you have contributed to Heat. Looking forward
to any chance to work together cross-project.

2016-09-14 14:39 GMT+08:00 Qiming Teng :

> Thank you very much for the helps. Good luck on your new journey!
>
> - Qiming
>
> On Mon, Sep 12, 2016 at 03:35:05PM +0300, Pavlo Shchelokovskyy wrote:
> > Hi Heaters,
> >
> > with great regret I announce my resignation from the heat-core team.
> >
> > About a year ago I was reassigned to another project, and despite my best
> > efforts I came to conclusion that unfortunately I can not keep up with
> > duties expected from Heat core team member in appropriate capacity.
> >
> > I do still work on OpenStack, so I'm not leaving the community
> altogether,
> > and will be available in e.g. IRC. I also have some ideas left to
> implement
> > in Heat, but, given the great community we've built around the project, I
> > could surely make it as an ordinary contributor.
> >
> > It was an honor to be a member of this team, I’ve learned a lot during
> this
> > time. Hope to see some of you in Barcelona :)
> >
> > Best regards,
> >
> > Dr. Pavlo Shchelokovskyy
> > Senior Software Engineer
> > Mirantis Inc
> > www.mirantis.com
>
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
May The Force of OpenStack Be With You,



*Rico Lin*
*Chief OpenStack Technologist, inwinSTACK*
irc: ricolin


Re: [openstack-dev] [heat] resigning from heat-cores

2016-09-14 Thread Qiming Teng
Thank you very much for the helps. Good luck on your new journey!

- Qiming

On Mon, Sep 12, 2016 at 03:35:05PM +0300, Pavlo Shchelokovskyy wrote:
> Hi Heaters,
> 
> with great regret I announce my resignation from the heat-core team.
> 
> About a year ago I was reassigned to another project, and despite my best
> efforts I came to conclusion that unfortunately I can not keep up with
> duties expected from Heat core team member in appropriate capacity.
> 
> I do still work on OpenStack, so I'm not leaving the community altogether,
> and will be available in e.g. IRC. I also have some ideas left to implement
> in Heat, but, given the great community we've built around the project, I
> could surely make it as an ordinary contributor.
> 
> It was an honor to be a member of this team, I’ve learned a lot during this
> time. Hope to see some of you in Barcelona :)
> 
> Best regards,
> 
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-14 Thread Vikas Choudhary
On Wed, Sep 14, 2016 at 11:22 AM, Liping Mao (limao) 
wrote:

> You have a valid point regarding ipvlan support in newer kernel versions
>> but IIUC overlay mode might not help if nic has a limit on max number of
>> macs that it supports in hardware.
>>
>for example: http://www.brocade.com/content/html/en/
> configuration-guide/fastiron-08030b-securityguide/GUID-
> ED71C989-6295-4175-8CFE-7EABDEE83E1F.html
> 
> Thanks, Vikas, for pointing this out.  Yes, it may cause a problem if the
> MACs of containers are exposed to the hardware switch.
> In the overlay case, AFAIK, the hardware should not learn container MACs, as
> they are inside the VXLAN (GRE) encapsulation.
>

gotcha, thanks Liping.

What is your opinion on the unicast MAC limit that some drivers impose,
which can enable promiscuous mode on the VM if the number of macvlan
interfaces crosses a certain limit, and thus may degrade performance by
accepting all the multicast/broadcast traffic within the subnet?

ipvlan has problems with DHCP and IPv6. I think it's a topic worth
discussing.

-Vikas

>
>
> Regards,
> Liping Mao
>
> From: Vikas Choudhary 
> Reply-To: OpenStack List 
> Date: Wednesday, September 14, 2016, 1:10 PM
>
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>
>
>
> On Wed, Sep 14, 2016 at 10:33 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>>
>>
>> On Wed, Sep 14, 2016 at 9:39 AM, Liping Mao (limao) 
>> wrote:
>>
>>> > Though, not the best person to comment on macvlan vs ipvlan, one
>>> limitation of macvlan is that on physical interfaces, the maximum possible
>>> number of random MAC generations may not cope with a large number of
>>> containers on the same VM.
>>>
>>> Thanks, yes, it is a limitation, Vikas.
>>> This happens if you use VLAN as the tenant network. If the tenant network
>>> uses overlay mode, it may be a little bit better for the MAC problem.
>>> The reason I mention macvlan as one of the choices is that ipvlan needs a
>>> very new kernel; it may be a little bit hard to use in a prod env (AFAIK).
>>>
>>
>> You have a valid point regarding ipvlan support in newer kernel versions
>> but IIUC overlay mode might not help if nic has a limit on max number of
>> macs that it supports in hardware.
>>
>for example: http://www.brocade.com/content/html/en/configuration-
> guide/fastiron-08030b-securityguide/GUID-ED71C989-
> 6295-4175-8CFE-7EABDEE83E1F.html
> 
>
>>
>>
>
>>
>>
>>>
>>> Regards,
>>> Liping Mao
>>>
>>> From: Vikas Choudhary 
>>> Reply-To: OpenStack List 
>>> Date: Wednesday, September 14, 2016, 11:50 AM
>>>
>>> To: OpenStack List 
>>> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>>>
>>>
>>>
>>> On Wed, Sep 14, 2016 at 7:10 AM, Liping Mao (limao) 
>>> wrote:
>>>
 Hi Ivan and Gary,

 maybe we can use macvlan, as ipvlan needs a very new kernel.
 allow-address-pairs can also allow different MACs in a VM.
 Do we consider macvlan here? Thanks.

>>>
>>> Though, not the best person to comment on macvlan vs ipvlan, one
>>> limitation of macvlan is that on physical interfaces, the maximum possible
>>> number of random MAC generations may not cope with a large number of
>>> containers on the same VM.
>>>
>>>

 Regards,
 Liping Mao

 From: Liping Mao 
 Reply-To: OpenStack List 
 Date: Tuesday, September 13, 2016, 9:09 PM
 To: OpenStack List 

 Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

 Hi Gary,

 I mean maybe that can be one choice in my mind.

 A security group applies to each neutron port; in this case, all the Docker
 containers on one VM will share one neutron port (if I understand correctly),
 so they will share the security group on that port. It is not per-container
 per-security-group, so I am not sure how to use security groups in this case.

 Regards,
 Liping Mao

 在 2016年9月13日,20:31,Loughnane, Gary  写道:

 Hi Liping,



 Thank you for the feedback!



 Do you mean to have disabled security groups as an optional
 configuration for Kuryr?

 Do you have any opinion on the consequences/acceptability of disabling
 SG?



 Regards,

 Gary



 *From:* Liping Mao (limao) [mailto:li...@cisco.com ]
 *Sent:* Tuesday, September 13, 2016 12:56 PM
 *To:* OpenStack Development Mailing List (not for usage questions) <