Re: [openstack-dev] [neutron][taas] Asynchronous TaaS APIs

2016-03-22 Thread reedip banerjee
Hi Anil,

> I think we should adopt an asynchronous model, where we maintain the state for
> tap-service and tap-flow objects. Valid states could be "created",
> "create-pending" and "failed." In addition, we will need a suitable mechanism
> to have the plugin extract the current state from the agent/driver and
> provide it to the end-user.

I think we may also need a "pending-update" state, if there is any plan for
updating a tap-flow/tap-service in the future.

But yes, such states should exist, as most of the processing should not block
the user (i.e. the user should not have to wait for a CLI/UI operation to
complete), especially if a lot of processing is required.

> For the former case, subsequent queries of the object's state will indicate if
> the operation has completed, is still pending or has failed.

Instead of polling, a callback can act as an interrupt and inform the
frontend about the success or failure of a job.
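
A minimal sketch of that callback idea (purely illustrative; none of these
functions exist in TaaS today):

    # The agent/driver would invoke this when it finishes processing, so the
    # plugin can record the terminal state and push it to the frontend rather
    # than having the client poll the object's status.
    def report_operation_result(tap_service, success, notify_frontend=None):
        tap_service.status = STATUS_CREATED if success else STATUS_FAILED
        if notify_frontend:
            notify_frontend(tap_service.id, tap_service.status)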


-- 
Thanks and Regards,
Reedip Banerjee
IRC: reedip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Using in-memory database for unit tests

2016-03-22 Thread Vega Cai
On 22 March 2016 at 12:09, Shinobu Kinjo  wrote:

> Thank you for your comment (inline for my message).
>
> On Tue, Mar 22, 2016 at 11:53 AM, Vega Cai  wrote:
> > Let me try to explain some.
> >
> > On 22 March 2016 at 10:09, Shinobu Kinjo  wrote:
> >>
> >> On Tue, Mar 22, 2016 at 10:22 AM, joehuang  wrote:
> >> > Hello, Shinobu,
> >> >
> >> > Yes, as you described here, the "initialize" in "core.py" is used for
> >> > unit/functional tests only. For system integration tests (for example,
> >> > Tempest), it would be better to use a MySQL-like DB; this is done by the
> >> > configuration in the DB part.
> >>
> >> Thank you for your thought.
> >>
> >> >
> >> > From my point of view, the tricircle DB part could be enhanced in the
> >> > DB model and migration scripts. Currently the unit tests use the DB
> >> > model to initialize the database, but not the migration scripts,
> >>
> >> I'm assuming the migration scripts are in "tricircle/db". Is it right?
> >
> >
> > migration scripts are in tricircle/db/migrate_repo
> >>
> >>
> >> What is the DB model?
> >> Why do we need 2-way-methods at the moment?
> >
> >
> > DB models are defined in tricircle/db/models.py. Models.py defines tables
> > at the object level, so other modules can import models.py and then operate
> > the tables by operating the objects. Migration scripts define tables at the
> > table level: you define table fields and constraints in the scripts, then
> > the migration tool reads the scripts and builds the tables.
>
> Dose "models.py" manage database schema(e.g., create / delete columns,
> tables, etc)?
>

In "models.py" we only define database schema. SQLAlchemy provides
functionality to create tables based on schema definition, which is
"ModelBase.metadata.create_all". This is used to initialized the in-memory
database for tests currently.
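
As a minimal, self-contained sketch (the model below is hypothetical, not one
of tricircle's actual models), this is the pattern the tests rely on:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    ModelBase = declarative_base()

    class ExampleResource(ModelBase):
        # Hypothetical model purely for illustration.
        __tablename__ = 'example_resources'
        id = sa.Column(sa.Integer, primary_key=True)
        name = sa.Column(sa.String(255))

    # In-memory SQLite engine, as used for the unit tests.
    engine = sa.create_engine('sqlite://')
    # Build every table known to ModelBase.metadata from the model definitions.
    ModelBase.metadata.create_all(engine)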

>
> > The migration tool has a feature to generate migration scripts from the DB
> > models automatically, but it sometimes makes mistakes, so currently we
> > manually maintain the table structure in both the DB models and the
> > migration scripts.
>
> Is the *migration tool* different from both the DB models and the migration
> scripts?
>

The migration tool is Alembic, a lightweight database migration tool for use
with SQLAlchemy:

https://alembic.readthedocs.org/en/latest/

It runs migration scripts to update the database schema. Each database version
has one migration script. After defining the "upgrade" and "downgrade" methods
in the script, you can move your database from one version to another. Alembic
isn't aware of the DB models defined in "models.py"; users need to guarantee
that the version of the database and the version of "models.py" match.

If you create a new database, either "ModelBase.metadata.create_all" or Alembic
can be used. But Alembic can also be used to upgrade an existing database to a
specific schema version.
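
For illustration, a minimal Alembic migration script of the kind kept under
tricircle/db/migrate_repo might look like this (the table and revision ids
here are made up, not real tricircle revisions):

    from alembic import op
    import sqlalchemy as sa

    # Revision identifiers used by Alembic to order the scripts.
    revision = '0002_add_example_resources'
    down_revision = '0001_init'

    def upgrade():
        # Move the schema forward one version.
        op.create_table(
            'example_resources',
            sa.Column('id', sa.String(36), primary_key=True),
            sa.Column('name', sa.String(255), nullable=False))

    def downgrade():
        # Revert this version's change.
        op.drop_table('example_resources')

Running "alembic upgrade head" then applies such scripts in order, which is
what makes Alembic usable against an existing database as well as a fresh one.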

>
> >>
> >>
> >> > so the migration scripts can only be tested when using devstack for
> >> > integration tests. It would be better to use the migration scripts to
> >> > instantiate the DB, so that they are tested in the unit tests too.
> >>
> >> If I understand you correctly, we are moving forward to using the
> >> migration scripts for both unit and integration tests.
> >>
> >> Cheers,
> >> Shinobu
> >>
> >> >
> >> > (Also move the discussion to the openstack-dev mail-list)
> >> >
> >> > Best Regards
> >> > Chaoyi Huang ( joehuang )
> >> >
> >> > -Original Message-
> >> > From: Shinobu Kinjo [mailto:ski...@redhat.com]
> >> > Sent: Tuesday, March 22, 2016 7:43 AM
> >> > To: joehuang; khayam.gondal; zhangbinsjtu; shipengfei92; newypei;
> >> > Liuhaixia; caizhiyuan (A); huangzhipeng
> >> > Subject: Using in-memory database for unit tests
> >> >
> >> > Hello,
> >> >
> >> > In "initialize" method defined in "core.py", we're using *in-memory*
> >> > strategy making use of sqlite. AFAIK we are using this solution for
> only
> >> > testing purpose. Unit tests using this solution should be fine for
> small
> >> > scale environment. But it's not good enough even it's for testing.
> >> >
> >> > What do you think?
> >> > Any thought, suggestion would be appreciated.
> >> >
> >> > [1]
> >> > https://github.com/openstack/tricircle/blob/master/tricircle/db/core.py#L124-L127
> >> >
> >> > Cheers,
> >> > Shinobu
> >> >
> >> >
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >> --
> >> Email:
> >> shin...@linux.com
> >> GitHub:
> >> shinobu-x
> >> Blog:
> >> Life with Distributed Computational System based on OpenSource
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:

Re: [openstack-dev] [tricircle] playing tricircle with two node configuration

2016-03-22 Thread Shinobu Kinjo
On Wed, Mar 23, 2016 at 12:41 PM, joehuang  wrote:
> Hi, Yipei,
>
>
>
> When you play with Tricircle, it's important to know that Tricircle is the
> OpenStack API gateway to other OpenStack instances. In the README, Pod1 and
> Pod2 are two OpenStack instances. Before trying Tricircle, you can check
> whether the environment is healthy by running commands separately against
> Pod1 and Pod2, e.g. "nova --os-region-name Pod1 ..." or "nova --os-region-name
> Pod2 ...". Because Pod1 and Pod2 are two normal OpenStack instances, any
> command against Pod1 or Pod2 should succeed; otherwise there is some issue in
> the installation of the environment itself. Only when each bottom OpenStack
> works correctly should you add Tricircle, either manually or through the
> scripts on GitHub that install Tricircle automatically, as the API gateway to
> Pod1 and Pod2, just like adding a load balancer in front of your multiple web
> servers.

Yeah, the above explanation is really essential for Tricircle.

>
>
>
> After Tricircle is added, the API calls will flow from the Tricircle services
> (Nova-APIGW/Cinder-APIGW/Neutron API) to the bottom Pod1 and Pod2.
>
>
>
> So if you run "nova boot" and some error happens, you can ask questions such as:
>
> 1.  Is the command sent to the Tricircle Nova-APIGW?
>
> 2.  What will the Nova-APIGW do for the next step?
>
> 3.  Is the API request forwarded by Tricircle correctly to the proper
> bottom OpenStack?
>
> 4.  Is the bottom OpenStack working normally even without Tricircle?
>
> 5.  Does the API request forwarded by Tricircle include the correct
> request content?
>
> 6.  …
>
>
>
> You can carry this map with you before you try to fix the issue: break the
> big system down into smaller parts, and check in order which parts work fine
> and which do not.
>
>
>
> From the information you provided, I can't judge whether the error occurred
> in the Tricircle services or in a bottom pod (or which pod). We don't know at
> which step the error occurred, and we don't know the request information or
> how the request will be routed and processed; a lot of context is needed to
> diagnose an error.
>
>
>
> Best Regards
>
> Chaoyi Huang ( Joe Huang )
>
>
>
> From: Yipei Niu [mailto:newy...@gmail.com]
> Sent: Wednesday, March 23, 2016 10:36 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: joehuang; Zhiyuan Cai
> Subject: [tricircle] playing tricircle with two node configuration
>
>
>
> Hi, Joe and Zhiyuan,
>
>
>
> I have already finished installing devstack in two nodes with tricircle. I
> encounter some errors when testing cross-pod L3 networking with DevStack. I
> followed the README.md in github, every thing goes well until I boot virtual
> machines with the following command:
>
>
>
> nova boot --flavor 1 --image 60a8184b-a4be-463d-a8a1-48719edc37a3 --nic
> net-id=76356099-f3bd-40a5-83bd-600b78b671eb --availability-zone az1 vm1
>
>
>
> The info in the terminal is as follows:
>
> Your request was processed by a Nova API which does not support
> microversions (X-OpenStack-Nova-API-Version header is missing from
> response). Warning: Response may be incorrect.
>
> Your request was processed by a Nova API which does not support
> microversions (X-OpenStack-Nova-API-Version header is missing from
> response). Warning: Response may be incorrect.
>
> Your request was processed by a Nova API which does not support
> microversions (X-OpenStack-Nova-API-Version header is missing from
> response). Warning: Response may be incorrect.
>
> ERROR (ClientException): Unknown Error (HTTP 500)
>
>
>
> I run rejoin-stack.sh and find some error in n-api screen. In n-api.log, the
> error is as follows:
>
> 2016-03-22 19:19:38.248 ERROR nova.api.openstack.extensions [req-cf58e7aa-bd7d-483f-aa57-bca5268ce963 admin admin] Unexpected exception in API method
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions Traceback (most recent call last):
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     return func(*args, **kwargs)
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     return func(*args, **kwargs)
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions

Re: [openstack-dev] [tricircle] playing tricircle with two node configuration

2016-03-22 Thread Yipei Niu
OK. Got it. Thanks a lot for your help!

Best regards,
Yipei

On Wed, Mar 23, 2016 at 11:41 AM, joehuang  wrote:

> Hi, Yipei,
>
>
>
> When you play with Tricircle, it's important to know that Tricircle is the
> OpenStack API gateway to other OpenStack instances. In the README, Pod1 and
> Pod2 are two OpenStack instances. Before trying Tricircle, you can check
> whether the environment is healthy by running commands separately against
> Pod1 and Pod2, e.g. "nova --os-region-name Pod1 ..." or "nova --os-region-name
> Pod2 ...". Because Pod1 and Pod2 are two normal OpenStack instances, any
> command against Pod1 or Pod2 should succeed; otherwise there is some issue in
> the installation of the environment itself. Only when each bottom OpenStack
> works correctly should you add Tricircle, either manually or through the
> scripts on GitHub that install Tricircle automatically, as the API gateway to
> Pod1 and Pod2, just like adding a load balancer in front of your multiple web
> servers.
>
>
>
> After Tricircle is added, the API calls will flow from the Tricircle services
> (Nova-APIGW/Cinder-APIGW/Neutron API) to the bottom Pod1 and Pod2.
>
>
>
> So if you run "nova boot" and some error happens, you can ask questions such as:
>
> 1.  Is the command sent to the Tricircle Nova-APIGW?
>
> 2.  What will the Nova-APIGW do for the next step?
>
> 3.  Is the API request forwarded by Tricircle correctly to the proper
> bottom OpenStack?
>
> 4.  Is the bottom OpenStack working normally even without Tricircle?
>
> 5.  Does the API request forwarded by Tricircle include the correct
> request content?
>
> 6.  …
>
>
>
> You can carry this map with you before you try to fix the issue: break the
> big system down into smaller parts, and check in order which parts work fine
> and which do not.
>
>
>
> From the information you provided, I can't judge whether the error occurred
> in the Tricircle services or in a bottom pod (or which pod). We don't know at
> which step the error occurred, and we don't know the request information or
> how the request will be routed and processed; a lot of context is needed to
> diagnose an error.
>
>
>
> Best Regards
>
> Chaoyi Huang ( Joe Huang )
>
>
>
> *From:* Yipei Niu [mailto:newy...@gmail.com]
> *Sent:* Wednesday, March 23, 2016 10:36 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* joehuang; Zhiyuan Cai
> *Subject:* [tricircle] playing tricircle with two node configuration
>
>
>
> Hi, Joe and Zhiyuan,
>
>
>
> I have already finished installing devstack in two nodes with tricircle. I
> encounter some errors when testing cross-pod L3 networking with DevStack. I
> followed the README.md in github, every thing goes well until I boot
> virtual machines with the following command:
>
>
>
> nova boot --flavor 1 --image 60a8184b-a4be-463d-a8a1-48719edc37a3 --nic
> net-id=76356099-f3bd-40a5-83bd-600b78b671eb --availability-zone az1 vm1
>
>
>
> The info in the terminal is as follows:
>
> Your request was processed by a Nova API which does not support
> microversions (X-OpenStack-Nova-API-Version header is missing from
> response). Warning: Response may be incorrect.
>
> Your request was processed by a Nova API which does not support
> microversions (X-OpenStack-Nova-API-Version header is missing from
> response). Warning: Response may be incorrect.
>
> Your request was processed by a Nova API which does not support
> microversions (X-OpenStack-Nova-API-Version header is missing from
> response). Warning: Response may be incorrect.
>
> ERROR (ClientException): Unknown Error (HTTP 500)
>
>
>
> I run rejoin-stack.sh and find some error in n-api screen. In n-api.log,
> the error is as follows:
>
> 2016-03-22 19:19:38.248 ERROR nova.api.openstack.extensions [req-cf58e7aa-bd7d-483f-aa57-bca5268ce963 admin admin] Unexpected exception in API method
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions Traceback (most recent call last):
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     return func(*args, **kwargs)
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     return func(*args, **kwargs)
> 2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions

[openstack-dev] [nova] how to recover instances with different hypervisor-id on the same compute node

2016-03-22 Thread Rahul Sharma
Hi All,

Due to a hostname change, we ended up with a new hypervisor-id for one of our
compute nodes. The node was already running two instances, and we didn't catch
the updated hypervisor-id at the time, so new instances started spawning with
the new hypervisor-id.

Now, the instances with the older hypervisor-id are not seen in "virsh list",
and rebooting those instances leaves them stuck rebooting.

[root (admin)]# nova hypervisor-servers compute-90
+--------------------------------------+---------------+---------------+---------------------+
| ID                                   | Name          | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+---------------+---------------+---------------------+
| 9688dcef-2836-496f-8b70-099638b73096 | instance-0712 | 40            | compute-90.test.edu |
| f7373cd6-96a0-4643-9137-732ea5353e94 | instance-0b74 | 40            | compute-90.test.edu |
| c8926585-a260-45cd-b008-71df2124b364 | instance-1270 | 92            | compute-90.test.edu |
| a0aa3f5f-d49b-43a6-8465-e7865bb68d57 | instance-18de | 92            | compute-90.test.edu |
| d729f9f4-fcae-4abe-803c-e9474e533a3b | instance-16e0 | 92            | compute-90.test.edu |
| 30a6a05d-a170-4105-9987-07a875152907 | instance-17e4 | 92            | compute-90.test.edu |
| 6e0fa25b-569d-4e9e-b57d-4c182c1c23ea | instance-18f8 | 92            | compute-90.test.edu |
| 5964f6cc-eec3-493a-81fe-7fb616c89a8f | instance-18fa | 92            | compute-90.test.edu |
+--------------------------------------+---------------+---------------+---------------------+

[root@compute-90]# virsh list
 Id    Name               State
----------------------------------
 112   instance-1270      running
 207   instance-18de      running
 178   instance-16e0      running
 189   instance-17e4      running
 325   instance-18f8      running
 336   instance-18fa      running

Instances not visible: instance-0712 and instance-0b74

Is there a way to recover from this state? I can delete the old services
(nova service-delete ), but I am unsure whether that will lead to the loss of
the already-running instances with the old hypervisor-id. Is there a way I can
update the state of those instances to use hypervisor-id 92 instead of 40?
Kindly do let me know if you have any suggestions.

Thanks.

*Rahul Sharma*
*MS in Computer Science, 2016*
College of Computer and Information Science, Northeastern University
Mobile:  801-706-7860
Email: rahulsharma...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]weekly meeting of Mar. 23

2016-03-22 Thread joehuang
Hi,

IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on every
Wednesday starting at UTC 13:00.

Agenda:
# Mitaka release preparation
# Newton release features discussion
# Link: https://etherpad.openstack.org/p/TricircleToDo

Best Regards
Chaoyi Huang ( Joe Huang )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] playing tricircle with two node configuration

2016-03-22 Thread joehuang
Hi, Yipei,

When you play with Tricircle, it's important to know that Tricircle is the
OpenStack API gateway to other OpenStack instances. In the README, Pod1 and Pod2
are two OpenStack instances. Before trying Tricircle, you can check whether the
environment is healthy by running commands separately against Pod1 and Pod2,
e.g. "nova --os-region-name Pod1 ..." or "nova --os-region-name Pod2 ...".
Because Pod1 and Pod2 are two normal OpenStack instances, any command against
Pod1 or Pod2 should succeed; otherwise there is some issue in the installation
of the environment itself. Only when each bottom OpenStack works correctly
should you add Tricircle, either manually or through the scripts on GitHub that
install Tricircle automatically, as the API gateway to Pod1 and Pod2, just like
adding a load balancer in front of your multiple web servers.
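
As a minimal sketch of that per-region sanity check (the auth URL and
credentials below are assumptions for a devstack setup, and this is just an
illustration of running "nova --os-region-name Pod1 list" against each pod,
not part of Tricircle itself):

    from keystoneauth1 import loading, session
    from novaclient import client

    def check_region(region_name):
        # Hypothetical devstack-style credentials; adjust to your environment.
        loader = loading.get_plugin_loader('password')
        auth = loader.load_from_options(
            auth_url='http://127.0.0.1:5000/v3',
            username='admin', password='password', project_name='admin',
            user_domain_id='default', project_domain_id='default')
        nova = client.Client('2.1', session=session.Session(auth=auth),
                             region_name=region_name)
        # Any simple read-only call proves the bottom OpenStack answers.
        return nova.servers.list()

    for pod in ('Pod1', 'Pod2'):
        print(pod, check_region(pod))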

After Tricircle is added, the API calls will flow from the Tricircle services
like Nova-APIGW/Cinder-APIGW/Neutron API to the bottom Pod1 and Pod2.

So if you run "nova boot" and some error happens, you can ask questions such as:

1.  Is the command sent to the Tricircle Nova-APIGW?

2.  What will the Nova-APIGW do for the next step?

3.  Is the API request forwarded by Tricircle correctly to the proper
bottom OpenStack?

4.  Is the bottom OpenStack working normally even without Tricircle?

5.  Does the API request forwarded by Tricircle include the correct request
content?

6.  …

You can carry this map with you before you try to fix the issue: break the big
system down into smaller parts, and check in order which parts work fine and
which do not.

From the information you provided, I can't judge whether the error occurred in
the Tricircle services or in a bottom pod (or which pod). We don't know at which
step the error occurred, and we don't know the request information or how the
request will be routed and processed; a lot of context is needed to diagnose an
error.

Best Regards
Chaoyi Huang ( Joe Huang )

From: Yipei Niu [mailto:newy...@gmail.com]
Sent: Wednesday, March 23, 2016 10:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: joehuang; Zhiyuan Cai
Subject: [tricircle] playing tricircle with two node configuration

Hi, Joe and Zhiyuan,

I have already finished installing devstack on two nodes with Tricircle. I
encountered some errors when testing cross-pod L3 networking with DevStack. I
followed the README.md on GitHub; everything goes well until I boot virtual
machines with the following command:

nova boot --flavor 1 --image 60a8184b-a4be-463d-a8a1-48719edc37a3 --nic 
net-id=76356099-f3bd-40a5-83bd-600b78b671eb --availability-zone az1 vm1

The info in the terminal is as follows:
Your request was processed by a Nova API which does not support microversions 
(X-OpenStack-Nova-API-Version header is missing from response). Warning: 
Response may be incorrect.
Your request was processed by a Nova API which does not support microversions 
(X-OpenStack-Nova-API-Version header is missing from response). Warning: 
Response may be incorrect.
Your request was processed by a Nova API which does not support microversions 
(X-OpenStack-Nova-API-Version header is missing from response). Warning: 
Response may be incorrect.
ERROR (ClientException): Unknown Error (HTTP 500)

I ran rejoin-stack.sh and found some errors in the n-api screen. In n-api.log,
the error is as follows:
2016-03-22 19:19:38.248 ERROR nova.api.openstack.extensions [req-cf58e7aa-bd7d-483f-aa57-bca5268ce963 admin admin] Unexpected exception in API method
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions Traceback (most recent call last):
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     return func(*args, **kwargs)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     return func(*args, **kwargs)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/servers.py", line 604, in create
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     **create_kwargs)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/hooks.py", line 149, in inner

[openstack-dev] [Neutron]Debug with Pycharm

2016-03-22 Thread Nguyen Hoai Nam
Hi everybody,
Has anyone configured PyCharm to debug the Neutron project? I configured it,
but it's not working. If you have any guide or notes, could you please share
them with other OpenStackers?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections] PTL Voting is now open

2016-03-22 Thread Tony Breeds
On Fri, Mar 18, 2016 at 11:13:25AM +1100, Tony Breeds wrote:
> Elections are underway and will remain open for you to cast your vote until
> 2016-03-24, 23:59 UTC

We've been informed that CIVS doesn't work correctly with a number of languages
important in our community, including but not limited to Russian.

If you're unable to vote and are receiving the following error:
 http://paste.openstack.org/show/491226/

Please temporarily alter your language settings to prefer English while you 
vote.

Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Plan for changes affecting DB schema

2016-03-22 Thread na...@vn.fujitsu.com
Thanks for your answer. 

-Original Message-
From: Henry Gessau [mailto:hen...@gessau.net] 
Sent: Tuesday, March 22, 2016 8:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Plan for changes affecting DB schema

na...@vn.fujitsu.com  wrote:
> Two weeks ago, I received information about changes affecting the DB
> schema [1] from Henry Gessau just a day before the deadline. I was
> surprised by this and could not change my plan for my patch sets.
> Do you know of any plan for this?

There should be no surprises. Neutron follows the OpenStack release schedule 
[1]. For Mitaka, it looked like [2].

> In the future, do you have a plan for this in the Newton cycle?

The Newton release schedule is at [3], although the details are still being
planned. The detailed dates should be available soon.

[1] http://releases.openstack.org
[2] http://releases.openstack.org/mitaka/schedule.html
[3] http://releases.openstack.org/newton/schedule.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] playing tricircle with two node configuration

2016-03-22 Thread Yipei Niu
Hi, Joe and Zhiyuan,

I have already finished installing devstack on two nodes with Tricircle. I
encountered some errors when testing cross-pod L3 networking with DevStack. I
followed the README.md on GitHub; everything goes well until I boot
virtual machines with the following command:

nova boot --flavor 1 --image 60a8184b-a4be-463d-a8a1-48719edc37a3 --nic
net-id=76356099-f3bd-40a5-83bd-600b78b671eb --availability-zone az1 vm1

The info in the terminal is as follows:
Your request was processed by a Nova API which does not support
microversions (X-OpenStack-Nova-API-Version header is missing from
response). Warning: Response may be incorrect.
Your request was processed by a Nova API which does not support
microversions (X-OpenStack-Nova-API-Version header is missing from
response). Warning: Response may be incorrect.
Your request was processed by a Nova API which does not support
microversions (X-OpenStack-Nova-API-Version header is missing from
response). Warning: Response may be incorrect.
ERROR (ClientException): Unknown Error (HTTP 500)

I ran rejoin-stack.sh and found some errors in the n-api screen. In n-api.log,
the error is as follows:
2016-03-22 19:19:38.248 ERROR nova.api.openstack.extensions [req-cf58e7aa-bd7d-483f-aa57-bca5268ce963 admin admin] Unexpected exception in API method
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions Traceback (most recent call last):
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     return func(*args, **kwargs)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     return func(*args, **kwargs)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/servers.py", line 604, in create
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     **create_kwargs)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/hooks.py", line 149, in inner
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     rv = f(*args, **kwargs)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 1504, in create
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     check_server_group_quota=check_server_group_quota)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 1097, in _create_instance
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     auto_disk_config, reservation_id, max_count)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 871, in _validate_and_build_base_options
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     pci_request_info, requested_networks)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/network/neutronv2/api.py", line 981, in create_pci_requests_for_sriov_ports
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     neutron = get_client(context, admin=True)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/network/neutronv2/api.py", line 149, in get_client
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     _ADMIN_AUTH = _load_auth_plugin(CONF)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/network/neutronv2/api.py", line 125, in _load_auth_plugin
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions     raise neutron_client_exc.Unauthorized(message=err_msg)
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions Unauthorized: Unknown auth plugin: None
2016-03-22 19:19:38.248 TRACE nova.api.openstack.extensions
2016-03-22 20:04:19.992 INFO nova.api.openstack.wsgi [req-ed35efe8-5dc0-40b0-bb2b-c1a73618aa50 admin

Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-22 Thread Georgy Okrokvertskhov
On Tue, Mar 22, 2016 at 6:01 PM, Fox, Kevin M  wrote:

> +1 for TripleO taking a look at Kolla.
>
> Some random thoughts:
>
> I'm in the middle of deploying a new cloud and I couldn't use either
> TripleO or Kolla for various reasons. A few reasons for each:
>  * TripleO - worries me when it comes to ever having to do a major upgrade of
> the software, or needing to do oddball configs like vxlans over ipoib.
>  * Kolla - At the time it was still immature. No stable artefacts posted. The
> database container recently broke, there is little documentation for disaster
> recovery, and there was no upgrade strategy at the time.
>
> Kolla rearchitected recently to support oddball configs like we've had to
> do at times. They also recently gained upgrade support. I think they are on
> the right path. If I had to start fresh, I'd very seriously consider using
> it.
>
> I think Kolla can provide the missing pieces that TripleO needs. TripleO
> has bare metal deployment down solid. I really like the idea of using
> OpenStack to deploy OpenStack. Kolla is now OpenStack so should be
> considered.
>
> I'm also in favor of using Magnum to deploy a COE to manage Kolla. I'm
> much less thrilled about Mesos though. It feels heavyweight enough that it
> seems like you're deploying an OpenStack-like system just to deploy
> OpenStack. So, OpenStack On NotOpenStack On OpenStack. :/ I've had good
> luck with Kubernetes (much simpler) recently and am disappointed that it
> was too immature at the time Kolla originally considered it. It seems much
> more feasible to use now. I use net=host-like features all the time, which
> was a major sticking point before.
>

Frankly speaking, I think the Kolla project is doing the right thing by keeping
its components decoupled. There are two parts to the Kolla-Mesos approach: use a
micro-services approach inside the containers (service discovery,
self-configuration, etc.) and use Mesos/Marathon definitions to control
container placement. If you want to use Kubernetes you can still use the Kolla
images; you will just need to write simple Pod/ReplicationGroup definitions.
OpenStack services will configure themselves by using a central configuration
storage (ZooKeeper for now, but other backends are on their way in an
oslo.config BP).

>
> I'd be interested in seeing TripleO use the Ansible version for now since
> that's working, stable, and supports upgrades/oddball configs. Then in the
> future as Kubernetes support or maybe Mesos support matures, consider that.
> Kolla's going to have to have a migration path from one to the other
> eventually... I think this would allow TripleO to really come into its own
> as an end-to-end, production-ready system sooner.
>
> Thanks,
> Kevin
>
>
>
>
> ___
> From: Emilien Macchi [emil...@redhat.com]
> Sent: Tuesday, March 22, 2016 5:18 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of
> Heat, containers, and the future of TripleO
>
> This is quite a long mail, I'll reply to some statements inline.
>
> On Mon, Mar 21, 2016 at 4:14 PM, Zane Bitter  wrote:
> > tl;dr Containers represent a massive, and also mandatory, opportunity for
> > TripleO. Lets start thinking about ways that we can take maximum
> advantage
> > to achieve the goals of the project.
> >
> > Now that you have the tl;dr I'm going to start from the beginning, so
> settle
> > in and grab yourself a cup of coffee or other poison of your choice.
> >
> > After working on developing Heat from the very beginning of the project
> in
> > early 2012 and debugging a bunch of TripleO deployments in the field, it
> is
> > my considered opinion that Heat is a poor fit for the workloads that
> TripleO
> > is currently asking of it. To illustrate why, I need to explain what it
> is
> > that Heat is really designed to do.
> >
> > Here's a theoretical example of how I've always imagined Heat software
> > deployments would make Heat users' lives better. For simplicity, I'm just
> > going to model two software components, a user-facing service that
> connects
> > to some back-end service:
> >
> >   resources:
> > backend_component:
> >   type: OS::Heat::SoftwareComponent
> >   properties:
> > configs:
> >   - tool: script
> > actions:
> >   - CREATE
> >   - UPDATE
> > config: |
> >   PORT=$(get_backend_port || random_port)
> >   stop_backend
> >   start_backend $DEPLOY_VERSION $PORT $CONFIG
> >   addr="$(hostname):$(get_backend_port)"
> >   printf '%s' "$addr" >${heat_outputs_path}.host_and_port
> >   - tool: script
> > actions:
> >   - DELETE
> > config: |
> >stop_backend
> >  inputs:
> >- name: DEPLOY_VERSION
> >- name: 

Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-22 Thread Fox, Kevin M
+1 for TripleO taking a look at Kolla.

Some random thoughts:

I'm in the middle of deploying a new cloud and I couldn't use either TripleO or
Kolla for various reasons. A few reasons for each:
 * TripleO - worries me when it comes to ever having to do a major upgrade of
the software, or needing to do oddball configs like vxlans over ipoib.
 * Kolla - At the time it was still immature. No stable artefacts posted. The
database container recently broke, there is little documentation for disaster
recovery, and there was no upgrade strategy at the time.

Kolla rearchitected recently to support oddball configs like we've had to do at 
times. They also recently gained upgrade support. I think they are on the right 
path. If I had to start fresh, I'd very seriously consider using it.

I think Kolla can provide the missing pieces that TripleO needs. TripleO has 
bare metal deployment down solid. I really like the idea of using OpenStack to 
deploy OpenStack. Kolla is now OpenStack so should be considered.

I'm also in favor of using Magnum to deploy a COE to manage Kolla. I'm much less
thrilled about Mesos though. It feels heavyweight enough that it seems like
you're deploying an OpenStack-like system just to deploy OpenStack. So,
OpenStack On NotOpenStack On OpenStack. :/ I've had good luck with Kubernetes
(much simpler) recently and am disappointed that it was too immature at the time
Kolla originally considered it. It seems much more feasible to use now. I use
net=host-like features all the time, which was a major sticking point before.

I'd be interested in seeing TripleO use the Ansible version for now since that's
working, stable, and supports upgrades/oddball configs. Then in the future as
Kubernetes support or maybe Mesos support matures, consider that. Kolla's going
to have to have a migration path from one to the other eventually... I think
this would allow TripleO to really come into its own as an end-to-end,
production-ready system sooner.

Thanks,
Kevin




___
From: Emilien Macchi [emil...@redhat.com]
Sent: Tuesday, March 22, 2016 5:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, 
containers, and the future of TripleO

This is quite a long mail, I'll reply to some statements inline.

On Mon, Mar 21, 2016 at 4:14 PM, Zane Bitter  wrote:
> tl;dr Containers represent a massive, and also mandatory, opportunity for
> TripleO. Lets start thinking about ways that we can take maximum advantage
> to achieve the goals of the project.
>
> Now that you have the tl;dr I'm going to start from the beginning, so settle
> in and grab yourself a cup of coffee or other poison of your choice.
>
> After working on developing Heat from the very beginning of the project in
> early 2012 and debugging a bunch of TripleO deployments in the field, it is
> my considered opinion that Heat is a poor fit for the workloads that TripleO
> is currently asking of it. To illustrate why, I need to explain what it is
> that Heat is really designed to do.
>
> Here's a theoretical example of how I've always imagined Heat software
> deployments would make Heat users' lives better. For simplicity, I'm just
> going to model two software components, a user-facing service that connects
> to some back-end service:
>
>   resources:
> backend_component:
>   type: OS::Heat::SoftwareComponent
>   properties:
> configs:
>   - tool: script
> actions:
>   - CREATE
>   - UPDATE
> config: |
>   PORT=$(get_backend_port || random_port)
>   stop_backend
>   start_backend $DEPLOY_VERSION $PORT $CONFIG
>   addr="$(hostname):$(get_backend_port)"
>   printf '%s' "$addr" >${heat_outputs_path}.host_and_port
>   - tool: script
> actions:
>   - DELETE
> config: |
>stop_backend
>  inputs:
>- name: DEPLOY_VERSION
>- name: CONFIG
>  outputs:
>- name: host_and_port
>
> frontend_component:
>   type: OS::Heat::SoftwareComponent
>   properties:
> configs:
>   - tool: script
> actions:
>   - CREATE
>   - UPDATE
> config: |
>   stop_frontend
>   start_frontend $DEPLOY_VERSION $BACKEND_ADDR $CONFIG
>   - tool: script
> actions:
>   - DELETE
> config: |
>   stop_frontend
> inputs:
>   - name: DEPLOY_VERSION
>   - name: BACKEND_ADDR
>   - name: CONFIG
>
> backend:
>   type: OS::Heat::SoftwareDeployment
>   properties:
> server: {get_resource: backend_server}
> name: {get_param: backend_version} # Forces upgrade replacement
> actions: [CREATE, UPDATE, DELETE]
> config: 

[openstack-dev] [release]how to release an non-official project in Mitaka

2016-03-22 Thread joehuang
Hi,

Thanks for the help. There is a plan for not only Tricircle but also Kingbird to
do a release in Mitaka; neither of them is an official OpenStack project yet.
The question is whether these projects can leverage the facility at
https://github.com/openstack/releases to do a release, whether there is any
guide on how new projects can do the release work by themselves, or whether just
tagging is enough.

Since the question is common to non-official projects, the discussion has been
opened on the openstack-dev mailing list.

Best Regards
Chaoyi Huang ( Joe Huang )

From: Rochelle Grober
Sent: Wednesday, March 23, 2016 7:09 AM
To: Rochelle Grober; joehuang; huangzhipeng; Gordon Chung; Thierry Carrez
Subject: RE: RE: RE: RE: Mitaka Release of Tricircle

I pinged Thierry on IRC (he's Release Core) about how to perform and announce 
the release.  He said he'd be "Happy to consult" on how to do a good citizen 
release and announcement for Tricircle.

I've included him on the thread so he gets the context of our discussion thus 
far.

--Rocky

From: Rochelle Grober
Sent: Tuesday, March 22, 2016 9:01 AM
The best place to ask this question is on IRC in the #openstack-release channel.
Doug Hellmann is in the UTC-4 time zone (EDT) and on all day. He goes by
dhellmann and is the release PTL. This is a great question for TC discussion,
too. I'll see if there is time during open discussion.

But the important thing is to announce the release the same way projects do,
with the tricircle tag in the subject and identification that Tricircle is not
part of the big tent, but is part of the ecosystem.

--Rocky


Sent from HUAWEI AnyOffice
From: Gordon Chung
Hi,

The release:managed tag is only applicable to big-tent projects, so this won't
work for Tricircle. I do know there is discussion about possibly expanding it to
non-big-tent projects, but that has not been decided yet.
--
gord

From: joehuang
Hi Gord,
Thanks for your information.
But even if a release patch is sent with PTL sign-off, will the release
procedure be done automatically for a non-official project?

Sent from HUAWEI AnyOffice
From: Gordon Chung


Hi huangzhipeng,

It means that if Tricircle wants to create a release, the PTL of the project
needs to either send the patch to openstack/releases, or +1 the patch if someone
else sends a release patch.

--
gord

From: huangzhipeng

Hi Gord,

What is a PTL sign off ?

Sent from HUAWEI AnyOffice
From:Gordon Chung


Sorry, hit send too early. From what I can tell, the release:managed tag is only
applicable to big-tent projects, so this won't work for Tricircle. I do know
there is discussion about possibly expanding it to non-big-tent projects, but
that has not been decided yet.

--
gord

From: joehuang

Hello,

We have a plan for the first release of Tricircle in Mitaka. Tricircle is not an
official project yet. What shall we do for the release? Tagging alone may not be
enough.

Is it possible for a non-official project to use the facility
https://github.com/openstack/releases to do a release?

Best Regards
Chaoyi Huang ( Joe Huang )

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][ptls] Newton summit planning etherpad

2016-03-22 Thread Amrith Kumar
I've added Keystone to the subject line.

Keystone folks, please see email below from Steve!

-amrith

> -Original Message-
> From: Anita Kuno [mailto:ante...@anteaya.info]
> Sent: Tuesday, March 22, 2016 6:08 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Newton summit planning etherpad
> 
> On 03/22/2016 05:42 PM, Steve Martinelli wrote:
> >
> > The summit planning etherpad for Keystone is here:
> > https://etherpad.openstack.org/p/keystone-newton-summit-brainstorm
> >
> > Please brainstorm / toss ideas up / discuss Newton cycle goals
> >
> > Thanks,
> >
> > Steve Martinelli
> > OpenStack Keystone Project Team Lead
> >
> >
> >
> > __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> You may want to add keystone as a tag in your subject line otherwise you
> might get suggestions you aren't expecting.
> 
> Thanks,
> Anita.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newton summit planning etherpad

2016-03-22 Thread Matt Riedemann



On 3/22/2016 5:07 PM, Anita Kuno wrote:

On 03/22/2016 05:42 PM, Steve Martinelli wrote:


The summit planning etherpad for Keystone is here:
https://etherpad.openstack.org/p/keystone-newton-summit-brainstorm

Please brainstorm / toss ideas up / discuss Newton cycle goals

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


You may want to add keystone as a tag in your subject line otherwise you
might get suggestions you aren't expecting.

Thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I made my contribution.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Streamline adoption of Magnum

2016-03-22 Thread Monty Taylor

On 03/22/2016 06:27 PM, Kevin Carter wrote:

Comments in-line.

On 03/22/2016 02:46 PM, Adrian Otto wrote:

Team,

This thread is a continuation of a branch of the previous High
Availability thread [1]. As the Magnum PTL, I’ve been aware of a number
of different groups who have started using Magnum in recent months. For
various reasons, there have been multiple requests for information about
how to turn off the dependency on Barbican, which we use for secure
storage of TLS certificates that are used to secure communications
between various components of the software hosted on Magnum Bay
resources. Examples of this are Docker Swarm, and Kubernetes, which we
affectionately refer to as COEs (Container Orchestration Engines). The
only alternative to Barbican currently offered in Magnum is a local file
option, which is only intended to be used for testing, as the
certificates are stored unencrypted on a local filesystem where the
conductor runs, and when you use this option, you can’t scale beyond a
single conductor.

Although our whole community agrees that using Barbican is the right
long term solution for deployments of Magnum, we still wish to make the
friction of adopting Magnum to be as low as possible without completely
compromising all security best practices.


/me wearing my deployer hat now: Many of my customers and product folks
want Magnum, but they also want Magnum to be as secure and stable as
possible. If Barbican is the best long-term solution for the project, it
would make sense to me that Magnum remain on course with Barbican as the
de facto way of deploying in production. IMHO building alternative means
for certificate management is a distraction and will only confuse folks
looking to deploy Magnum into production.


I'm going to agree. This reminds me of people who didn't want to run 
keystone back in the day. Those people were a distraction, and placating 
them hampered OpenStack's progress by probably several years.



Some ops teams are willing to
adopt a new service, but not two. They only want to add Magnum and not
Barbican.


It would seem to me that once the Barbican dependency is well
documented, which it should be at this point, Barbican should be easy to
accept, especially with an understanding of why it is needed. Many of
the deployment projects are creating the automation needed to make the
adoption of services simpler, and I'd imagine deployment automation is
the largest hurdle to widespread adoption for both Barbican and Magnum.
If the OPS team you mention does not want both services, it would seem
they can use the "local file" option; this is similar to Cinder+LVM and
Glance+file_store, both of which have operational scaling issues in production.


Agree.


We think that once those operators become familiar with
Magnum, adding Barbican will follow. In the mean time, we’d like to
offer a Barbican alternative that allows Magnum to scale beyond one
conductor, and allows for encrypted storage of TLS credentials needed
for unattended bay operations.


If all of this is to simplify or provide for the developer/"someone
kicking the tires" use case I'd imagine the "local file" storage would
be sufficient. If the acceptance of Barbican is too much to handle or
introduce into an active deployment (I'm not sure why that would be
especially if they're adding Magnum), the synchronization of locally
stored certificates across multiple hosts is manageable and can be
handled by a very long list of other pre-existing operational means.


A blueprint [2] was recently proposed to
address this. We discussed this in our team meeting today [3], where we
used an etherpad [4] to collaborate on options that could be used as
alternatives besides the ones offered today. This thread is not intended
to answer how to make Barbican easier to adopt, but rather how to make
Magnum easier to adopt while keeping Barbican as the default
best-practice choice for certificate storage.


I'd like there _NOT_ to be an "easy button" way for operators to hang
themselves in production by following a set of "quick start
instructions" under the guise of "easy to adopt". If Barbican is the
best practice, let's keep it that way. If for some reason Barbican is hard
to adopt, let's identify those difficulties and get them fixed. Going down
the path of NIH or alternative less secure solutions because someone
(not identified here or speaking for themselves) has said they don't
want Barbican or deploying it is hard seems like a recipe for
fragmentation and disaster.


Agree.


I want to highlight that the implementation of the spec referenced by
Daneyon Hansen in his quoted response below was completed in the Liberty
release timeframe, and communication between COE components is now
secured using TLS. We are discussing the continued use of TLS for
encrypted connections between COE components, but potentially using
Keystone tokens for authentication between clients and COE’s rather than
using TLS for both encryption and authentication. Further 

Re: [openstack-dev] [magnum] Streamline adoption of Magnum

2016-03-22 Thread Kevin Carter
Comments in-line.

On 03/22/2016 02:46 PM, Adrian Otto wrote:
> Team,
>
> This thread is a continuation of a branch of the previous High
> Availability thread [1]. As the Magnum PTL, I’ve been aware of a number
> of different groups who have started using Magnum in recent months. For
> various reasons, there have been multiple requests for information about
> how to turn off the dependency on Barbican, which we use for secure
> storage of TLS certificates that are used to secure communications
> between various components of the software hosted on Magnum Bay
> resources. Examples of this are Docker Swarm, and Kubernetes, which we
> affectionately refer to as COEs (Container Orchestration Engines). The
> only alternative to Barbican currently offered in Magnum is a local file
> option, which is only intended to be used for testing, as the
> certificates are stored unencrypted on a local filesystem where the
> conductor runs, and when you use this option, you can’t scale beyond a
> single conductor.
>
> Although our whole community agrees that using Barbican is the right
> long term solution for deployments of Magnum, we still wish to make the
> friction of adopting Magnum to be as low as possible without completely
> compromising all security best practices.

/me wearing my deployer hat now: Many of my customers and product folks
want Magnum, but they also want Magnum to be as secure and stable as
possible. If Barbican is the best long-term solution for the project, it
would make sense to me that Magnum remain on course with Barbican as the
de facto way of deploying in production. IMHO building alternative means
for certificate management is a distraction and will only confuse folks
looking to deploy Magnum into production.

> Some ops teams are willing to
> adopt a new service, but not two. They only want to add Magnum and not
> Barbican.

It would seem to me that once the Barbican dependency is well
documented, which it should be at this point, Barbican should be easy to
accept, especially with an understanding of why it is needed. Many of
the deployment projects are creating the automation needed to make the
adoption of services simpler, and I'd imagine deployment automation is
the largest hurdle to widespread adoption for both Barbican and Magnum.
If the OPS team you mention does not want both services, it would seem
they can use the "local file" option; this is similar to Cinder+LVM and
Glance+file_store, both of which have operational scaling issues in production.

> We think that once those operators become familiar with
> Magnum, adding Barbican will follow. In the mean time, we’d like to
> offer a Barbican alternative that allows Magnum to scale beyond one
> conductor, and allows for encrypted storage of TLS credentials needed
> for unattended bay operations.

If all of this is to simplify or provide for the developer/"someone 
kicking the tires" use case I'd imagine the "local file" storage would 
be sufficient. If the acceptance of Barbican is too much to handle or 
introduce into an active deployment (I'm not sure why that would be 
especially if they're adding Magnum), the synchronization of locally 
stored certificates across multiple hosts is manageable and can be 
handled by a very long list of other pre-existing operational means.

> A blueprint [2] was recently proposed to
> address this. We discussed this in our team meeting today [3], where we
> used an etherpad [4] to collaborate on options that could be used as
> alternatives besides the ones offered today. This thread is not intended
> to answer how to make Barbican easier to adopt, but rather how to make
> Magnum easier to adopt while keeping Barbican as the default
> best-practice choice for certificate storage.

I'd like there _NOT_ to be an "easy button" way for operators to hang
themselves in production by following a set of "quick start
instructions" under the guise of "easy to adopt". If Barbican is the
best practice, let's keep it that way. If for some reason Barbican is hard
to adopt, let's identify those difficulties and get them fixed. Going down
the path of NIH or alternative, less secure solutions because someone
(not identified here or speaking for themselves) has said they don't
want Barbican or that deploying it is hard seems like a recipe for
fragmentation and disaster.

> I want to highlight that the implementation of the spec referenced by
> Daneyon Hansen in his quoted response below was completed in the Liberty
> release timeframe, and communication between COE components is now
> secured using TLS. We are discussing the continued use of TLS for
> encrypted connections between COE components, but potentially using
> Keystone tokens for authentication between clients and COE’s rather than
> using TLS for both encryption and authentication. Further notes on this
> are available in the etherpad [4].
>
> I ask that you please review the options under consideration, note your
> remarks in the etherpad [4], and continue 

Re: [openstack-dev] Hummingbird Roadmap

2016-03-22 Thread David Goetz
I don't have the original email but this is in reply to:


Hi all,
I was wondering what the roadmap for Hummingbird is.
Will development continue?  Will support continue?  Is it expected to reach
feature parity or even replace the Python code?

Thank you,
Avishay

Yes - we are in active development of Hummingbird.  It is (mostly) a drop-in 
replacement for the python-swift object server, but it is not currently 100% 
compatible and we are not recommending that people use it in their production 
environments.  For example, it does not support Storage Policies; that, however, 
is going to be one of the things we work on soon.  Long term, I am not sure if 
it will ever replace the Python version.  I think this is something we should 
continue talking about - especially at the upcoming OpenStack Summit in Austin. 
The overall goal of the project (imo at least) is to be fast, simple, easy to 
deploy, failure tolerant, and ultra scalable, even at the cost of feature parity 
with swift.  We will be trying to add some more documentation about the project 
soon, but everything is still a WIP. Thanks for your interest,

David
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-brick][nova][cinder] os-brick/privsep change is done and awaiting your review

2016-03-22 Thread Angus Lees
On Sat, 19 Mar 2016 at 06:27 Matt Riedemann 
wrote:

> I stared pretty hard at the nova rootwrap filter change today [1] and
> tried to keep that in my head along with the devstack change and the
> changes to os-brick (which depend on the devstack/cinder/nova changes).
> And with reading the privsep-helper command code in privsep itself.
>
> I realize this is a bridge to fix the tightly couple lockstep upgrade
> issue between cinder and nova, but it would be super helpful, at least
> for me, to chart out how that nova rootwrap filter change fits into the
> bigger picture, like what calls what and how, where are things used, etc.
>
> I see devstack passing on the os-brick change so I'm inclined to almost
> blindly approve to just keep moving, but I'd feel bad about that. Would
> it be possible to flow chart this out somewhere?
>

Sorry for all the confusion Matt.  I obviously explained it poorly in my
gerrit reply to you and I presume also in the parts of the oslo spec that
you've read, so I'll try another explanation here:

privsep fundamentally involves two processes - the regular (nova, whatever)
unprivileged code, and a companion Unix process running with some sort of
elevated privileges (different uid/gid, extra Linux capabilities,
whatever).  These two processes talk to each other over a Unix socket in
the obvious way.

*Conceptually* the companion privileged process is a fork from the
unprivileged process - in that the python environment (oslo.config, etc)
tries to be as similar as possible and writing code that runs in the
privileged process looks just like python defined in the original process
but with a particular decorator.

privsep has two modes of setting up this split-process-with-ipc-channel
arrangement:
- One is to use a true fork(), which follows the traditional Unix daemon
model of starting with full privileges (from init or whatever) and then
dropping privileges later - this avoids sudo, is more secure (imo), and is
a whole lot simpler in the privsep code, but requires a change to the way
OpenStack services are deployed, and a function call at the top of main()
before dropping privileges.
- The second is to invoke sudo or sudo+rootwrap from the unprivileged
process to run the "privsep-helper" command that you see in this change.
This requires no changes to the way OpenStack services are deployed, so is
the method I'm recommending atm.  (We may never actually use the fork()
method tbh given how slowly things change in OpenStack.)  It is completely
inconsequential whether this uses sudo or sudo+rootwrap - it just affects
whether you need to add a line to sudoers or rootwrap filters.  I chose
rootwrap filter here because I thought we had greater precedent for that
type of change.

So hopefully that makes the overall picture clear:  We need this nova
rootwrap filter addition so privsep-helper can use sudo+rootwrap to become
root, so it can switch to the right set of elevated privileges, so we can
run the relevant privsep-decorated privileged functions in that privileged
environment.
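
For anyone who hasn't looked at the library yet, here is a minimal sketch of
what a privileged context and an entrypoint-decorated function look like with
oslo.privsep.  The context and function names are made up for illustration;
they are not the actual os-brick definitions:

# Hypothetical example - not the real os-brick context.
from oslo_privsep import capabilities as caps
from oslo_privsep import priv_context

# One "privileged context" == one companion privileged process.  Each
# context can carry its own Linux capabilities (e.g. os-vif would want
# CAP_NET_ADMIN, while os-brick wants storage-related capabilities).
storage = priv_context.PrivContext(
    __name__,
    cfg_section='privsep_storage',
    pypath=__name__ + '.storage',
    capabilities=[caps.CAP_SYS_ADMIN],
)

@storage.entrypoint
def read_device_header(device_path):
    # This body runs inside the privileged companion process; the
    # unprivileged caller just calls read_device_header('/dev/sdb') and
    # privsep ships the call over the unix socket and returns the result.
    with open(device_path, 'rb') as f:
        return f.read(512)

The rootwrap filter change is only the bootstrap step: it lets privsep-helper
become root once so it can start that companion process with the capabilities
declared on the context.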

I also have a concern in there about how the privsep-helper rootwrap
> command in nova is only using the os-brick context. What happens if
> os-vif and nova need to share common rootwrap commands? At the midcycle
> Jay and Daniel said there weren't any coming up soon, but if that
> happens, how do we handle it?


privsep is able to have different "privileged contexts", which can each run
as different uids and with different Linux capabilities.  In practice each
context has its own privileged process, and if we're using the
sudo(+rootwrap) and privsep-helper method, then each context will want its
own line in sudoers or rootwrap filters.
It is expected that most OpenStack services would only have one or maybe
two different contexts, but nova may end up with a few more because it has
its fingers in so many different pies.  So yes, we'll want another entry
similar to this one for os-vif - presumably os-vif will want CAP_NET_ADMIN,
whereas os-brick wants various storage device-related capabilities.


Again, I'm disappointed the relevant section of the privsep spec failed to
explain the above sufficiently - if this conversation helps clarify it for
you, *please* suggest some better wording for the spec.  It seems
(understandably!) no-one wants to approve even the smallest self-contained
privsep-related change without understanding the entire overall process, so
I feel like I've had the above conversation about 10 times now.  It would
greatly improve everyone's productivity if we can get the spec (or some new
doc) to a place where it can become the place where people learn about
privsep, and they don't have to wait for me to reply with poorly summarised
versions.

 - Gus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [oslo][nova] Messaging: everything can talk to everything, and that is a bad thing

2016-03-22 Thread Adam Young

On 03/22/2016 05:42 PM, Dan Smith wrote:

Shouldn't we be trying to remove central bottlenecks by
decentralizing communications where we can?

I think that's a good goal to continue having. Some deployers have
setup firewalls between compute nodes, or between compute nodes and
the database, so we use the conductor to facilitate communications
between those nodes. But in general we don't want to send all
communications through the conductor.

Yep, I think we generally look forward to having all the resize and
migrate communication coordinated through conductor, but not really for
security reasons specifically. However, I don't think that pumping
everything through conductor for, say, api->compute communication is
something we should do.


So, API to compute is probably fine as is.  I assume that most of that 
goes in the same queue as the conductor uses.


This assumes that we equally trust conductor and the API server, but I 
think if either is compromised, all bets are off anyway.




As several of us said in IRC yesterday, I'd really like nodes to be able
to authenticate the sender of a message and not do things based on who
sent it and whether that makes sense or not.
I read that as "we want to do HMAC outside of the Queue" and, as I said 
before, we tried that.  No one picked it up; key distribution is a 
nightmare, and unless you do asymmetric cryptography, you need to have a 
separate shared secret for each reader and writer:  there is no pub-sub 
with symmetric crypto.
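
(For anyone who missed that earlier effort, the kind of per-message signing
being referred to is roughly the following sketch.  The envelope format is
invented for illustration and is not the oslo.messaging wire format:

# Illustrative sketch only.
import hashlib
import hmac
import json

def sign(shared_secret, payload):
    # shared_secret is bytes, known to both sender and receiver.
    body = json.dumps(payload, sort_keys=True).encode('utf-8')
    sig = hmac.new(shared_secret, body, hashlib.sha256).hexdigest()
    return {'body': payload, 'hmac': sig}

def verify(shared_secret, envelope):
    body = json.dumps(envelope['body'], sort_keys=True).encode('utf-8')
    expected = hmac.new(shared_secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope['hmac'])

verify() only works when sender and receiver already hold the same secret,
and anyone holding the secret can forge messages - which is exactly the
key-distribution problem, and why this never covers the pub-sub case.)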


And we should not be rolling our own security.


  Adding a bunch of
broker-specific configuration requirements to achieve a security goal
(and thus assuming the queue is never compromised) is not really where I
want to see us go.


Nothing here is broker specific.  The rules are the same for Rabbit, 
QPID and 0MQ.


Message Brokers are a key piece of technology in a lot of enterprise 
software. It is possible to secure them.  Denying the operators the 
ability to secure them because we don't trust the brokers is not fair to 
the operators.




--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newton summit planning etherpad

2016-03-22 Thread Anita Kuno
On 03/22/2016 05:42 PM, Steve Martinelli wrote:
> 
> The summit planning etherpad for Keystone is here:
> https://etherpad.openstack.org/p/keystone-newton-summit-brainstorm
> 
> Please brainstorm / toss ideas up / discuss Newton cycle goals
> 
> Thanks,
> 
> Steve Martinelli
> OpenStack Keystone Project Team Lead
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
You may want to add keystone as a tag in your subject line otherwise you
might get suggestions you aren't expecting.

Thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable][ironic] ironic-python-agent 1.0.2 release (liberty)

2016-03-22 Thread no-reply
We are pumped to announce the release of:

ironic-python-agent 1.0.2: Ironic Python Agent Ramdisk

This release is part of the liberty stable release series.

For more details, please see below.

1.0.2
^^^^^

Bug Fixes

* This enables virtual media deploy even if virtual floppy device
  name is capitalized to "/dev/disk/by-label/IR-VFD-DEV". see
  https://bugs.launchpad.net/ironic/+bug/1541167 for details.

Changes in ironic-python-agent 1.0.1..1.0.2
---

2acf7a7 determine tgtd ready status through tgtadm
72185e0 Fix vfd mount for capitalized device name

Diffstat (except docs and test files)
-

ironic_python_agent/extensions/iscsi.py| 23 
ironic_python_agent/utils.py   |  9 -
...r-capitalized-device-name-db7f519e900f4e22.yaml |  4 ++
5 files changed, 75 insertions(+), 41 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-22 Thread Ian Cordasco
 

-Original Message-
From: Alan Pevec 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: March 22, 2016 at 14:21:47
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [release] [pbr] semver on master branches after 
RC WAS Re: How do I calculate the semantic version prior to a release?

> > The release team discussed this at the summit and agreed that it didn't 
> > really matter.  
> The only folks seeing the auto-generated versions are those doing CD from 
> git, and they  
> should not be mixing different branches of a project in a given environment. 
> So I don't  
> think it is strictly necessary to raise the major version, or give pbr the 
> hint to do so.  
>  
> ok, I'll send confused RDO trunk users here :)
> That means until first Newton milestone tag is pushed, master will
> have misleading version. Newton schedule is not defined yet but 1st
> milestone is normally 1 month after Summit, and 2 months from now is
> rather large window.

This affects other OpenStack projects like the OpenStack Ansible project, which 
builds from trunk and does periodic upgrades from the latest stable branch to 
whatever is running on master. Further, they're using pip, and this will 
absolutely cause headaches when upgrading.
--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Messaging: everything can talk to everything, and that is a bad thing

2016-03-22 Thread Dan Smith
>> Shouldn't we be trying to remove central bottlenecks by
>> decentralizing communications where we can?
> 
> I think that's a good goal to continue having. Some deployers have
> setup firewalls between compute nodes, or between compute nodes and
> the database, so we use the conductor to facilitate communications
> between those nodes. But in general we don't want to send all
> communications through the conductor.

Yep, I think we generally look forward to having all the resize and
migrate communication coordinated through conductor, but not really for
security reasons specifically. However, I don't think that pumping
everything through conductor for, say, api->compute communication is
something we should do.

As several of us said in IRC yesterday, I'd really like nodes to be able
to authenticate the sender of a message and not do things based on who
sent it and whether that makes sense or not. Adding a bunch of
broker-specific configuration requirements to achieve a security goal
(and thus assuming the queue is never compromised) is not really where I
want to see us go.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Newton summit planning etherpad

2016-03-22 Thread Steve Martinelli

The summit planning etherpad for Keystone is here:
https://etherpad.openstack.org/p/keystone-newton-summit-brainstorm

Please brainstorm / toss ideas up / discuss Newton cycle goals

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Messaging: everything can talk to everything, and that is a bad thing

2016-03-22 Thread Adam Young

On 03/22/2016 09:15 AM, Flavio Percoco wrote:

On 21/03/16 21:43 -0400, Adam Young wrote:

I had a good discussion with the Nova folks in IRC today.

My goal was to understand what could talk to what, and the short answer, 
according to dansmith, is:


" any node in nova land has to be able to talk to the queue for any 
other one for the most part: compute->compute, compute->conductor, 
conductor->compute, api->everything. There might be a few exceptions, 
but not worth it, IMHO, in the current architecture."


Longer conversation is here:
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-03-21.log.html#t2016-03-21T17:54:27 



Right now, the message queue is a nightmare.  All sorts of sensitive 
information flows over the message queue: Tokens (including admin) 
are the most obvious.  Every piece of audit data. All notifications 
and all control messages.


Before we continue down the path of "anything can talk to anything" 
can we please map out what needs to talk to what, and why?  Many of 
the use cases seem to be based on something that should be kicked off 
by the conductor, such as "migrate, resize, live-migrate" and it 
sounds like there are plans to make that happen.


So, let's assume we can get to the point where, if node 1 needs to 
talk to node 2, it will do so only via the conductor.  With that in 
place, we can put an access control rule in place:


I don't think this is going to scale well. Eventually, this will require
evolving the conductor to some sort of message scheduler, which is 
pretty much

what the message bus is supposed to do.


I'll limit this to what happens with Rabbit and QPID (AMQP 1.0) and leave 
0mq out of it for now.  I'll use rabbit as shorthand for both of these, but 
the rules are the same for qpid.




For, say, a migrate operation, the call goes to the API, the controller, and 
eventually down to one of the compute nodes.  Source? Target?  I don't 
know the code well enough to say, but let's say it is the source.  It 
sends an RPC message to the target node.  The message goes to the 
central broker right now, and then back down to the target node.  
Meanwhile, the source node has set up a reply queue and that queue name 
has gone into the message.  The target machine responds by getting a 
reference to the response queue and sending a message.  This message goes 
up to the broker, and then down to the source node.


A man in the middle could sit there and also read off the queue.  It 
could modify a message, with its own response queue, and happily transfer 
things back and forth.


So, we have the HMAC proposal, which then puts crypto and key 
distribution all over the place.  Yes, it would guard against a MITM 
attack, but the cost in complexity and processor time is high.



Rabbit does not have a very flexible ACL scheme: basically, a RegEx per 
Rabbit user.  However, we could easily spin up a new queue for direct 
node-to-node communication that did meet an ACL regex.  For example, if 
we said that the regex was that the node could only read/write queues 
that have its name in them, then to make a request and response queue 
between node-1 and node-2 we could create the queues



node-1-node-2
node-1-node-2--reply


So, instead of a single request queue, there are two.  And the conductor 
could tell the target node: start listening on this queue.



Or, we could pass the message through the conductor.  The request 
message goes from node-1 to the conductor, where the conductor validates 
the business logic of the message, then puts it into the message queue for 
node-2.  Responses can then go directly back from node-2 to node-1 the 
way they do now.


OR...we could set up a direct socket between the two nodes, with the 
socket set up info going over the broker.  OR we could use a web 
server,  OR send it over SNMP.  Or SMTP, OR TFTP.  There are many ways 
to get the messages from node to node.


If  we are going to use the message broker to do this, we should at 
least make it possible to secure it, even if it is not the default approach.


It might be possible to use a broker specific technology to optimize 
this, but I am not a Rabbit expert.  Maybe there is some way of 
filtering messages?





1.  Compute nodes can only read from their own queue, 
compute.<hostname>, e.g. compute.<node>-novacompute-<N>.localdomain 
(see the sketch below).

2.  Compute nodes can only write to response queues in the RPC vhost.
3.  Compute nodes can only write to notification queues in the 
notification vhost.
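
To make rule 1 concrete, the compute.<hostname> queue exists because each
compute service starts its RPC server on a host-scoped target.  A rough
sketch with the oslo.messaging API - topic, host and method names here are
illustrative, not the exact nova wiring:

from oslo_config import cfg
import oslo_messaging

class ComputeEndpoint(object):
    def build_instance(self, ctxt, instance_id):
        # Only requests addressed to this host's queue land here.
        return 'built %s' % instance_id

def start_compute_rpc_server(hostname):
    transport = oslo_messaging.get_transport(cfg.CONF)
    # topic='compute', server=<hostname> is what yields the
    # "compute.<hostname>" queue that rule 1 would scope a node to.
    target = oslo_messaging.Target(topic='compute', server=hostname)
    server = oslo_messaging.get_rpc_server(
        transport, target, [ComputeEndpoint()], executor='blocking')
    server.start()
    return server

def call_specific_host(hostname, instance_id):
    transport = oslo_messaging.get_transport(cfg.CONF)
    client = oslo_messaging.RPCClient(
        transport, oslo_messaging.Target(topic='compute'))
    # The reply comes back on a dynamically created reply_<uuid> queue,
    # which is what rule 2 has to allow the responding node to write to.
    return client.prepare(server=hostname).call(
        {}, 'build_instance', instance_id=instance_id)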


I know that with AMQP, we should be able to identify the writer of a 
message.  This means that each compute node should have its own 
user.  I have identified how to do that for Rabbit and QPid.  I 
assume for 0mq it would make sense to use ZAP 
(http://rfc.zeromq.org/spec:27) but I'd rather the 0mq maintainers 
chime in here.




NOTE: Gentle reminder that qpidd has been removed from oslo.messaging.


Yes, but QPID with proton is AMQP 1.0, and I did a proof of concept with it 
last summer.  It supports encryption and authentication over GSSAPI and 
Re: [openstack-dev] [nova] Trailing Slashes on URLs

2016-03-22 Thread Shoham Peller
Thank you for your answer Sean. Well understood.
However, I think that if we don't fix this issue, we should at least supply
guidelines on the matter.

For example tempest's "test_delete_security_group_without_passing_id" test
actually checks this behavior by addressing a security group with an "empty
id" (/v2.1/{tenant_id}/os-security-groups/), and expects nova to return
404. The comment on the code says:
# Negative test:Deletion of a Security Group with out passing ID should
Fail

If we release guidelines on the matter, documenting the right way to use the
API and stating that the trailing-slash issue is just a known bug as you say,
this test is rendered useless.

What do you think?
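
For reference, the difference is easy to reproduce with two plain HTTP calls;
the endpoint, tenant id and token below are placeholders:

import requests

NOVA = 'http://nova-api.example.com:8774'
TENANT = 'TENANT_ID'
HEADERS = {'X-Auth-Token': 'AUTH_TOKEN'}

# No trailing slash: the servers index, normally 200.
r1 = requests.get('%s/v2/%s/servers' % (NOVA, TENANT), headers=HEADERS)

# Trailing slash: routed as "show the server with an empty id", so 404.
r2 = requests.get('%s/v2/%s/servers/' % (NOVA, TENANT), headers=HEADERS)

print(r1.status_code, r2.status_code)  # e.g. 200 404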

On Tue, Mar 22, 2016 at 7:37 PM, Sean Dague  wrote:
>
> On 03/22/2016 12:34 PM, Shoham Peller wrote:
> > Hi,
> >
> > Nova-api behaves different whether a trailing-slash is given in the URL
> > or not.
> > These 2 requests produce different results:
> >
> >   * GET /v2/{tenant-id}/servers - gets a list of servers
> >   * GET /v2/{tenant-id}/servers/ - gets an info for a server with an
> > "empty id" - produces 404
> >
> > IMHO, a trailing slash in REST requests should be ignored.
> >
> > A few resources about REST and trailing slashes:
> >
http://programmers.stackexchange.com/questions/186959/trailing-slash-in-restful-api
> > https://github.com/nicolaiarocci/eve/issues/118
> >
> > What do you think of this issue?
> > Is this difference in behavior is on purpose?
>
> This is the way python routes works, it's pretty embedded behavior in
> libraries we don't control.
>
> While one could argue they should be equivalent, the docs are pretty
> straight forward about what the correct links are, and we only generate
> working links in our references.
>
> There are many issues that we should address in the API, this one
> probably doesn't crack the top 100.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Does the OpenStack community(or Cinder team) allow one driver to call another driver's public method?

2016-03-22 Thread Jon Bernard
* liuxinguo  wrote:
> Hi Cinder team,
> 
> We are going to implement storage-assisted volume migration in our
> driver between different backend storage arrays, or even arrays from
> different vendors.  This is much more efficient than the host-copy
> migration between arrays from different vendors.

Could you elaborate more on this?  During a volume migration operation
we give the driver an opportunity to more-intelligently relocate the
volume's data.  This is done through the migrate_volume() method defined
in the driver itself.  If this method exists, it will be called before
falling back to a byte-for-byte copy approach - and if it succeeds the
volume is considered migrated and the operation returns.
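
For reference, a rough sketch of what such a driver hook can look like.  The
_array_* helpers are hypothetical stand-ins for vendor API calls; only the
method signature and the (moved, model_update) return convention follow the
standard volume driver interface:

class ExampleDriver(object):

    def migrate_volume(self, ctxt, volume, host):
        # Return (True, model_update) when the backend handled the move,
        # or (False, None) so Cinder falls back to the host-assisted
        # byte-for-byte copy.
        dest_pool = host['capabilities'].get('pool_name')
        if not self._array_can_reach(dest_pool):
            return False, None

        new_location = self._array_move_lun(volume['name'], dest_pool)
        return True, {'provider_location': new_location}

    # Hypothetical vendor helpers:
    def _array_can_reach(self, pool):
        return pool is not None

    def _array_move_lun(self, lun_name, pool):
        return '%s:%s' % (pool, lun_name)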

Is this what you were looking for, or did you have something different
in mind?

-- 
Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Modules release-notes

2016-03-22 Thread Emilien Macchi
Today during our meeting we discussed release note management.

In Mitaka, we started to use Reno [1], here is an example with
puppet-openstack_spec_helper [2].
A few months ago, when we decided that we would use reno, we wanted
to educate people to use it, so this is what we decided to do when a
patch is submitted without a release note:

* if the patch is done by a member of the Puppet OpenStack core team, a -1
is acceptable if the release note is missing from the patch. As a team, we
need to lead by example.
* if the patch is done by someone outside our core team, instead of
-1, we encourage people to comment on the patch explaining that a
release note is missing, even proposing some help to write it, but
don't block the patch.

I encourage everyone to use reno, and if you're not familiar with the
tool, you can have a look at [3].
If you need any help, please ping us on IRC #puppet-openstack.

Thanks,

[1] http://docs.openstack.org/developer/reno/
[2] http://docs.openstack.org/releasenotes/puppet-openstack_spec_helper/
[3] http://docs.openstack.org/developer/reno/usage.html

-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [puppet] move puppet-pacemaker

2016-03-22 Thread Dmitry Ilyin
I've started my merging effort here
https://github.com/dmitryilyin/openstack-puppet-pacemaker

Can I change the interface of pcmk_resource?

You have pcmk_constraint but I have pcmk_location/colocation/order
separately. I can merge them into a single resource like you did
or I can keep them separated. Or I can make both. Actually they are
different enough to be separated.

Will I have to develop a 'pcs'-style provider for every resource? Do we
really need them?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][puppet] Switching fuel-library to consume stable/mitaka

2016-03-22 Thread Igor Belikov
Hi,

> Additionally we will want to ensure that the fuel-ci will work for the 
> upstream stable/mitaka branches as it currently[0] isn't working since we 
> haven't cut a stable/mitaka branch for Fuel yet.  Igor Belikov has been 
> notified of the current issues and is looking into a fix.


I rolled out the fix to Fuel CI and retriggered tests for the change, thanks 
for catching this quickly. We don’t plan to tie Fuel CI for puppet-openstack to 
Fuel branching dates, so Fuel CI will test changes to both master and 
stable/mitaka branches of puppet-openstack modules using master of fuel-library 
for now. There still might be some issues with stable/mitaka testing, so feel 
free to poke me or anyone in #fuel-infra and we’ll try to iron this out as 
quickly as possible.

--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com

> On 22 Mar 2016, at 22:42, Alex Schultz  wrote:
> 
> Hey everyone,
> 
> Emilien is in the process of cutting the stable/mitaka branches for all of 
> the upstream puppet modules.  As Fuel approaches SCF, we will want to switch 
> from the master branches we are currently tracking to leverage the 
> stable/mitaka branches.  In talking with some other folks, I believe the plan 
> is the following switch fuel-library to leverage stable/mitaka branches prior 
> to SCF.  Once we open master backup for Newton activities, we will switch the 
> fuel-library master branch back to tracking the upstream master branches 
> while leaving our stable/mitaka branch pointed to the upstream stable/mitaka 
> branches.
> 
> As far as these activities are concerned, I believe Ivan Berezovskiy will be 
> providing a patch to do the switch to stable/mitaka. Additionally we will 
> want to ensure that the fuel-ci will work for the upstream stable/mitaka 
> branches as it currently[0] isn't working since we haven't cut a 
> stable/mitaka branch for Fuel yet.  Igor Belikov has been notified of the 
> current issues and is looking into a fix.
> 
> Please raise any concerns or items that also need to be addressed.
> 
> Thanks,
> -Alex
> 
> [0] https://review.openstack.org/#/c/295976/ 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Puppet OpenStack Community Session in Austin Summit

2016-03-22 Thread Emilien Macchi
Puppet OpenStack group will have a Community Session during the next
Summit in Austin.

This session is great to gather feedback from anyone in OpenStack community:
* developers can ask questions about how to contribute.
* operators / users can give feedback on how the modules work in their deployments.

What we would like during this session is to get as much
feedback as possible about the Puppet OpenStack modules:
* what would you like us to develop / improve / fix during Newton?
* how do you use the modules? Do you use Hiera? Which version of
Puppet? Which operating system? Which OpenStack version?

If you are interested in this session, or if you can't attend it but
you have some feedback / questions you want us to discuss, please
have a look at:
https://etherpad.openstack.org/p/newton-community-puppet


The Puppet OpenStack team will do their best to gather feedback and address
it during the Newton cycle; your help is really important!

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Streamline adoption of Magnum

2016-03-22 Thread Adrian Otto
Team,

This thread is a continuation of a branch of the previous High Availability 
thread [1]. As the Magnum PTL, I’ve been aware of a number of different groups 
who have started using Magnum in recent months. For various reasons, there have 
been multiple requests for information about how to turn off the dependency on 
Barbican, which we use for secure storage of TLS certificates that are used to 
secure communications between various components of the software hosted on 
Magnum Bay resources. Examples of this are Docker Swarm, and Kubernetes, which 
we affectionately refer to as COEs (Container Orchestration Engines). The only 
alternative to Barbican currently offered in Magnum is a local file option, 
which is only intended to be used for testing, as the certificates are stored 
unencrypted on a local filesystem where the conductor runs, and when you use 
this option, you can’t scale beyond a single conductor.

Although our whole community agrees that using Barbican is the right long term 
solution for deployments of Magnum, we still wish to make the friction of 
adopting Magnum to be as low as possible without completely compromising all 
security best practices. Some ops teams are willing to adopt a new service, but 
not two. They only want to add Magnum and not Barbican. We think that once 
those operators become familiar with Magnum, adding Barbican will follow. In 
the mean time, we’d like to offer a Barbican alternative that allows Magnum to 
scale beyond one conductor, and allows for encrypted storage of TLS credentials 
needed for unattended bay operations. A blueprint [2] was recently proposed to 
address this. We discussed this in our team meeting today [3], where we used an 
etherpad [4] to collaborate on options that could be used as alternatives 
besides the ones offered today. This thread is not intended to answer how to 
make Barbican easier to adopt, but rather how to make Magnum easier to adopt 
while keeping Barbican as the default best-practice choice for certificate 
storage.

I want to highlight that the implementation of the spec referenced by Daneyon 
Hansen in his quoted response below was completed in the Liberty release 
timeframe, and communication between COE components is now secured using TLS. 
We are discussing the continued use of TLS for encrypted connections between 
COE components, but potentially using Keystone tokens for authentication 
between clients and COE’s rather than using TLS for both encryption and 
authentication. Further notes on this are available in the etherpad [4].

I ask that you please review the options under consideration, note your remarks 
in the etherpad [4], and continue discussion here as needed.

Thanks,

Adrian

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-March/089684.html
[2] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
[3] 
http://eavesdrop.openstack.org/meetings/containers/2016/containers.2016-03-22-16.01.html
[4] https://etherpad.openstack.org/p/magnum-barbican-alternative

On Mar 22, 2016, at 11:52 AM, Daneyon Hansen (danehans) 
> wrote:



From: Hongbin Lu >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Monday, March 21, 2016 at 8:19 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum] High Availability

Tim,

Thanks for your advice. I respect your point of view and we will definitely 
encourage our users to try Barbican if they see fits. However, for the sake of 
Magnum, I think we have to decouple from Barbican at current stage. The 
coupling of Magnum and Barbican will increase the size of the system by two (1 
project -> 2 project), which will significant increase the overall complexities.
· For developers, it incurs significant overheads on development, 
quality assurance, and maintenance.
· For operators, it doubles the amount of efforts of deploying and 
monitoring the system.
· For users, a large system is likely to be unstable and fragile which 
affects the user experience.
In my point of view, I would like to minimize the system we are going to ship, 
so that we can reduce the overheads of maintenance and provides a stable system 
to our users.

I noticed that there are several suggestions to “force” our users to install 
Barbican, which I would respectfully disagree. Magnum is a young project and we 
are struggling to increase the adoption rate. I think we need to be nice to our 
users, otherwise, they will choose our competitors (there are container service 
everywhere). Please understand that we are not a mature project, like Nova, who 
has thousands of users. We really don’t have the power to force our users to 

[openstack-dev] [fuel][puppet] Switching fuel-library to consume stable/mitaka

2016-03-22 Thread Alex Schultz
Hey everyone,

Emilien is in the process of cutting the stable/mitaka branches for all of
the upstream puppet modules.  As Fuel approaches SCF, we will want to
switch from the master branches we are currently tracking to leverage the
stable/mitaka branches.  In talking with some other folks, I believe the
plan is the following: switch fuel-library to leverage the stable/mitaka
branches prior to SCF.  Once we open master back up for Newton activities,
we will switch the fuel-library master branch back to tracking the upstream
master branches while leaving our stable/mitaka branch pointed to the
upstream stable/mitaka branches.

As far as these activities are concerned, I believe Ivan Berezovskiy will
be providing a patch to do the switch to stable/mitaka. Additionally we
will want to ensure that the fuel-ci will work for the upstream
stable/mitaka branches as it currently[0] isn't working since we haven't
cut a stable/mitaka branch for Fuel yet.  Igor Belikov has been notified of
the current issues and is looking into a fix.

Please raise any concerns or items that also need to be addressed.

Thanks,
-Alex

[0] https://review.openstack.org/#/c/295976/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cross-project] What are specifications?

2016-03-22 Thread Mike Perez
Hey all,

At the last Cross-Project meeting [1], I feel like there was some confusion
around what our specifications should be used for.

This mainly came up because of the rolling upgrade specification [2].

I want to clarify that this document exists as an incomplete specification that
was contributed by the product working group. The product working group's goal
is to capture use cases. Some of the use cases were seen as just a wish list,
which is fair. Originally I was hoping what the working group captured would
reflect what different projects are already doing today, not new proposals.

I think this document needs to be updated to be more accurate of that.

My takeaway from the discussion in the meeting with the people who are experts
in this area is that there is no one magic solution to this problem. However, if we
want to capture best practices with things like RPC versioning and other
problems a project might want to solve in this area, that's fine.

Best practices is nothing new in the cross-project specifications [3] repo.
Same goes for guidelines [4]. What's confusing and what fungi has brought to my
attention is the word "specification".

A proposal I would like to make:

* Clarify what cross-project specifications are [5].
  - I do think specifications and best practices need to exist.
+ Specifications feel like they're required, in my opinion. I think we want
  some cross-project things, like the service catalog, under here.
+ Best practices can be for rolling upgrades. While it's great if a project
  can do rolling upgrades and use our existing solutions, according to some
  experts there is no silver bullet to this problem.

I'm not sure why we need guidelines as well.

What do others think?

[1] - 
http://eavesdrop.openstack.org/meetings/crossproject/2016/crossproject.2016-03-15-21.01.log.html#l-259
[2] - https://review.openstack.org/#/c/290977/
[3] - 
http://specs.openstack.org/openstack/openstack-specs/specs/eventlet-best-practices.html
[4] - 
http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html
[5] - https://review.openstack.org/#/c/295940/

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] High Availability

2016-03-22 Thread Douglas Mendizábal
Hi Daneyon,

We have been working on deploying a highly-available Barbican at
Rackspace for a while now.  We just recently made it publicly available
through an early access program:

http://go.rackspace.com/cloud-keep.html

We don't have a full deployment of Barbican yet.  Our early access
deployment does not include a Rabbit queue or barbican-worker processes,
for example.  This means that we don't yet have the ability to process
/orders requests but we do support secret storage backed by Safenet Luna
SA HSMs via the PKCS#11 Cryptographic Plugin.

Our goal is to be able to provide 99.95% availability with a minimum
throughput of 100 req/sec once we move to Unlimited Availability later
this year, but we still have some work to get us there.

To give you a better idea of what our deployment looks like, here's what
we have in production today:

On the front end we have two sets of VM pairs running haproxy [1] and
keepalived [2] using a shared IP address per set.  The two sets
represent the blue and green node sets for blue-green zero-downtime
deployments. [3]  Our DNS entry is pointed to the shared IP of the green
lb pair.  The blue lb set is only accessible from our control plane, and
is used for functional testing of code before being promoted to green.

At any given time only one VM in each lb set is working and the other is
a hot standby that keepalived can instantly promote if needed while
keeping the same IP address.  This gives us the ability to fail over
haproxy faster than DNS can propagate.

Requests are then load-balanced to at least two "API Nodes".  These are
VMs set up as docker hosts.  We are running Repose [4], the barbican-api
process and plight [5] each inside their own container. Repose is used
for rate-limiting, token-validation, and access control. Plight is used
to designate an API node as either blue or green.  Each haproxy set is
configured to route only to the api nodes that match its color, however
they constantly query all nodes for blue/green status (more on this later).

For data storage we are running a MariaDB Galera Cluster [6] in
multi-master mode with 3x VM nodes.  The cluster sits behind yet another
haproxy+keepalived pair, so that our db connections are load-balanced to
all three masters.  This was mainly driven by our decision to host our
control plane on our public cloud, since the multi-master setup gives us
better fault tolerance in the likely event of losing one of the DB
nodes.  Previous to this cloud-based deployment we were using PostgreSQL
in a master+slave configuration, but we didn't have a good solution for
fully automatic failovers.

Choosing the right Cryptographic Plugin/Backend is probably going to be
the hardest part about planning for a highly-available deployment.  For
our deployments we are using pairs of Luna SA HSMs in HA mode. [7] This
is currently our bottleneck, and for Newton we plan to focus most of our
development effort in improving the performance of the PKCS#11 Plugin.
Originally we wanted to store one key per project in the HSM itself.
However, we found out early on that the amount of storage in the Lunas is
very limited, and completely inadequate for the scale we want to operate
at.  This led to the development of the pkek-wrapping model that the
PKCS#11 plugin is currently using.  This came with the cost of having to
make more hops to the HSM for a single transaction.

The KMIP Plugin does not use the pkek-wrapping model, and as such is
limited by the amount of storage available in the KMIP device that is
used.  Note that when deploying Barbican with the KMIP Plugin, the
database capacity is not relevant.

I'm not super familiar with DogTag, so I can't speak to the limitations
of choosing the DogTag Plugin.

Lastly, since our Lunas are racked in a dedicated environment, we have
physical firewalls (F5s) in front of them.  The barbican-api containers
in the api nodes connect to the HSMs over a VPN tunnel from our public
cloud environment to the dedicated environment.

We have two identical environments right now (staging and production),
and we will be adding more production environments in other data centers
later this year.

We deploy new code often, and production usually runs only a week or two
behind the barbican master branch.

For zero-downtime deployments, we've asked our community to stagger
database schema changes across separate commits.  The idea is that the
schema change should be introduced first in a separate commit.  This
ensures that the current codebase can continue to operate with the new
schema.  The actual code changes are made in a follow-up patch.

When we prepare to deploy, we first update the database schema.  This is
the only potentially disruptive operation we currently have.  In theory
the existing api nodes continue to function with the new schema.  We
then build up new blue API nodes with the new code to be rolled out.
All the new nodes are accessible through our blue lb, and this is where
we run our test suite to 

Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-22 Thread Alan Pevec
> The release team discussed this at the summit and agreed that it didn't 
> really matter. The only folks seeing the auto-generated versions are those 
> doing CD from git, and they should not be mixing different branches of a 
> project in a given environment. So I don't think it is strictly necessary to 
> raise the major version, or give pbr the hint to do so.

ok, I'll send confused RDO trunk users here :)
That means that until the first Newton milestone tag is pushed, master will
have a misleading version. The Newton schedule is not defined yet, but the 1st
milestone is normally 1 month after the Summit, and 2 months from now is
a rather large window.

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-22 Thread Adrian Otto
Team,

Time to close down this thread and start a new one. I’m going to change the 
subject line, and start with a summary. Please restrict further discussion on 
this thread to the subject of High Availability.

Thanks,

Adrian

On Mar 22, 2016, at 11:52 AM, Daneyon Hansen (danehans) 
> wrote:



From: Hongbin Lu >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Monday, March 21, 2016 at 8:19 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum] High Availability

Tim,

Thanks for your advice. I respect your point of view and we will definitely 
encourage our users to try Barbican if they see fits. However, for the sake of 
Magnum, I think we have to decouple from Barbican at current stage. The 
coupling of Magnum and Barbican will increase the size of the system by two (1 
project -> 2 project), which will significant increase the overall complexities.
· For developers, it incurs significant overheads on development, 
quality assurance, and maintenance.
· For operators, it doubles the amount of efforts of deploying and 
monitoring the system.
· For users, a large system is likely to be unstable and fragile which 
affects the user experience.
In my point of view, I would like to minimize the system we are going to ship, 
so that we can reduce the overheads of maintenance and provides a stable system 
to our users.

I noticed that there are several suggestions to “force” our users to install 
Barbican, which I would respectfully disagree. Magnum is a young project and we 
are struggling to increase the adoption rate. I think we need to be nice to our 
users, otherwise, they will choose our competitors (there are container service 
everywhere). Please understand that we are not a mature project, like Nova, who 
has thousands of users. We really don’t have the power to force our users to do 
what they don’t like to do.

I also recognized there are several disagreements from the Barbican team. Per 
my understanding, most of the complaints are about the re-invention of Barbican 
equivalent functionality in Magnum. To address that, I am going to propose an 
idea to achieve the goal without duplicating Barbican. In particular, I suggest 
to add support for additional authentication system (Keystone in particular) 
for our Kubernetes bay (potentially for swarm/mesos). As a result, users can 
specify how to secure their bay’s API endpoint:
· TLS: This option requires Barbican to be installed for storing the 
TLS certificates.
· Keystone: This option doesn’t require Barbican. Users will use their 
OpenStack credentials to log into Kubernetes.

I believe this is a sensible option that addresses the original problem 
statement in [1]:

"Magnum currently controls Kubernetes API services using unauthenticated HTTP. 
If an attacker knows the api_address of a Kubernetes Bay, (s)he can control the 
cluster without any access control."

The [1] problem statement is authenticating the bay API endpoint, not 
encrypting it. With the option you propose, we can leave the existing 
tls-disabled attribute alone and continue supporting encryption. Using Keystone 
to authenticate the Kubernetes API already exists outside of Magnum in 
Hypernetes [2]. We will need to investigate support for the other coe types.

[1] https://github.com/openstack/magnum/blob/master/specs/tls-support-magnum.rst
[2] http://thenewstack.io/hypernetes-brings-multi-tenancy-microservices/



I am going to send another ML to describe the details. You are welcome to 
provide your inputs. Thanks.

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: March-19-16 5:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


From: Hongbin Lu >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Saturday 19 March 2016 at 04:52
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum] High Availability

...
If you disagree, I would request you to justify why this approach works for 
Heat but not for Magnum. Also, I also wonder if Heat has a plan to set a hard 
dependency on Barbican for just protecting the hidden parameters.


There is a risk that we use decisions made by other projects to justify how 
Magnum is implemented. Heat was created 3 years ago according to 
https://www.openstack.org/software/project-navigator/ and 

[openstack-dev] [cross-project] Meeting SKIPPED, Tue March 22th, 21:00 UTC

2016-03-22 Thread Mike Perez
Hi all!

We will not be having a cross-project meeting today, since there are no
proposed agenda items [1].

All cross-project spec liaisons [2] should be ready to discuss the following
items possibly next week:

* Add centralized configuration options specification [3]
* What are cross-project specifications? [4]


[1] - 
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting#Proposed_agenda
[2] - 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Cross-Project_Spec_Liaisons
[3] - https://review.openstack.org/#/c/295543/2
[4] - https://review.openstack.org/#/c/295940/

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-22 Thread Daneyon Hansen (danehans)


From: Hongbin Lu >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Monday, March 21, 2016 at 8:19 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum] High Availability

Tim,

Thanks for your advice. I respect your point of view and we will definitely 
encourage our users to try Barbican if they see fits. However, for the sake of 
Magnum, I think we have to decouple from Barbican at current stage. The 
coupling of Magnum and Barbican will increase the size of the system by two (1 
project -> 2 project), which will significant increase the overall complexities.

· For developers, it incurs significant overheads on development, 
quality assurance, and maintenance.

· For operators, it doubles the amount of efforts of deploying and 
monitoring the system.

· For users, a large system is likely to be unstable and fragile which 
affects the user experience.
In my point of view, I would like to minimize the system we are going to ship, 
so that we can reduce the overheads of maintenance and provides a stable system 
to our users.

I noticed that there are several suggestions to “force” our users to install 
Barbican, which I would respectfully disagree. Magnum is a young project and we 
are struggling to increase the adoption rate. I think we need to be nice to our 
users, otherwise, they will choose our competitors (there are container service 
everywhere). Please understand that we are not a mature project, like Nova, who 
has thousands of users. We really don’t have the power to force our users to do 
what they don’t like to do.

I also recognized there are several disagreements from the Barbican team. Per 
my understanding, most of the complaints are about the re-invention of Barbican 
equivalent functionality in Magnum. To address that, I am going to propose an 
idea to achieve the goal without duplicating Barbican. In particular, I suggest 
to add support for additional authentication system (Keystone in particular) 
for our Kubernetes bay (potentially for swarm/mesos). As a result, users can 
specify how to secure their bay’s API endpoint:

· TLS: This option requires Barbican to be installed for storing the 
TLS certificates.

· Keystone: This option doesn’t require Barbican. Users will use their 
OpenStack credentials to log into Kubernetes.

I believe this is a sensible option that addresses the original problem 
statement in [1]:

"Magnum currently controls Kubernetes API services using unauthenticated HTTP. 
If an attacker knows the api_address of a Kubernetes Bay, (s)he can control the 
cluster without any access control."

The [1] problem statement is authenticating the bay API endpoint, not 
encrypting it. With the option you propose, we can leave the existing 
tls-disabled attribute alone and continue supporting encryption. Using Keystone 
to authenticate the Kubernetes API already exists outside of Magnum in 
Hypernetes [2]. We will need to investigate support for the other coe types.

[1] https://github.com/openstack/magnum/blob/master/specs/tls-support-magnum.rst
[2] http://thenewstack.io/hypernetes-brings-multi-tenancy-microservices/



I am going to send another ML to describe the details. You are welcome to 
provide your inputs. Thanks.

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: March-19-16 5:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


From: Hongbin Lu >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Saturday 19 March 2016 at 04:52
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum] High Availability

...
If you disagree, I would request you to justify why this approach works for 
Heat but not for Magnum. Also, I also wonder if Heat has a plan to set a hard 
dependency on Barbican for just protecting the hidden parameters.


There is a risk that we use decisions made by other projects to justify how 
Magnum is implemented. Heat was created 3 years ago according to 
https://www.openstack.org/software/project-navigator/ and Barbican only 2 years 
ago, thus Barbican may not have been an option (or a high risk one).

Barbican has demonstrated that the project has corporate diversity and good 
stability 
(https://www.openstack.org/software/releases/liberty/components/barbican). 
There are some areas that could be improved (packaging and puppet modules are 
often 

Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-22 Thread Doug Hellmann


> On Mar 22, 2016, at 2:10 PM, Alan Pevec  wrote:
> 
> 2016-02-26 19:51 GMT+01:00 Robert Collins :
>>> On 27 February 2016 at 00:13, Neil Jerram  
>>> wrote:
>>> I understand the semantic versioning algorithm for calculating a new
>>> version.  But what do I run, in a git repository, to do that calculation
>>> for me, and output:
>> 
>> pbr does that automatically, generating a pre-release version. The
>> regular part of the version is the lowest version number you can use
>> that will match the semver rules.
> 
> So to help PBR, all projects should be inserting commit with Sem-Ver:
> api-break after stable/mitaka was cut, without that we have the same
> major version on both branches e.g. nova:
> 
> $ git checkout master
> $ python ./setup.py --version
> 13.0.0.0rc2.dev48
> 
> $ git checkout stable/mitaka
> $ python ./setup.py --version
> 13.0.0.0rc2.dev10
> 
> After pushing empty commit to the master with the message like:
>Newton bump major version
> 
>Sem-Ver: api-break
>Change-Id: I8a2270d5a0f45342fe418b3018f31e6ef054fe9e
> 
> $ python ./setup.py --version
> 14.0.0.dev49
> 
> Any reason not to do that? Other option is to push alpha tag to
> master, but that would be weird IMHO.

The release team discussed this at the summit and agreed that it didn't really 
matter. The only folks seeing the auto-generated versions are those doing CD 
from git, and they should not be mixing different branches of a project in a 
given environment. So I don't think it is strictly necessary to raise the major 
version, or give pbr the hint to do so. 

Doug

> 
> 
> Cheers,
> Alan
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-22 Thread Alan Pevec
2016-02-26 19:51 GMT+01:00 Robert Collins :
> On 27 February 2016 at 00:13, Neil Jerram  wrote:
>> I understand the semantic versioning algorithm for calculating a new
>> version.  But what do I run, in a git repository, to do that calculation
>> for me, and output:
>
> pbr does that automatically, generating a pre-release version. The
> regular part of the version is the lowest version number you can use
> that will match the semver rules.

So to help PBR, all projects should be inserting a commit with Sem-Ver:
api-break after stable/mitaka was cut; without that we have the same
major version on both branches, e.g. nova:

$ git checkout master
$ python ./setup.py --version
13.0.0.0rc2.dev48

$ git checkout stable/mitaka
$ python ./setup.py --version
13.0.0.0rc2.dev10

After pushing empty commit to the master with the message like:
Newton bump major version

Sem-Ver: api-break
Change-Id: I8a2270d5a0f45342fe418b3018f31e6ef054fe9e

$ python ./setup.py --version
14.0.0.dev49

Any reason not to do that? Other option is to push alpha tag to
master, but that would be weird IMHO.
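
For what it's worth, the resulting ordering can be sanity-checked with a
short sketch (assuming the third-party 'packaging' library is available;
this is just an illustration, not a pbr interface):

from packaging.version import parse

# After the Sem-Ver bump, master pre-releases sort above both the
# pre-bump master version and anything generated on stable/mitaka.
assert parse('14.0.0.dev49') > parse('13.0.0.0rc2.dev48')
assert parse('14.0.0.dev49') > parse('13.0.0.0rc2.dev10')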


Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][deb][kolla][rpm] Introduce a new tag for type:packaging

2016-03-22 Thread Steven Dake (stdake)
Technical Committee,

Thierry in this thread [1] suggested we need a type:packaging tag.  Please 
accept my proposal for this work in this review [2].

Thanks!
-steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-March/090096.html
[2] https://review.openstack.org/295972
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Trailing Slashes on URLs

2016-03-22 Thread Sean Dague
On 03/22/2016 12:34 PM, Shoham Peller wrote:
> Hi,
> 
> Nova-api behaves differently depending on whether a trailing slash is given
> in the URL or not.
> These 2 requests produce different results:
> 
>   * GET /v2/{tenant-id}/servers - gets a list of servers
>   * GET /v2/{tenant-id}/servers/ - gets an info for a server with an
> "empty id" - produces 404
> 
> IMHO, a trailing slash in REST requests should be ignored.
> 
> A few resources about REST and trailing slashes:
> http://programmers.stackexchange.com/questions/186959/trailing-slash-in-restful-api
> https://github.com/nicolaiarocci/eve/issues/118
> 
> What do you think of this issue?
> Is this difference in behavior on purpose?

This is the way python routes works, it's pretty embedded behavior in
libraries we don't control.

While one could argue they should be equivalent, the docs are pretty
straightforward about what the correct links are, and we only generate
working links in our references.
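
If someone really wanted the two forms to be equivalent without touching
Routes itself, one option would be a tiny WSGI shim in front of the router.
This is purely a sketch, not existing Nova code:

class StripTrailingSlash(object):
    """Normalize the path before it reaches the Routes mapper."""

    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        path = environ.get('PATH_INFO', '')
        if len(path) > 1 and path.endswith('/'):
            # Collapse trailing slashes, but keep the bare "/" root.
            environ['PATH_INFO'] = path.rstrip('/') or '/'
        return self.application(environ, start_response)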

There are many issues that we should address in the API, this one
probably doesn't crack the top 100.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Managing bug backports to Mitaka branch

2016-03-22 Thread Steven Dake (stdake)
Hi folks,

Thierry (ttx in the irc log at [1]) proposed the standard way projects 
typically handle backports of newton fixes that should be fixed in an rc, while 
also maintaining the information in our rc2/rc3 trackers.

Here is an example bug with the process applied:
https://bugs.launchpad.net/kolla/+bug/1540234

To apply this process, the following happens:

  1.  Any individual may propose a Newton bug for backport potential by 
specifying the tag 'rc-backport-potential' in the Newton 1 milestone.
  2.  Core reviewers review the rc-backport-potential bugs.
 *   CRs review [3] on a daily basis for new rc backport candidates (a 
hypothetical query sketch follows this list).
 *   If the core reviewer thinks the bug should be backported to 
stable/mitaka (or belongs in the rc), they use the "Target to series" button, 
select mitaka, and save.
 *   Copy the state of the bug, but set the Mitaka milestone target to 
"mitaka-rc2".
 *   Finally, they remove the rc-backport-potential tag from the bug, so it 
isn't re-reviewed.
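
For reference, the daily review of [3] could also be scripted; a rough
sketch (assuming launchpadlib is installed; the consumer name below is
arbitrary, not an existing Kolla tool):

from launchpadlib.launchpad import Launchpad

# Anonymous, read-only access is enough for listing the tagged bugs.
lp = Launchpad.login_anonymously('kolla-rc-triage', 'production',
                                 version='devel')
kolla = lp.projects['kolla']
for task in kolla.searchTasks(tags=['rc-backport-potential']):
    print(task.bug.id, task.status, task.bug.title)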

The purpose of this proposal is to do the following:

  1.  Allow the core reviewer team to keep track of bugs needing attention for 
the release candidates in [2] by looking at [3].
  2.  Allow master development to proceed unimpeded.
  3.  Not single thread on any individual for backporting.

I'd like further discussion on this proposal at our Wednesday meeting, so I've 
blocked off a 20 minute timebox for this topic.  I'd like wide agreement from 
the core reviewers to follow this best practice, or alternatively let's come up 
with a plan B :)

If you're a core reviewer and won't be able to make our next meeting, please 
respond on this thread with your thoughts.  Let's also not apply the process 
until the conclusion of the discussion at Wednesday's meeting.

Regards,
-steve

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2016-03-22.log.html#t2016-03-22T16:23:11
[2] https://launchpad.net/kolla/+milestone/mitaka-rc2
[3] https://bugs.launchpad.net/kolla/+bugs?field.tag=rc-backport-potential

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-22 Thread Ian Cordasco
 

-Original Message-
From: Hongbin Lu 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: March 21, 2016 at 22:22:01
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [magnum] High Availability

> Tim,
>  
> Thanks for your advice. I respect your point of view and we will definitely 
> encourage  
> our users to try Barbican if they see fits. However, for the sake of Magnum, 
> I think we have  
> to decouple from Barbican at current stage. The coupling of Magnum and 
> Barbican will  
> increase the size of the system by two (1 project -> 2 project), which will 
> significant  
> increase the overall complexities.

Hi Hongbin,

I think you're missing the fact that Tim represents a very large and very 
visible user of OpenStack, CERN.

> · For developers, it incurs significant overheads on development, quality 
> assurance,  
> and maintenance.

Are you sure? It seems like barbican isn't a common problem among developers. 
It's more of a problem for operators because the dependency is very poorly 
documented.

> · For operators, it doubles the amount of efforts of deploying and monitoring 
> the system.  

This makes operators sound a bit ... primitive in how they deploy things. That 
seems quite unfair. CERN is using Puppet-OpenStack which might need help to 
improve its Magnum and Barbican puppet modules, but I doubt this is a big 
concern for them. People using the new ansible roles will notice similar gaps 
given their age, but these playbooks transparently provide everything to deploy 
and monitor the system. It's no more difficult for them to deploy both magnum 
and barbican than it is to deploy one or the other. I'm sure the Chef OpenStack 
efforts are also similarly easy to add to an existing OpenStack deployment.

The only people who might have problems with deploying the two in conjunction 
are people following the install guide and using system packages *only* without 
automation. I think it's also fair to say that this group of people are not 
your majority of operators. Further, given the lack of install guide content 
for magnum, I find it doubtful people are performing magnum installs by hand 
like this.

Do you have real operator feedback complaining about this or is this a concern 
you're anticipating?

> · For users, a large system is likely to be unstable and fragile which 
> affects the user  
> experience.
> In my point of view, I would like to minimize the system we are going to 
> ship, so that we can  
> reduce the overheads of maintenance and provides a stable system to our users.

Except you are only shipping Magnum and the Barbican team is shipping Barbican. 
OpenStack is less stable because it has separate services for the core compute 
portion. Further, Nova should apparently have its own way of accepting uploads 
for and managing images as well as block storage management because depending 
on Glance and Cinder for that is introducing fragility and *potential* 
instability.

OpenStack relies on other services and their teams of subject matter experts 
for good reason. It's because no service should manage every last thing itself 
when another service exists that can and is doing that in a better manner.

> I noticed that there are several suggestions to “force” our users to install 
> Barbican,  
> which I would respectfully disagree. Magnum is a young project and we are 
> struggling  
> to increase the adoption rate. I think we need to be nice to our users, 
> otherwise, they  
> will choose our competitors (there are container service everywhere). Please 
> understand  
> that we are not a mature project, like Nova, who has thousands of users. We 
> really don’t  
> have the power to force our users to do what they don’t like to do.

Why are you attributing all of your adoption issues to needing Barbican? One 
initial barrier to my evaluation of Magnum was its lack of documentation that 
is geared towards operators at all. The next barrier was the client claiming it 
supported Keystone V3 and not actually doing so (which was admittedly easily 
fixed). Putting all the blame on Barbican is a bit bizarre from my point of 
view as someone who has and is deploying Magnum.

> I also recognized there are several disagreements from the Barbican team. Per 
> my understanding,  
> most of the complaints are about the re-invention of Barbican equivalent 
> functionality  
> in Magnum. To address that, I am going to propose an idea to achieve the goal 
> without duplicating  
> Barbican. In particular, I suggest to add support for additional 
> authentication system  
> (Keystone in particular) for our Kubernetes bay (potentially for 
> swarm/mesos). As  
> a result, users can specify how to secure their bay’s API endpoint:
>  
> · TLS: This option requires Barbican to be installed for storing the TLS 
> 

Re: [openstack-dev] [tripleo] becoming third party CI (was: enabling third party CI)

2016-03-22 Thread Dan Prince
On Thu, 2016-03-10 at 23:24 +, Jeremy Stanley wrote:
> On 2016-03-10 16:09:44 -0500 (-0500), Dan Prince wrote:
> > 
> > This seems to be the week people want to pile it on TripleO.
> > Talking
> > about upstream is great but I suppose I'd rather debate major
> > changes
> > after we branch Mitaka. :/
> [...]
> 
> I didn't mean to pile on TripleO, nor did I intend to imply this was
> something which should happen ASAP (or even necessarily at all), but
> I do want to better understand what actual benefit is currently
> derived from this implementation vs. a more typical third-party CI
> (which lots of projects are doing when they find their testing needs
> are not met by the constraints of our generic test infrastructure).
> 
> > 
> > With regards to Jenkins restarts I think it is understood that our
> > job
> > times are long. How often do you find infra needs to restart
> > Jenkins?
> We're restarting all 8 of our production Jenkins masters weekly at a
> minimum, but generally more often when things are busy (2-3 times a
> week). For many months we've been struggling with a thread leak for
> which their development team has not seen as a priority to even
> triage our bug report effectively. At this point I think we've
> mostly given up on expecting it to be solved by anything other than
> our upcoming migration off of Jenkins, but that's another topic
> altogether.
> 
> > 
> > And regardless of that what if we just said we didn't mind the
> > destructiveness of losing a few jobs now and then (until our job
> > times are under the line... say 1.5 hours or so). To be clear I'd
> > be fine with infra pulling the rug on running jobs if this is the
> > root cause of the long running jobs in TripleO.
> For manual Jenkins restarts this is probably doable (if additional
> hassle), but I don't know whether that's something we can easily
> shoehorn into our orchestrated/automated restarts.
> 
> > 
> > I think the "benefits are minimal" is bit of an overstatement. The
> > initial vision for TripleO CI stands and I would still like to see
> > individual projects entertain the option to use us in their gates.
> [...]
> 
> This is what I'd like to delve deeper into. The current
> implementation isn't providing you with any mechanism to prevent
> changes which fail jobs running in the tripleo-test cloud from
> merging to your repos, is it? You're still having to manually
> inspect the job results posted by it? How is that particularly
> different from relying on third-party CI integration?

Perhaps we don't have a lot of differences today but I don't think that
is where we want to be. Moving TripleO CI into 3rd party CI is IMO
strategically a bad move for the project that aims to provide a
feedback loop for breakages into other upstream OpenStack projects. I
would argue that we are in a unique position to do that in TripleO...
and becoming 3rd party CI is a retreat from providing this feedback
loop which can benefit other projects we rely on heavily (think Heat,
Mistral, Ironic, etc.). We want to gate our stuff. We need to gate our
own stuff.

That said we've overstepped our resource boundaries. Our job runtimes
are way long. We have several efforts in progress to help improve that.

1) Caching. Derek's work on caching should significantly help us
improve our job wall times:

https://review.openstack.org/#/q/topic:mirror-server

2) metrics tracking. I've posted a patch to help us better track
various wall times and image sizes in tripleo-ci:

https://review.openstack.org/#/c/291393/

3) the ability to test components of TripleO outside of baremetal
environments. Steve Hardy has been working on some approaches to
testing tripleo-heat-templates on normal OpenStack cloud instances.
Using this approach would allow us to test a significant portion of our
patches on groups of nodepool instances. Need to prototype this a bit
further but I think this holds some promising for allowing us to split
up our testing scenarios, etc.

So rather than ask why can't TripleO become 3rd party CI I'd ask what
harm are we causing where we are at? I like where we are at because the
management is well known to the team and other OpenStack projects.

And does working on the items above (speeding up our wall time, keeping
better metrics tracking, using more public cloud resource) help make
everyone happier?

Dan

> 
> As for other projects making use of the same jobs, right now the
> only convenience I'm aware of is that they can add check-tripleo
> pipeline jobs in our Zuul layout file instead of having you add it
> to yours (which could itself reside in a Git repo under your
> control, giving you even more flexibility over those choices). In
> fact, with a third-party CI using its own separate Gerrit account,
> you would be able to leave clear -1/+1 votes on check results which
> is not possible with the present solution.
> 
> So anyway, I'm not saying that I definitely believe the third-party
> CI route will be better for TripleO, but I'm 

[openstack-dev] [nova] Trailing Slashes on URLs

2016-03-22 Thread Shoham Peller
Hi,

Nova-api behaves differently depending on whether a trailing slash is given in
the URL or not.
These 2 requests produce different results:

   - GET /v2/{tenant-id}/servers - gets a list of servers
   - GET /v2/{tenant-id}/servers/ - gets an info for a server with an
   "empty id" - produces 404

IMHO, a trailing slash in REST requests should be ignored.

A few resources about REST and trailing slashes:
http://programmers.stackexchange.com/questions/186959/trailing-slash-in-restful-api
https://github.com/nicolaiarocci/eve/issues/118

What do you think of this issue?
Is this difference in behavior on purpose?

Thanks,
Shoham
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Packaging CI for Fuel

2016-03-22 Thread Jeremy Stanley
On 2016-03-22 15:20:22 + (+), Tristan Cacqueray wrote:
[...]
> With only my VMT hat on, this makes me wonder why the packaging needs
> special care. Is there a reason why stable branches aren't built continuously?
[...]

My concern was more over handling packaging-specific security
vulnerabilities (loose file permissions, risky default
configuration, that sort of thing) which is what I think
"vulnerability management" of packages would entail.

But also all generated packages published from the same suite would
need consistent support coverage not just for the packaging itself
but also for all the software being packaged--sysadmins consuming
our packages aren't going to take the time to figure out which of
packages installed from there are "supported" by us and which
aren't.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][fuel][kolla][osa][tripleo] proposing type:deployment

2016-03-22 Thread Dan Prince
On Tue, 2016-03-22 at 15:37 +, Steven Dake (stdake) wrote:
> 
> On 3/22/16, 2:15 AM, "Thierry Carrez"  wrote:
> 
> > 
> > Steven Dake (stdake) wrote:
> > > 
> > > Technical Committee,
> > > 
> > > Please accept my proposal of a new type of project called a
> > > deployment
> > > [1].  If people don¹t like the type name, we can change it.  The
> > > basic
> > > idea is there are a class of projects unrepresented by
> > > type:service and
> > > type:library which are deployment projects including but not
> > > limited to
> > > Fuel, Kolla, OSA, and TripleO.  The main motivation behind this
> > > addition
> > > are:
> > > 
> > >  1. Make it known to all which projects are deployment projects
> > > in the
> > > governance repository.
> > >  2. Provide that information via the governance website under
> > > release
> > > management tags.
> > >  3. Permit deployment projects to take part in the assert tags
> > > relating
> > > to upgrades [2].
> > > 
> > > 
> > > Currently fuel is listed as a type:service in the governance
> > > repository
> > > which is only partially accurate.  It may provide a ReST API, but
> > > during
> > > the Kolla big tent application process, we were told we couldn't
> > > use
> > > type:service as it only applied to daemon services and not
> > > deployment
> > > projects.
> > I agree that type:service is not really a good match for Fuel or
> > Kolla,
> > and we could definitely use something else -- that would make it a
> > lot
> > clearer what is what for the downstream consumers of the software
> > we
> > produce.
> > 
> > One issue is that tags are applied to deliverables, not project
> > teams.
> > For the Fuel team it's pretty clear (it would apply to their "fuel"
> > deliverable). For Kolla team, I suspect it would apply to the
> > "kolla"
> > deliverable. But the TripleO team produces a collection of tools,
> > so
> > it's unclear which of those would be considered the main
> > "deployment"
> > thing.
> For kolla we are considering splitting the repository (to be
> discussed at
> the Kolla midcycle) into our docker packaging efforts and our Ansible
> deployment efforts since the ABI is very stable at this point and we
> don't
> see any requirements for changing the container ABI at present.  What
> this
> would mean is our repositories would be
> 
> Kolla - build docker containers - type:packaging
> Kolla-ansible - deploy Kolla's docker containers - type:deployment
> (and
> type:upgrade in the future once we get a gate up to meet the
> requirements
> and assuming this proposal is voted in by the technical committee).
> 
> In essence Kolla would be affected by this same scenario as TripleO.
> 
> Perhaps the tripleo folks could weigh-in in the review.  I don't want
> the
> tag to be onerous to apply.  I believe tags should be relatively easy
> to
> obtain if the project meets the "spirit of the tag".  That said if
> the
> proposed language could be written to include TripleO's deliverable
> without excluding it, then that is what I'd be after.
> 
> Dan can you weigh in?

I see no harm in adding this extra type:deployment tag to some of the
TripleO deliverables.

+1 from me.

> 
> > 
> > 
> > For OSA, we don't produce the deployment tool, only a set of
> > playbooks.
> > I was thinking we might need a type:packaging tag to describe which
> > things we produce are just about packaging OpenStack things for
> > usage by
> > outside deployment systems (Ansible, Puppet, Chef, Deb, RPM...). So
> > I'm
> > not sure your type:deployment tag would apply to OSA.
> Brain still booting this morning - 8am ftl.  Thinking more clearly on
> this
> point, we could add a requirement that the software produce a
> functional
> out of the box working environment.  This would easily apply to OSA
> and
> possibly even Puppet/Chef efforts.
> 
> A stab at it would be:
> "After deployment is complete, the starter-kit:compute is fully
> operational without further interaction from the Operator."
> 
> Open to language help in the review itself - I'll propose an update
> this
> morning.  I'd like to be inclusive of projects like Puppet and Chef
> and
> obviously OSA which are clearly deployment systems which rely on
> deployment tools like Puppet, Chef, and Ansible respectively.  This
> is the
> same model Kolla follows as well.  Kolla Doesn't reinvent Ansible, we
> just
> use it.
> 
> A type:packaging doesn't really fit though, because Kolla provides a
> completely working out of the box deployment whereas packaging (deb,
> docker, rpm) only package the software for other deployment tools to
> consume.
> 
> Thanks Thierry for the feedback.
> 
> Regards,
> -steve
> 
> 
> > 
> > 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #75

2016-03-22 Thread Emilien Macchi
We did our meeting, you can read the notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-03-22-15.00.html

Thanks,

On Mon, Mar 21, 2016 at 10:25 AM, Emilien Macchi <emil...@redhat.com> wrote:
> Hi,
>
> We'll have our weekly meeting tomorrow at 3pm UTC on
> #openstack-meeting4.
>
> https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack
>
> As usual, free free to bring topics in this etherpad:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160322
>
> We'll start discussing about the Summit, so feel fee to join if you
> have topic proposals.
>
> We'll also have open discussion for bugs & reviews, so anyone is welcome
> to join.
>
> Thanks,
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovn][ovn4nfv]

2016-03-22 Thread John McDowall
Gary,

Thanks for replying. I did chat briefly to one of the authors of SFC last week 
and will talk with them more.

I will admit I am coming at the general service insertion problem from a very 
specific use case: easily protecting east-west traffic between applications by 
dynamically inserting an NGFW as a VNF, so my viewpoint is slightly slanted ;-).

To answer your specific questions:

  1.  I think the Service Chaining/Insertion API will work for this effort too 
as the concept of port-pairs fits well with what I have done. As the API I have 
created is just "syntactical sugar", changing it is not a big deal. The two 
issues I see are 1) the classifier: as the firewall is a (DPI) classifier, this 
step may not be necessary, or it could act as a pre-filter; and 2) the ability 
to steer traffic to a specific application through the VNF. In general though I 
think we could make it work.
  2.  There have to be some changes at the networking layer to steer traffic 
into new paths defined by the API, and as Russell points out the majority of 
the work is in OVN. The changes to Open vSwitch are only in the ovn-nb layer 
and are additive, i.e., they do not change the current behavior, only layer on 
top. In OpenStack I have tried to isolate the changes to follow the neutron 
plugin model. Is there a better way to do it? If OVN had a plugin model would 
that help?

Regards

John




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][fuel][kolla][osa][tripleo] proposing type:deployment

2016-03-22 Thread Steven Dake (stdake)


On 3/22/16, 2:15 AM, "Thierry Carrez"  wrote:

>Steven Dake (stdake) wrote:
>> Technical Committee,
>>
>> Please accept my proposal of a new type of project called a deployment
>> [1].  If people don't like the type name, we can change it.  The basic
>> idea is there is a class of projects unrepresented by type:service and
>> type:library which are deployment projects including but not limited to
>> Fuel, Kolla, OSA, and TripleO.  The main motivation behind this addition
>> are:
>>
>>  1. Make it known to all which projects are deployment projects in the
>> governance repository.
>>  2. Provide that information via the governance website under release
>> management tags.
>>  3. Permit deployment projects to take part in the assert tags relating
>> to upgrades [2].
>>
>>
>> Currently fuel is listed as a type:service in the governance repository
>> which is only partially accurate.  It may provide a ReST API, but during
>> the Kolla big tent application process, we were told we couldn't use
>> type:service as it only applied to daemon services and not deployment
>> projects.
>
>I agree that type:service is not really a good match for Fuel or Kolla,
>and we could definitely use something else -- that would make it a lot
>clearer what is what for the downstream consumers of the software we
>produce.
>
>One issue is that tags are applied to deliverables, not project teams.
>For the Fuel team it's pretty clear (it would apply to their "fuel"
>deliverable). For Kolla team, I suspect it would apply to the "kolla"
>deliverable. But the TripleO team produces a collection of tools, so
>it's unclear which of those would be considered the main "deployment"
>thing.

For kolla we are considering splitting the repository (to be discussed at
the Kolla midcycle) into our docker packaging efforts and our Ansible
deployment efforts since the ABI is very stable at this point and we don't
see any requirements for changing the container ABI at present.  What this
would mean is our repositories would be

Kolla - build docker containers - type:packaging
Kolla-ansible - deploy Kolla's docker containers - type:deployment (and
type:upgrade in the future once we get a gate up to meet the requirements
and assuming this proposal is voted in by the technical committee).

In essence Kolla would be affected by this same scenario as TripleO.

Perhaps the tripleo folks could weigh-in in the review.  I don't want the
tag to be onerous to apply.  I believe tags should be relatively easy to
obtain if the project meets the "spirit of the tag".  That said if the
proposed language could be written to include TripleO's deliverable
without excluding it, then that is what I'd be after.

Dan can you weigh in?

>
>For OSA, we don't produce the deployment tool, only a set of playbooks.
>I was thinking we might need a type:packaging tag to describe which
>things we produce are just about packaging OpenStack things for usage by
>outside deployment systems (Ansible, Puppet, Chef, Deb, RPM...). So I'm
>not sure your type:deployment tag would apply to OSA.

Brain still booting this morning - 8am ftl.  Thinking more clearly on this
point, we could add a requirement that the software produce a functional
out of the box working environment.  This would easily apply to OSA and
possibly even Puppet/Chef efforts.

A stab at it would be:
"After deployment is complete, the starter-kit:compute is fully
operational without further interaction from the Operator."

Open to language help in the review itself - I'll propose an update this
morning.  I'd like to be inclusive of projects like Puppet and Chef and
obviously OSA which are clearly deployment systems which rely on
deployment tools like Puppet, Chef, and Ansible respectively.  This is the
same model Kolla follows as well.  Kolla Doesn't reinvent Ansible, we just
use it.

A type:packaging doesn't really fit though, because Kolla provides a
completely working out of the box deployment whereas packaging (deb,
docker, rpm) only package the software for other deployment tools to
consume.

Thanks Thierry for the feedback.

Regards,
-steve


>
>-- 
>Thierry Carrez (ttx)
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][fuel][kolla][osa][tripleo] proposing type:deployment

2016-03-22 Thread Steven Dake (stdake)


From: Jesse Pretorius
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, March 22, 2016 at 7:40 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [tc][fuel][kolla][osa][tripleo] proposing 
type:deployment

On 22 March 2016 at 09:15, Thierry Carrez wrote:

For OSA, we don't produce the deployment tool, only a set of playbooks. I was 
thinking we might need a type:packaging tag to describe which things we produce 
are just about packaging OpenStack things for usage by outside deployment 
systems (Ansible, Puppet, Chef, Deb, RPM...). So I'm not sure your 
type:deployment tag would apply to OSA.

Yeah, I suppose it depends on how you define 'deployment tool'. OSA is an 
umbrella project providing Ansible roles which deploy services, and playbooks 
that put them together in an integrated deployment.

Fuel similarly has libraries, Puppet roles, plugins, etc which are all packaged 
together to provide what we call 'Fuel'.

I expect that there are other similarities - for instance 'Keystone' may be a 
service, but that service has libraries and all combined together we call it a 
daemon service.

I guess it would be nice to have some sort of designation to allow easier 
filtering for consumers, assuming that this actually does add value to 
Operators/Packagers who consume these projects.

Jesse,

The only requirement is:

  *   The repository contains software that deploys at minimum
      deliverables tagged with starter-kit:compute in the
      projects.yaml file.

I guess we could add more if needed, but I'm a big fan of less is more, so I'd 
be open to adding requirements if the above is unclear that the tool 
(puppet/chef/osa/kolla/fuel/tripleo) needs to deploy OpenStack and that it 
needs to be functional afterwards.

I think the "functional afterwards" is unstated and probably needs an update to 
the patch to differentiate between packaging efforts and deployment efforts.

I also think the project should deploy the dependencies required to operate 
start-kit:compute which include a database of their choosing and a message 
queue service supported by oslo.

Note compute-kit is not onerous - there are only a few projects which have the 
starter-kit:compute tag.  They include keystone, glance, neutron, and nova.  
Clearly that could change in the future, but at present, it wouldn't be a 
burden on any deployment project to just simply apply the tag and move on.

Thanks for jogging my thought processes - I'll update the review this morning.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Packaging CI for Fuel

2016-03-22 Thread Tristan Cacqueray
On 03/19/2016 06:53 PM, Jeremy Stanley wrote:
> On 2016-03-19 05:10:18 -0500 (-0500), Monty Taylor wrote:
> [...]
>> It would also be good to tie off with the security team about
>> this. One of the reasons we stopped publishing debs years ago is
>> that it made us a de-facto derivative distro. People were using
>> our packages in production, including backports we'd built in
>> support of those packages, but our backports were not receiving
>> security/CVE attention, so we were concerned that we were causing
>> people to be exposed to issues. Of course. "we" was thierry,
>> soren, jeblair and I, which is clearly not enough people. Now we
>> have a whole security team and people who DO track CVEs - so if
>> they're willing to at least keep an eye on things we publish in a
>> repo, then I think we're in good shape to publish a repo with
>> backports in it.
> [...]
> 
> Please be aware that the VMT's direct support for triaging, tracking
> and announcing vulnerabilities/fixes only extends to a very small
> subset of OpenStack already. With both my VMT and Infra hats on, I
> really don't feel like we have either the workforce nor expertise to
> make security guarantees about our auto-built packages. We'll make a
> best effort attempt to rebuild packages as soon as possible after
> patches merge to their corresponding repos, assuming the toolchain
> and our CI are having a good day.
> 

With only my VMT hat on, this makes me wonder why the packaging needs
special care. Is there a reason why stable branches aren't built continuously?

Otherwise I agree with Jeremy: the VMT is already quite busy supporting
vulnerability:managed projects' master branch along with supported
stable branches. Adding more branches to track doesn't seem like the right
approach.

-Tristan

> I'm not against building and publishing packages, but we need to
> make big ugly disclaimers everywhere we can that these are not
> security supported by us, not intended for production use, and if
> they break your deployment you get to keep all the pieces. Users of
> legitimate distros need to consider those packages superior to ours
> in every way, since I really don't want to be on the hook to support
> them for more than validation purposes.
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][fuel][kolla][osa][tripleo] proposing type:deployment

2016-03-22 Thread Steven Dake (stdake)


On 3/22/16, 2:15 AM, "Thierry Carrez"  wrote:

>Steven Dake (stdake) wrote:
>> Technical Committee,
>>
>> Please accept my proposal of a new type of project called a deployment
>> [1].  If people don't like the type name, we can change it.  The basic
>> idea is there is a class of projects unrepresented by type:service and
>> type:library which are deployment projects including but not limited to
>> Fuel, Kolla, OSA, and TripleO.  The main motivation behind this addition
>> are:
>>
>>  1. Make it known to all which projects are deployment projects in the
>> governance repository.
>>  2. Provide that information via the governance website under release
>> management tags.
>>  3. Permit deployment projects to take part in the assert tags relating
>> to upgrades [2].
>>
>>
>> Currently fuel is listed as a type:service in the governance repository
>> which is only partially accurate.  It may provide a ReST API, but during
>> the Kolla big tent application process, we were told we couldn't use
>> type:service as it only applied to daemon services and not deployment
>> projects.
>
>I agree that type:service is not really a good match for Fuel or Kolla,
>and we could definitely use something else -- that would make it a lot
>clearer what is what for the downstream consumers of the software we
>produce.
>
>One issue is that tags are applied to deliverables, not project teams.
>For the Fuel team it's pretty clear (it would apply to their "fuel"
>deliverable). For Kolla team, I suspect it would apply to the "kolla"
>deliverable. But the TripleO team produces a collection of tools, so
>it's unclear which of those would be considered the main "deployment"
>thing.
>
>For OSA, we don't produce the deployment tool, only a set of playbooks.
>I was thinking we might need a type:packaging tag to describe which
>things we produce are just about packaging OpenStack things for usage by
>outside deployment systems (Ansible, Puppet, Chef, Deb, RPM...). So I'm
>not sure your type:deployment tag would apply to OSA.

Thierry,

I was focused on Kolla when I proposed the type:deployment tag.  I can
also add a type:packaging tag if the community would find that
helpful for deb/rpm packaging efforts.  I agree type:packaging doesn't fit
with deployment tools in general.  I hadn't considered Puppet and Chef in
my original proposal, but I think they do fit into the type:deployment
tag, because they deploy the compute-kit.

Relating to OSA, OSA produces full playbooks and other tools for
deployment, so I think it is more a deployment system than a packaging
system.
>
>-- 
>Thierry Carrez (ttx)
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-22 Thread Assaf Muller
On Tue, Mar 22, 2016 at 9:31 AM, Kevin Benton  wrote:
> Thanks for doing this. I dug into the test_volume_boot_pattern test to see
> what was going on.
>
> On the first boot, Nova called Neutron to create a port at 23:29:44 and it
> took 441ms to return the port to Nova.[1]
> Nova then plugged the interface for that port into OVS a little over 6
> seconds later at 23:29:50.[2]
> The Neutron agent attempted to process this on the iteration at 23:29:52
> [3]; however, it didn't get the ofport populated from the OVSDB monitor... a
> bug![4] The Neutron agent did catch it on the next iteration two seconds
> later on a retry and notified the Neutron server at 23:29:54.[5]

Good work as usual Kevin, just approved the fix to this bug.

> The Neutron server processed the port ACTIVE change in just under 80ms[6],
> but it did not dispatch the notification to Nova until 2 seconds later at
> 23:29:56 [7] due to the Nova notification batching mechanism[8].
>
> Total time between port create and boot is about 12 seconds. 6 in Nova and 6
> in Neutron.
>
> For the Neutron side, the bug fix should eliminate 2 seconds. We could
> probably make the Nova notifier batching mechanism a little more aggressive
> so it only batches up calls in a very short interval rather than making 2
> second buckets at all times. The remaining 2 seconds is just the agent
> processing loop interval, which can be tuned with a config but it should be
> okay if that's the only bottleneck.
>
> For Nova, we need to improve that 6 seconds after it has created the Neutron
> port before it has plugged it into the vswitch. I can see it makes some
> other calls to Neutron in this time to list security groups and floating
> IPs. Maybe this can be done asynchronously because I don't think they should
> block the initial VM boot to pause that plugs in the VIF.
>
> Completely unrelated to the boot process, the entire tempest run spent ~412
> seconds building and destroying Neutron resources in setup and teardown.[9]
> However, considering the number of tests executed, this seems reasonable so
> I'm not sure we need to work on optimizing that yet.
>
>
> Cheers,
> Kevin Benton
>
>
> 1.
> http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-svc.txt.gz#_2016-03-21_23_29_45_341
> 2.
> http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-n-cpu.txt.gz#_2016-03-21_23_29_50_629
> 3.
> http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-agt.txt.gz#_2016-03-21_23_29_52_216
> 4. https://bugs.launchpad.net/neutron/+bug/1560464
> 5.
> http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-agt.txt.gz#_2016-03-21_23_29_54_738
> 6.
> http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-svc.txt.gz#_2016-03-21_23_29_54_813
> 7.
> http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-svc.txt.gz#_2016-03-21_23_29_56_782
> 8.
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/notifiers/nova.py
> 9. egrep -R 'tearDown|setUp' tempest.txt.gz | grep 9696 | awk '{print
> $(NF)}' | ./fsum
>
> On Mon, Mar 21, 2016 at 5:09 PM, Clark Boylan  wrote:
>>
>> On Mon, Mar 21, 2016, at 01:23 PM, Sean Dague wrote:
>> > On 03/21/2016 04:09 PM, Clark Boylan wrote:
>> > > On Mon, Mar 21, 2016, at 11:49 AM, Clark Boylan wrote:
>> > >> On Mon, Mar 21, 2016, at 11:08 AM, Clark Boylan wrote:
>> > >>> On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote:
>> >  Do you have an a better insight of job runtimes vs jobs in other
>> >  projects?
>> >  Most of the time in the job runtime is actually spent setting the
>> >  infrastructure up, and I am not sure we can do anything about it,
>> >  unless
>> >  we
>> >  take this with Infra.
>> > >>>
>> > >>> I haven't done a comparison yet buts lets break down the runtime of
>> > >>> a
>> > >>> recent successful neutron full run against neutron master [0].
>> > >>
>> > >> And now for some comparative data from the gate-tempest-dsvm-full job
>> > >> [0]. This job also ran against a master change that merged and ran in
>> > >> the same cloud and region as the neutron job.
>> > >>
>> > > snip
>> > >> Generally each step of this job was quicker. There were big
>> > >> differences
>> > >> in devstack and tempest run time though. Is devstack much slower to
>> > >> setup neutron when compared to nova net? For tempest it looks like we
>> > >> run ~1510 tests against neutron and only ~1269 against nova net. This
>> > >> may account for the large difference there. I also recall that we run
>> > >> ipv6 tempest tests against neutron deployments that were inefficient
>> > >> and
>> > >> booted 2 qemu VMs per test (not sure if that is still the case but
>> > >> illustrates that the tests themselves may not be very quick in the
>> > >> neutron 

Re: [openstack-dev] [tc][fuel][kolla][osa][tripleo] proposing type:deployment

2016-03-22 Thread Jesse Pretorius
On 22 March 2016 at 09:15, Thierry Carrez  wrote:

>
> For OSA, we don't produce the deployment tool, only a set of playbooks. I
> was thinking we might need a type:packaging tag to describe which things we
> produce are just about packaging OpenStack things for usage by outside
> deployment systems (Ansible, Puppet, Chef, Deb, RPM...). So I'm not sure
> your type:deployment tag would apply to OSA.
>

Yeah, I suppose it depends on how you define 'deployment tool'. OSA is an
umbrella project providing Ansible roles which deploy services, and
playbooks that put them together in an integrated deployment.

Fuel similarly has libraries, Puppet roles, plugins, etc which are all
packaged together to provide what we call 'Fuel'.

I expect that there are other similarities - for instance 'Keystone' may be
a service, but that service has libraries and all combined together we call
it a daemon service.

I guess it would be nice to have some sort of designation to allow easier
filtering for consumers, assuming that this actually does add value to
Operators/Packagers who consume these projects.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Boot instance from volume via Horizon dashboard fails

2016-03-22 Thread Eugen Block

Hi, thanks for your response.

Having said that, there is probably a bug in Horizon since it's  
defaulting to vda for the device name when booting from volume.


I think so, too. I'm quite new to Openstack and not 100% sure about my  
statements, but I think my question is not a support issue. I'm trying  
to find the cause for the faulty results when launching instances from  
volume via dashboard.


The libvirt driver in nova ignores the requested device name in boot  
from volume / volume attach requests


Yes, I see that too: in default_device_names_for_instance() the  
bdm.device_name is nulled out. But there is also a "root_device_name",  
which keeps the value provided by a user (CLI or Horizon) and this  
value is applied when trying to boot from that volume, which leads to  
an error. Shouldn't the root_device_name also be nulled out?
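
As an aside, the "ask for the device name after the attach completes"
approach mentioned in the quoted reply below could look roughly like the
following sketch (assuming python-cinderclient and an already-authenticated
keystoneauth1 session object named sess; names are illustrative only):

from cinderclient import client as cinder_client

cinder = cinder_client.Client('2', session=sess)
vol = cinder.volumes.get(volume_id)
if vol.status == 'in-use':
    for attachment in vol.attachments:
        # The virt driver picked this device name, so there is no
        # need to guess "vda" up front.
        print(attachment['server_id'], attachment['device'])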


Sorry if this is not the right channel to address this issue; I  
thought it was. If it's not, what is the way to go here?


Regards,
Eugen

Zitat von Matt Riedemann :


On 3/21/2016 10:31 AM, Mike Perez wrote:

On 16:01 Mar 21, Eugen Block wrote:

Hi all,

I'm just a (new) OpenStack user, not a developer, but I have a question
regarding the Horizon dashboard, specifically launching instances via the
dashboard.


Hi Eugen!

Welcome to the community! This mailing list is development focused  
and not our
support channel. You can request help at our general mailing list  
[1], or Ask

OpenStack [2].

[1] - http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
[2] - https://ask.openstack.org/en/questions/



Having said that, there is probably a bug in Horizon since it's  
defaulting to vda for the device name when booting from volume.


The libvirt driver in nova ignores the requested device name in boot  
from volume / volume attach requests since Liberty [1]. It's best to  
let the virt driver in nova pick the device name, you can get the  
mountpoint via the volume attachment later after the volume's status  
is 'in-use'.


[1] https://review.openstack.org/#/c/189632/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara][QA] Notes about the move of the Sahara Tempest API test to sahara-tests

2016-03-22 Thread Trevor McKay
Thanks Luigi,

  sounds good to me. I'll be happy to help with reviews/approvals as
needed.

Trevor

On Mon, 2016-03-21 at 12:05 +0100, Luigi Toscano wrote:
> On Monday 21 of March 2016 10:50:30 Evgeny Sikachev wrote:
> > Hi, Luigi!
> > 
> > Thanks for this short spec :)
> > Changes are looking good to me, and I think the Sahara team can help with
> > pushing these changes to the sahara-tests repo.
> > 
> > But I have a question: why do we need to use a detached branch for it? 
> A branch with no parent, as its history is totally detached from the 
> existing 
> code. The merge makes the two branches converge. You can see it in the graph 
> of the history on my work repository:
> https://github.com/ltoscano-rh/sahara-tests/commits/master
> > And then
> > we have the next problem "Temporarily exclude a specific branch from the
> > CI". Maybe force push can be easier and faster?
> 
> In my understanding, force push was totally ruled out by Infra.
> 
> Ciao



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-22 Thread Dmitry Guryanov
For GPT disks and non-UEFI boot this method will work, since the MBR will still
contain the first stage of the bootloader code. For UEFI boot things are a
little more complicated: we have to find the EFI system partition, mount it and
remove/edit some files.
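
To illustrate the extra UEFI step (this is only a sketch, not fuel-agent
code, and assumes lsblk exposes the PARTTYPE column): the EFI system
partition can be located by its well-known GPT partition type GUID before
mounting and editing it.

import subprocess

# Well-known GPT partition type GUID of the EFI system partition.
ESP_GUID = 'c12a7328-f81f-11d2-ba4b-00a0c93ec93b'

def find_efi_system_partition():
    out = subprocess.check_output(['lsblk', '-rno', 'NAME,PARTTYPE'])
    for line in out.decode().splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[1].lower() == ESP_GUID:
            return '/dev/' + fields[0]
    return None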

On Tue, Mar 22, 2016 at 4:26 PM, Bulat Gaifullin 
wrote:

> What about GPT[1] disks?
> As I know we have plans to support UEFI boot and GPT disks.
>
>
> [1] https://en.wikipedia.org/wiki/GUID_Partition_Table
>
> Regards,
> Bulat Gaifullin
> Mirantis Inc.
>
>
>
> > On 22 Mar 2016, at 13:46, Dmitry Guryanov 
> wrote:
> >
> > On Tue, 2016-03-22 at 13:07 +0300, Dmitry Guryanov wrote:
> >> Hello,
> >>
> >>  ..
> >>
> >> [0] https://github.com/openstack/fuel-astute/blob/master/mcagents/era
> >> se_node.rb#L162-L174
> >> [1] https://github.com/openstack/fuel-
> >> agent/blob/master/fuel_agent/manager.py#L194-L221
> >
> >
> > Sorry, here is a correct link:
> > https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.
> > py#L228-L252
> >
> >
> >>
> >>
> >> --
> >> Dmitry Guryanov
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Dmitry Guryanov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] propose EmilienM for core

2016-03-22 Thread Paul Belanger
On Sun, Mar 20, 2016 at 02:22:49PM -0400, Dan Prince wrote:
> I'd like to propose that we add Emilien Macchi to the TripleO core
> review team. Emilien has been getting more involved with TripleO during
> this last release. In addition to help with various Puppet things he
> also has experience in building OpenStack installation tooling,
> upgrades, and would be a valuable prospective to the core team. He has
> also added several new features around monitoring into instack-
> undercloud.
> 
> Emilien is currently acting as the Puppet PTL. Adding him to the
> TripleO core review team could help us move faster towards some of the
> upcoming features like composable services, etc.
> 
> If you agree please +1. If there is no negative feedback I'll add him
> next Monday.
> 
> Dan
> 
While not core in TripleO, I'm happy to +1 for adding EmilienM. I think he would
be a great addition to TripleO to help with puppet and CI systems.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Plan for changes affecting DB schema

2016-03-22 Thread Henry Gessau
na...@vn.fujitsu.com  wrote:
> Two weeks ago, I received information about changes affecting the DB schema
> [1] from Henry Gessau just a day before the deadline. I was so surprised about
> this and could not change my plan for my patch sets. Do you know of any plan
> for this?

There should be no surprises. Neutron follows the OpenStack release schedule
[1]. For Mitaka, it looked like [2].

> In the future, do you have a plan for this in the Newton cycle?

The Newton release schedule is at [3], although the details are still being
planned. The detailed dates should be available soon.

[1] http://releases.openstack.org
[2] http://releases.openstack.org/mitaka/schedule.html
[3] http://releases.openstack.org/newton/schedule.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-22 Thread Kevin Benton
Thanks for doing this. I dug into the test_volume_boot_pattern test to see
what was going on.

On the first boot, Nova called Neutron to create a port at 23:29:44 and it
took 441ms to return the port to Nova.[1]
Nova then plugged the interface for that port into OVS a little over 6
seconds later at 23:29:50.[2]
The Neutron agent attempted to process this on the iteration at 23:29:52
[3]; however, it didn't get the ofport populated from the OVSDB monitor...
a bug![4] The Neutron agent did catch it on the next iteration two seconds
later on a retry and notified the Neutron server at 23:29:54.[5]
The Neutron server processed the port ACTIVE change in just under 80ms[6],
but it did not dispatch the notification to Nova until 2 seconds later at
23:29:56 [7] due to the Nova notification batching mechanism[8].

Total time between port create and boot is about 12 seconds. 6 in Nova and
6 in Neutron.

For the Neutron side, the bug fix should eliminate 2 seconds. We could
probably make the Nova notifier batching mechanism a little more aggressive
so it only batches up calls in a very short interval rather than making 2
second buckets at all times. The remaining 2 seconds is just the agent
processing loop interval, which can be tuned with a config but it should be
okay if that's the only bottleneck.
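
To illustrate the kind of change meant here (purely a sketch, not the
actual neutron.notifiers.nova code): collect events and flush them after a
short, tunable interval instead of fixed two-second buckets.

import threading

class ShortIntervalBatcher(object):
    def __init__(self, send_callback, flush_interval=0.2):
        self._send = send_callback       # e.g. the call into the Nova API
        self._interval = flush_interval  # assumed tunable, in seconds
        self._pending = []
        self._lock = threading.Lock()
        self._timer = None

    def queue_event(self, event):
        with self._lock:
            self._pending.append(event)
            if self._timer is None:
                # Start a flush timer only when the first event arrives.
                self._timer = threading.Timer(self._interval, self._flush)
                self._timer.start()

    def _flush(self):
        with self._lock:
            batch, self._pending = self._pending, []
            self._timer = None
        if batch:
            self._send(batch)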

For Nova, we need to improve that 6 seconds after it has created the
Neutron port before it has plugged it into the vswitch. I can see it makes
some other calls to Neutron in this time to list security groups and
floating IPs. Maybe this can be done asynchronously because I don't think
they should block the initial VM boot to pause that plugs in the VIF.

Completely unrelated to the boot process, the entire tempest run spent ~412
seconds building and destroying Neutron resources in setup and teardown.[9]
However, considering the number of tests executed, this seems reasonable so
I'm not sure we need to work on optimizing that yet.


Cheers,
Kevin Benton


1.
http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-svc.txt.gz#_2016-03-21_23_29_45_341
2.
http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-n-cpu.txt.gz#_2016-03-21_23_29_50_629
3.
http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-agt.txt.gz#_2016-03-21_23_29_52_216
4. https://bugs.launchpad.net/neutron/+bug/1560464
5.
http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-agt.txt.gz#_2016-03-21_23_29_54_738
6.
http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-svc.txt.gz#_2016-03-21_23_29_54_813
7.
http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-svc.txt.gz#_2016-03-21_23_29_56_782
8.
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/notifiers/nova.py
9. egrep -R 'tearDown|setUp' tempest.txt.gz | grep 9696 | awk '{print
$(NF)}' | ./fsum
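
(Side note: ./fsum above is presumably a small local helper that just totals
the numbers it reads on stdin; if you don't have it, the same total can be
produced in the awk step itself by replacing the last two stages of the
pipeline with:

    awk '{s += $(NF)} END {print s}'
)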

On Mon, Mar 21, 2016 at 5:09 PM, Clark Boylan  wrote:

> On Mon, Mar 21, 2016, at 01:23 PM, Sean Dague wrote:
> > On 03/21/2016 04:09 PM, Clark Boylan wrote:
> > > On Mon, Mar 21, 2016, at 11:49 AM, Clark Boylan wrote:
> > >> On Mon, Mar 21, 2016, at 11:08 AM, Clark Boylan wrote:
> > >>> On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote:
> >  Do you have a better insight of job runtimes vs jobs in other
> >  projects?
> >  Most of the time in the job runtime is actually spent setting the
> >  infrastructure up, and I am not sure we can do anything about it, unless
> >  we take this up with Infra.
> > >>>
> > >>> I haven't done a comparison yet, but let's break down the runtime of a
> > >>> recent successful neutron full run against neutron master [0].
> > >>
> > >> And now for some comparative data from the gate-tempest-dsvm-full job
> > >> [0]. This job also ran against a master change that merged and ran in
> > >> the same cloud and region as the neutron job.
> > >>
> > > snip
> > >> Generally each step of this job was quicker. There were big
> differences
> > >> in devstack and tempest run time though. Is devstack much slower to
> > >> setup neutron when compared to nova net? For tempest it looks like we
> > >> run ~1510 tests against neutron and only ~1269 against nova net. This
> > >> may account for the large difference there. I also recall that we run
> > >> ipv6 tempest tests against neutron deployments that were inefficient
> and
> > >> booted 2 qemu VMs per test (not sure if that is still the case but
> > >> illustrates that the tests themselves may not be very quick in the
> > >> neutron case).
> > >
> > > Looking at the tempest slowest tests output for each of these jobs
> > > (neutron and nova net) some tests line up really well across jobs and
> > > others do not. In order to get a better handle on the runtime for
> > > individual tests I have pushed https://review.openstack.org/295487

Re: [openstack-dev] [infra][neutron][nova] publish and update Gerrit dashboard link automatically

2016-03-22 Thread Markus Zoeller
> From: Jeremy Stanley 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 02/18/2016 02:05 AM
> Subject: Re: [openstack-dev] [infra][neutron] publish and update 
> Gerrit dashboard link automatically
> 
> On 2016-02-16 14:52:04 -0700 (-0700), Carl Baldwin wrote:
> [...]
> > No matter how it is done, there is the problem of where to host such a
> > page which can be automatically updated daily (or more often) by this
> > script.
> > 
> > Any thoughts from infra on this?
> 
> A neat idea, and sounds like an evolution of/replacement for
> reviewday[1][2]. Our community already has all the tools it needs
> for running scripts and publishing the results in an automated
> fashion (based on a timer, triggered by merged commits in a Git
> repo, et cetera), as well as running Web servers... you could just
> add a vhost to the openstack_project::static class[3] and then a job
> in our project configuration[4] to update it.
> 
> [1] http://status.openstack.org/reviews/
> [2] http://git.openstack.org/cgit/openstack-infra/reviewday/
> [3] http://git.openstack.org/cgit/openstack-infra/system-config/tree/
> modules/openstack_project/manifests/static.pp
> [4] http://git.openstack.org/cgit/openstack-infra/project-config/tree/
> jenkins/jobs/
> -- 
> Jeremy Stanley

I didn't see this thread back then when it started. I think Nova would
benefit from that too. I didn't find a Neutron related change in [1]
as Jeremy suggested. I'm mainly interested in bug fix changes, ordered
by bug report importance.

@Rossella: 
Are you still working on this or is this solved in another way?

References:
[1] 
http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-22 Thread Bulat Gaifullin
What about GPT [1] disks?
As far as I know, we have plans to support UEFI boot and GPT disks.


[1] https://en.wikipedia.org/wiki/GUID_Partition_Table
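
(To illustrate the concern, not as a tested proposal: GPT keeps a backup
header at the end of the disk, so clearing only the first bytes of the disk
would not remove it. Something along the lines of

    sgdisk --zap-all /dev/sda

wipes both the GPT structures and the protective MBR; the device name is just
an example and this assumes sgdisk is available in the bootstrap image.)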

Regards,
Bulat Gaifullin
Mirantis Inc.



> On 22 Mar 2016, at 13:46, Dmitry Guryanov  wrote:
> 
> On Tue, 2016-03-22 at 13:07 +0300, Dmitry Guryanov wrote:
>> Hello,
>> 
>>  ..
>> 
>> [0] https://github.com/openstack/fuel-astute/blob/master/mcagents/era
>> se_node.rb#L162-L174
>> [1] https://github.com/openstack/fuel-
>> agent/blob/master/fuel_agent/manager.py#L194-L221
> 
> 
> Sorry, here is a correct link:
> https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.
> py#L228-L252
> 
> 
>> 
>> 
>> -- 
>> Dmitry Guryanov
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Messaging: everything can talk to everything, and that is a bad thing

2016-03-22 Thread Flavio Percoco

On 21/03/16 21:43 -0400, Adam Young wrote:

I had a good discussion with the Nova folks in IRC today.

My goal was to understand what could talk to what, and the short answer
according to dansmith


" any node in nova land has to be able to talk to the queue for any 
other one for the most part: compute->compute, compute->conductor, 
conductor->compute, api->everything. There might be a few exceptions, 
but not worth it, IMHO, in the current architecture."


Longer conversation is here:
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-03-21.log.html#t2016-03-21T17:54:27

Right now, the message queue is a nightmare.  All sorts of sensitive 
information flows over the message queue: Tokens (including admin) are 
the most obvious.  Every piece of audit data. All notifications and 
all control messages.


Before we continue down the path of "anything can talk to anything" 
can we please map out what needs to talk to what, and why?  Many of 
the use cases seem to be based on something that should be kicked off 
by the conductor, such as "migrate, resize, live-migrate" and it 
sounds like there are plans to make that happen.


So, let's assume we can get to the point where, if node 1 needs to 
talk to node 2, it will do so only via the conductor.  With that in 
place, we can put an access control rule in place:


I don't think this is going to scale well. Eventually, this will require
evolving the conductor to some sort of message scheduler, which is pretty much
what the message bus is supposed to do.

1.  Compute nodes can only read from the queue 
compute.-novacompute-.localdomain

2.  Compute nodes can only write to response queues in the RPC vhost
3.  Compute nodes can only write to notification queues in the 
notification host.


I know that with AMQP, we should be able to identify the writer of a 
message.  This means that each compute node should have its own user.  
I have identified how to do that for Rabbit and QPid.  I assume for 
0mq it would make sense to use ZAP (http://rfc.zeromq.org/spec:27) but 
I'd rather the 0mq maintainers chime in here.
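
(As a rough illustration only, not something Nova or oslo.messaging sets up
for you: with RabbitMQ such rules could be expressed as per-user permission
regexes on the vhost, along the lines of

    rabbitmqctl set_permissions -p /nova compute-node1 \
        "^reply_.*$" "^reply_.*$" "^compute\.node1.*$"

where the three regexes are the configure/write/read permissions. The vhost,
user name and patterns here are invented for the example; the real patterns
would have to match whatever exchanges and queues oslo.messaging actually
declares.)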




NOTE: Gentle reminder that qpidd has been removed from oslo.messaging.

I think you can configure rabbit, amqp1 and other technologies to do what you're
suggesting here without much trouble. TBH, I'm not sure how many changes would
be required in Nova (or even oslo.messaging), but I'd dare to say none are
required.

I think it is safe (and sane) to have the same user on the compute node 
communicate with  Neutron, Nova, and Ceilometer.  This will avoid a 
false sense of security: if one is compromised, they are all going to 
be compromised.  Plan accordingly.


Beyond that, we should have message broker users for each of the 
components that is a client of the broker.


Applications that run on top of the cloud, and that do not get 
presence on the compute nodes, should have their own VHost.  I see 
Sahara on my Tripleo deploy, but I assume there are others.  Either 
they completely get their own vhost, or the apps should share one 
separate from the RPC/Notification vhosts we currently have.  Even 
Heat might fall into this category.


Note that those application users can be allowed to read from the 
notification queues if necessary.  They just should not be using the 
same vhost for their own traffic.


Please tell me if/where I am blindingly wrong in my analysis.



I guess my question is: Have you identified things that need to be changed in
any of the projects for this to be possible? Or is it a pure deployment
recommendation/decision?

I'd argue that any changes (assuming changes are required) are likely to happen
in specific projects (Nova, Neutron, etc) and that once this scenario is
supported, it'll remain a deployment choice to follow it or not. If I want my
undercloud services to use a single vhost and a single user, I must be able to
do that. The proposal in this email complicates deployments significantly,
despite it making sense from a security stand point.

One more thing. Depending on the messaging technology, having different virtual
hosts may have an impact on the performance when running under huge loads given
the fact that the data will be partitioned differently and, therefore,
written/read differently. I don't have good data at hand about this, sorry.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] FYI: how-to subscribe to specific bug tags

2016-03-22 Thread Markus Zoeller
I was a little surprised when I found this, maybe you're in the same
position.

Use case:
I'm interested in new/changed bug reports which are flagged with
a specific tag, but I don't want to open Launchpad every time to
query them. I'd like to get a notification by mail.

How to:
https://wiki.openstack.org/wiki/Nova/BugTriage#How_to_Subscribe_to_Tags

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Messaging: everything can talk to everything, and that is a bad thing

2016-03-22 Thread Andrew Laski


On Tue, Mar 22, 2016, at 07:00 AM, Doug Hellmann wrote:
> 
> 
> > On Mar 21, 2016, at 9:43 PM, Adam Young  wrote:
> > 
> > I had a good discussion with the Nova folks in IRC today.
> > 
> > My goal was to understand what could talk to what, and the short answer 
> > according to dansmith
> > 
> > " any node in nova land has to be able to talk to the queue for any other 
> > one for the most part: compute->compute, compute->conductor, 
> > conductor->compute, api->everything. There might be a few exceptions, but 
> > not worth it, IMHO, in the current architecture."
> > 
> > Longer conversation is here:
> > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-03-21.log.html#t2016-03-21T17:54:27
> > 
> > Right now, the message queue is a nightmare.  All sorts of sensitive 
> > information flows over the message queue: Tokens (including admin) are the 
> > most obvious.  Every piece of audit data. All notifications and all control 
> > messages.
> > 
> > Before we continue down the path of "anything can talk to anything" can we 
> > please map out what needs to talk to what, and why?  Many of the use cases 
> > seem to be based on something that should be kicked off by the conductor, 
> > such as "migrate, resize, live-migrate" and it sounds like there are plans 
> > to make that happen.
> > 
> > So, let's assume we can get to the point where, if node 1 needs to talk to 
> > node 2, it will do so only via the conductor.  With that in place, we can 
> > put an access control rule in place:

If you specifically mean compute nodes by "node 1" and "node 2" that is
something that's being worked towards. If you mean "node" more generally
that's not something that is planned.

> 
> Shouldn't we be trying to remove central bottlenecks by decentralizing
> communications where we can?

I think that's a good goal to continue having. Some deployers have set up
firewalls between compute nodes, or between compute nodes and the
database, so we use the conductor to facilitate communications between
those nodes. But in general we don't want to send all communications
through the conductor. 

> 
> Doug
> 
> 
> > 
> > 1.  Compute nodes can only read from the queue 
> > compute.-novacompute-.localdomain
> > 2.  Compute nodes can only write to response queues in the RPC vhost
> > 3.  Compute nodes can only write to notification queues in the notification 
> > host.
> > 
> > I know that with AMQP, we should be able to identify the writer of a 
> > message.  This means that each compute node should have its own user.  I 
> > have identified how to do that for Rabbit and QPid.  I assume for 0mq it 
> > would make sense to use ZAP (http://rfc.zeromq.org/spec:27) but I'd rather 
> > the 0mq maintainers chime in here.
> > 
> > I think it is safe (and sane) to have the same user on the compute node 
> > communicate with  Neutron, Nova, and Ceilometer.  This will avoid a false 
> > sense of security: if one is compromised, they are all going to be 
> > compromised.  Plan accordingly.
> > 
> > Beyond that, we should have message broker users for each of the components 
> > that is a client of the broker.
> > 
> > Applications that run on top of the cloud, and that do not get presence on 
> > the compute nodes, should have their own VHost.  I see Sahara on my Tripleo 
> > deploy, but I assume there are others.  Either they completely get their 
> > own vhost, or the apps should share one separate from the RPC/Notification 
> > vhosts we currently have.  Even Heat might fall into this category.
> > 
> > Note that those application users can be allowed to read from the 
> > notification queues if necessary.  They just should not be using the same 
> > vhost for their own traffic.
> > 
> > Please tell me if/where I am blindingly wrong in my analysis.
> > 
> > 
> > 
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-03-22 Thread Alex Xu
Hi,

We have weekly Nova API meeting tomorrow. The meeting is being held
Wednesday UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Messaging: everything can talk to everything, and that is a bad thing

2016-03-22 Thread gordon chung


On 21/03/2016 9:43 PM, Adam Young wrote:
> I had a good discussion with the Nova folks in IRC today.
>
> My goal was to understand what could talk to what, and the short answer
> according to dansmith
>
> " any node in nova land has to be able to talk to the queue for any
> other one for the most part: compute->compute, compute->conductor,
> conductor->compute, api->everything. There might be a few exceptions,
> but not worth it, IMHO, in the current architecture."
>
> Longer conversation is here:
>   
> http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-03-21.log.html#t2016-03-21T17:54:27
>
> Right now, the message queue is a nightmare.  All sorts of sensitive
> information flows over the message queue: Tokens (including admin) are
> the most obvious.  Every piece of audit data. All notifications and all
> control messages.
>
> Before we continue down the path of "anything can talk to anything" can
> we please map out what needs to talk to what, and why?  Many of the use
> cases seem to be based on something that should be kicked off by the
> conductor, such as "migrate, resize, live-migrate" and it sounds like
> there are plans to make that happen.
>

i think the community is split on "anything can talk to anything". this
was a side topic [1] on another thread a few months ago regarding
versioned notifications (FUN!).

i agree with you that we should map what talks with what and why. i
personally think it would reduce load and security risk on the queue since
the messages and the content would be tailored to the consumer rather than
the current grab-bag dump. the counter argument was that it's a lot less
flexible and contrary to the purpose of pub/sub.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/080215.html

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Smaug]- IRC Meeting tomorrow (03/22) - 1400 UTC

2016-03-22 Thread Saggi Mizrahi
Hi All,

We will hold our bi-weekly IRC meeting today (Tuesday, 03/22) at 1400
UTC in #openstack-meeting

Please review the proposed meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/smaug

Please feel free to add to the agenda any subject you would like to discuss.

Thanks,
Saggi


-
This email and any files transmitted and/or attachments with it are 
confidential and proprietary information of
Toga Networks Ltd., and intended solely for the use of the individual or entity 
to whom they are addressed.
If you have received this email in error please notify the system manager. This 
message contains confidential
information of Toga Networks Ltd., and is intended only for the individual 
named. If you are not the named
addressee you should not disseminate, distribute or copy this e-mail. Please 
notify the sender immediately
by e-mail if you have received this e-mail by mistake and delete this e-mail 
from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing 
or taking any action in reliance on
the contents of this information is strictly prohibited.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Plan for changes affecting DB schema

2016-03-22 Thread Ihar Hrachyshka

na...@vn.fujitsu.com wrote:


Hi everyone,

Two weeks ago, I received information about a change affecting the DB
schema [1] from Henry Gessau just a day before the deadline. I was surprised
by this and could not adjust the plan for my patch sets. Do you know of any
plan for this?

In the future, do you have a plan for this in the Newton cycle?

[1]  
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088680.html




I don’t see any issue with updating the patches you have for Newton. You  
have 6 months to get your patches in Newton. The alembic cut-off date for  
Newton will occur at the end of the current cycle.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tempest][cinder] jenkins fails due to server fault

2016-03-22 Thread Lenny Verkhovsky
Also our Mellanox CI failed, see log [1].
Looks like a missing function.

/opt/stack/tempest/.tox/venv/local/lib/python2.7/site-packages/tempest_lib/__init__.py:28:
 DeprecationWarning: tempest-lib is deprecated for future bug-fixes and code 
changes in favor of tempest. Please change your imports from tempest_lib to 
tempest.lib
2016-03-18 11:06:56.149 |   DeprecationWarning)
2016-03-18 11:06:56.322 | Traceback (most recent call last):
2016-03-18 11:06:56.322 |   File ".tox/venv/bin/tempest", line 10, in 
2016-03-18 11:06:56.322 | sys.exit(main())
2016-03-18 11:06:56.322 |   File "/opt/stack/tempest/tempest/cmd/main.py", line 
48, in main
2016-03-18 11:06:56.322 | return the_app.run(argv)
2016-03-18 11:06:56.322 |   File 
"/opt/stack/tempest/.tox/venv/local/lib/python2.7/site-packages/cliff/app.py", 
line 226, in run
2016-03-18 11:06:56.322 | result = self.run_subcommand(remainder)
2016-03-18 11:06:56.322 |   File 
"/opt/stack/tempest/.tox/venv/local/lib/python2.7/site-packages/cliff/app.py", 
line 309, in run_subcommand
2016-03-18 11:06:56.322 | subcommand = 
self.command_manager.find_command(argv)
2016-03-18 11:06:56.322 |   File 
"/opt/stack/tempest/.tox/venv/local/lib/python2.7/site-packages/cliff/commandmanager.py",
 line 75, in find_command
2016-03-18 11:06:56.322 | cmd_factory = cmd_ep.resolve()
2016-03-18 11:06:56.322 |   File 
"/opt/stack/tempest/.tox/venv/local/lib/python2.7/site-packages/pkg_resources/__init__.py",
 line 2208, in resolve
2016-03-18 11:06:56.323 | module = __import__(self.module_name, 
fromlist=['__name__'], level=0)
2016-03-18 11:06:56.323 |   File 
"/opt/stack/tempest/tempest/cmd/verify_tempest_config.py", line 29, in 
2016-03-18 11:06:56.323 | from tempest import clients
2016-03-18 11:06:56.323 |   File "/opt/stack/tempest/tempest/clients.py", line 
158, in 
2016-03-18 11:06:56.323 | from 
tempest.services.volume.v1.json.backups_client import BackupsClient
2016-03-18 11:06:56.323 |   File 
"/opt/stack/tempest/tempest/services/volume/v1/json/backups_client.py", line 
16, in 
2016-03-18 11:06:56.323 | from tempest.services.volume.base import 
base_backups_client
2016-03-18 11:06:56.323 |   File 
"/opt/stack/tempest/tempest/services/volume/base/base_backups_client.py", line 
21, in 
2016-03-18 11:06:56.323 | from tempest.lib.common import rest_client
2016-03-18 11:06:56.323 | ImportError: No module named lib.common
2016-03-18 11:06:56.348 | ERROR: InvocationError: 
'/opt/stack/tempest/.tox/venv/bin/tempest verify-config -uro 
../../../tmp/tmp.c1QPWmYfhW'
2016-03-18 11:06:56.349 | ___ summary 

2016-03-18 11:06:56.349 | ERROR:   venv: commands failed
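
(A guess on my side, not verified against this particular run: this kind of
ImportError usually means the tempest .tox virtualenv was created before the
tempest_lib -> tempest.lib move and is now stale. Recreating the venv normally
clears it, e.g.

    cd /opt/stack/tempest && tox -r -e venv --notest

and then re-running the verify-config command.)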



[1] 
http://144.76.193.39/ci-artifacts/284106/3/Tempest-Sriov/logs/stack.sh.log.gz

From: Ivan Kolodyazhny [mailto:e...@e0ne.info]
Sent: Tuesday, March 22, 2016 1:39 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Tempest][cinder] jenkins fails due to server fault

Poornima,

Both Masayuki commented in the review request. For more details you can check 
cinder-api logs.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Mon, Mar 21, 2016 at 8:18 AM, Poornima Kshirsagar 
> wrote:
Hi,

Wrote a Tempest test to check force_delete_backup method [1].

Check the tempest test run was successful see [2]

However jenkins fails with below trace back



Captured traceback:
2016-03-18 09:14:16.884 | ~~~
2016-03-18 09:14:16.884 | Traceback (most recent call last):
2016-03-18 09:14:16.884 |   File 
"tempest/api/volume/admin/test_volumes_backup.py", line 55, in 
test_volume_backup_create_and_force_delete_when_available
2016-03-18 09:14:16.884 | 
self.backups_adm_client.force_delete_backup(backup['id'])
2016-03-18 09:14:16.884 |   File 
"tempest/services/volume/base/base_backups_client.py", line 119, in 
force_delete_backup
2016-03-18 09:14:16.884 | resp, body = self.post('backups/%s/action' % 
(backup_id), post_body)
2016-03-18 09:14:16.884 |   File "tempest/lib/common/rest_client.py", line 
259, in post
2016-03-18 09:14:16.885 | return self.request('POST', url, 
extra_headers, headers, body)
2016-03-18 09:14:16.885 |   File "tempest/lib/common/rest_client.py", line 
642, in request
2016-03-18 09:14:16.885 | resp, resp_body)
2016-03-18 09:14:16.885 |   File "tempest/lib/common/rest_client.py", line 
761, in _error_checker
2016-03-18 09:14:16.885 | message=message)
2016-03-18 09:14:16.885 | tempest.lib.exceptions.ServerFault: Got server 
fault
2016-03-18 09:14:16.885 | Details: The server has either erred or is 
incapable of performing the requested operation.
2016-03-18 09:14:16.886 |



Is this a known issue? How do I fix it?



[1] https://review.openstack.org/#/c/284106/

[2] http://paste.openstack.org/show/491221/



Re: [openstack-dev] [Tempest][cinder] jenkins fails due to server fault

2016-03-22 Thread Ivan Kolodyazhny
Poornima,

Both Masayuki commented in the review request. For more details you can
check cinder-api logs.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Mon, Mar 21, 2016 at 8:18 AM, Poornima Kshirsagar 
wrote:

> Hi,
>
> Wrote a Tempest test to check force_delete_backup method [1].
>
> Check the tempest test run was successful see [2]
>
> However jenkins fails with below trace back
>
> 
>
> Captured traceback:
> 2016-03-18 09:14:16.884 | ~~~
> 2016-03-18 09:14:16.884 | Traceback (most recent call last):
> 2016-03-18 09:14:16.884 |   File
> "tempest/api/volume/admin/test_volumes_backup.py", line 55, in
> test_volume_backup_create_and_force_delete_when_available
> 2016-03-18 09:14:16.884 |
>  self.backups_adm_client.force_delete_backup(backup['id'])
> 2016-03-18 09:14:16.884 |   File
> "tempest/services/volume/base/base_backups_client.py", line 119, in
> force_delete_backup
> 2016-03-18 09:14:16.884 | resp, body =
> self.post('backups/%s/action' % (backup_id), post_body)
> 2016-03-18 09:14:16.884 |   File "tempest/lib/common/rest_client.py",
> line 259, in post
> 2016-03-18 09:14:16.885 | return self.request('POST', url,
> extra_headers, headers, body)
> 2016-03-18 09:14:16.885 |   File "tempest/lib/common/rest_client.py",
> line 642, in request
> 2016-03-18 09:14:16.885 | resp, resp_body)
> 2016-03-18 09:14:16.885 |   File "tempest/lib/common/rest_client.py",
> line 761, in _error_checker
> 2016-03-18 09:14:16.885 | message=message)
> 2016-03-18 09:14:16.885 | tempest.lib.exceptions.ServerFault: Got
> server fault
> 2016-03-18 09:14:16.885 | Details: The server has either erred or is
> incapable of performing the requested operation.
> 2016-03-18 09:14:16.886 |
>
> 
>
> Is this a known issue? How do I fix it?
>
>
>
> [1] https://review.openstack.org/#/c/284106/
>
> [2] http://paste.openstack.org/show/491221/
>
>
> Regards
> Poornima
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FFE for fuel-openstack-tasks and fuel-remove-conflict-openstack

2016-03-22 Thread Bogdan Dobrelya
On 03/22/2016 09:25 AM, Matthew Mosesohn wrote:
> Andrew,
> 
> The stubs + deprecation warning is exactly the approach I believe we
> should take for renaming/moving tasks.

LGTM as long as it keeps plugins intact. So let's please update the
patch [0] or submit the required patches to unblock it.

[0] https://review.openstack.org/#/c/283332/

> 
> If it was possible for a plugin to override a task, but preserve the
> fields from the original task, we could avoid such scenarios. What I
> mean is that if the following task:
> 
> - id: workloads_collector_add
>   type: puppet
>   version: 2.0.0
>   groups: [primary-controller]
>   required_for: [deploy_end]
>   requires: [keystone, primary-keystone]
>   parameters:
> puppet_manifest:
> /etc/puppet/modules/osnailyfacter/modular/keystone/workloads_collector_add.pp
> puppet_modules: /etc/puppet/modules
> timeout: 1800
> 
> If we could override the groups field only, a plugin developer would not
> need to copy and paste the dependencies and other parameters. But until
> that works, we should effectively deprecate top level tasks whenever
> possible.
> 
> On Mar 22, 2016 2:52 AM, "Andrew Woodward"  > wrote:
> 
> I've mocked up the change to implementation using the already landed
> changes to ceph as an example
> 
> https://review.openstack.org/295571 
> 
> On Mon, Mar 21, 2016 at 3:44 PM Andrew Woodward  > wrote:
> 
> We had originally planned for the FFEs for both
> fuel-openstack-tasks[1] and fuel-remove-conflict-openstack to
> [2] to close on 3/20, This would have placed them before changes
> that conflict with
> fuel-refactor-osnailyfacter-for-puppet-master-compatibility [3].
> 
> [1]
> 
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088297.html
> [2]
> 
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088298.html
> [3]
> 
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/089028.html
> 
> However we found this morning that the changes from [2], and
> more of issue [1] will result in further issues such as [4],
> where as the task files move, any task that explicitly relied on
> it, now no longer is in the same path.
> 
> [4] https://review.openstack.org/#/c/295170/
> 
> Due to this newly identified issue with backwards compatibility,
> it appears that [4] shows that we have plugins using interfaces
> that we don't have formal coverage for, so if we introduce this
> set of changes, we will cause breakage for plugins that use
> fuel's current tasks.
> 
> From a deprecation standpoint we don't have a way to deal with
> this, unless  fuel-openstack-tasks [1] lands after
> fuel-refactor-osnailyfacter-for-puppet-master-compatibility [3].
> In this case we can take advantage of the class include stubs,
> leaving a copy in the old location
> (osnailyfacter/modular/roles/compute.pp) pointing to the new
> include location (include openstack_tasks::roles::compute) and
> adding a warning for deprecation. The tasks includes in the new
> location openstack_tasks/examples/roles/compute.pp would simply
> include the updated class location w/o the warning.
> 
> This would take care of [1] and it's review [5]
> 
> [5] https://review.openstack.org/283332
> 
> This still leaves [2] un-addressed, we still have 3 open CR for it:
> 
> [6] Compute https://review.openstack.org/285567
> [7] Cinder https://review.openstack.org/294736
> [8] Swift https://review.openstack.org/294979
> 
> Compute [6] is in good shape, while Cinder [7] and Swift [8] are
> not. For these do we want to continue to land them, if so what
> do we want to do about the now deprecated openstack:: tasks? We
> could leave them in place with a warning since we would not be
> using them
> 
> -- 
> 
> --
> 
> Andrew Woodward
> 
> Mirantis
> 
> Fuel Community Ambassador
> 
> Ceph Community
> 
> -- 
> 
> --
> 
> Andrew Woodward
> 
> Mirantis
> 
> Fuel Community Ambassador
> 
> Ceph Community
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

[openstack-dev] [release][documentation] openstack-doc-tools 0.34.0 release

2016-03-22 Thread no-reply
We are glad to announce the release of:

openstack-doc-tools 0.34.0: Tools for OpenStack Documentation

With source available at:

http://git.openstack.org/cgit/openstack/openstack-doc-tools

Please report issues through launchpad:

http://bugs.launchpad.net/openstack-manuals

For more details, please see below.

Changes in openstack-doc-tools 0.33.0..0.34.0
-

d01a68d Do not publish translated Debian Guide
87f23f5 Make diff_branches.py work again
6264763 autohelp: add zaqar to the default projects
74a5a0d Fix a typo in autogenerate_config_docs/README.rst
c423d9e Remove parallel building
cba7f7f Better identify deprecated configuration options
557d0af Provide a more human identifable option data type
4bca964 Catch sqlalchemy.exc.InvalidRequestError exception
0c4e6e2 Updated prerequisite packages for ironic & swift
2530250 Remove parallel building
360f159 [nova-cli-ref] Adjust long line for check niceness
0e5bff7 Note to add 'jinja2' and 'markupsafe' to requirements
028dcd0 Change help category markup
9a4ffef Updated from global requirements
92944e1 Remove markdown script
1f17e5d [autohelp] Remove oslo.incubator installation
faaee70 Change Git message got cli reference update
561228a Use pep8 instead of linters
a12917d Add option pattern to cli ref tool

Diffstat (except docs and test files)
-

CONTRIBUTING.rst |  6 +++
autogenerate_config_docs/README.rst  | 12 +++--
autogenerate_config_docs/autohelp-wrapper| 31 +---
autogenerate_config_docs/autohelp.py | 30 ++-
autogenerate_config_docs/diff_branches.py| 45 +
autogenerate_config_docs/requirements.txt|  1 -
bin/doc-tools-build-rst  | 10 +---
bin/doc-tools-check-languages| 18 +++
bin/doc-tools-update-cli-reference   |  4 +-
os_doc_tools/commands.py |  9 +++-
os_doc_tools/doctest.py  | 30 ---
os_doc_tools/scripts/markdown-docbook.sh | 74 
os_doc_tools/scripts/pandoc-template.docbook | 21 
requirements.txt |  2 +-
tox.ini  |  7 ++-
15 files changed, 103 insertions(+), 197 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 064da08..7406383 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9 +9 @@ lxml>=2.3 # BSD
-oslo.config>=3.4.0 # Apache-2.0
+oslo.config>=3.7.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][horizon][i18n] can we release django-openstack-auth of stable/mitaka for translations

2016-03-22 Thread Doug Hellmann

> On Mar 22, 2016, at 6:18 AM, Akihiro Motoki  wrote:
> 
> Hi release management team,
> 
> Can we have a new release of django-openstack-auth from stable/mitaka
> branch for translations?
> 
> What is happening?
> django-openstack-auth is a library project consumed by Horizon.
> The (soft) string freeze happens when the milestone-3 is cut.
> The milestone-3 is also the dependency freeze.
> This is a dilemma between dependency freeze and translation start,
> and there is no chance to import translations of django-openstack-auth
> for Mitaka.
> There are several updates of translations after 2.2.0 (mitaka) release [1].
> As the i18n team, we would like to have a released version of
> django-openstack-auth
> with up-to-date translations.
> 
> Which version?
> The current version of django-openstack-auth for Mitaka is 2.2.0.
> What version number is recommended, 2.2.1 or 2.3.0?

Stable branches for libraries should only ever increment the patch level, so 
2.2.1. 

> 
> When?
> Hopefully a new version is released soon, around the time Mitaka is shipped.
> The current translation deadline is set to Mar 28 (the beginning of
> the release week).
> In my understanding, we avoid releasing a new version of a library before
> the Mitaka release.
> Distributors can choose which version is included in their distribution.

Even if we don't do the release before the end of this cycle, we can release it 
as a stable update. Either way, when you are ready for a new release submit the 
patch to openstack/releases and include in the commit message the note that the 
update includes translations. 

Do you think it would be possible for Newton to start translations for 
libraries sooner, before their freeze date?

Doug

> 
> Any suggestions would be appreciated.
> 
> Thanks,
> Akihiro
> 
> [1] 
> https://review.openstack.org/#/q/topic:zanata/translations+project:openstack/django_openstack_auth
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][election][ec2-api][winstackers][stable] status of teams without PTL candidates

2016-03-22 Thread Claudiu Belu
Hello everyone,

My name is Claudiu Belu, the current PTL for the Winstackers team. I apologize
for missing such an important matter. It is a new project/team, and I didn't
expect to go through elections this cycle.
I will be more careful in the future.
As for os-win, the project has been actively evolving and is currently
used in several projects: nova, cinder, ceilometer, networking-hyperv,
compute-hyperv (os-brick not yet, due to the Feature Freeze).
We've solved some non-trivial, long-lasting issues and improved the
overall performance greatly. The team is quite small, but there are new people
joining in and attending the weekly Hyper-V meetings.

There are still things that can be done and there is still room for
improvement, which is why I would like to propose myself as the Winstackers
PTL for Newton.

We will be attending the OpenStack summit, and we have one work session.

Again, I apologize for the inconvenience.

Best regards,

Claudiu Belu


From: Mike Perez [thin...@gmail.com]
Sent: Tuesday, March 22, 2016 5:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc][election][ec2-api][winstackers][stable] 
status of teams without PTL candidates

On 12:33 Mar 21, Doug Hellmann wrote:
>
> > On Mar 21, 2016, at 12:03 PM, Alexandre Levine  
> > wrote:
> >
> > Doug,
> >
> > Let me clarify a bit the situation.
> > Before this February there wasn't such a project at all. EC2 API was
> > a built-in part of nova so no dedicated PTL was required. The built-in part
> > got removed and our project got promoted. We're a team of 3 developers
> > who are nevertheless committed to this support for a year and a half
> > already. The reason I didn't nominate myself is solely because I'm new to
> > the process and I thought that the first cycle would actually start from
> > Mitaka, so I didn't have to bother. I hope it's forgivable and that our
> > ongoing support of the code, to make sure it works with both OpenStack and
> > Amazon, will make up for it, if only a little.
>
> Yes, please don't take my original proposal as anything other than me
> suggesting some "clean up" based on me not having all the info about the
> status of EC2. If we need to clarify that all projects are expected to
> participate in elections, that's something we can address. I'll look at
> wording of the existing requirements in the next week or so. If the team has
> a leader, you're all set and I'm happy to support keeping EC2 an official
> team.

Hope this cover things:

https://review.openstack.org/#/c/295581
https://review.openstack.org/#/c/295609
https://review.openstack.org/#/c/295611

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Messaging: everything can talk to everything, and that is a bad thing

2016-03-22 Thread Doug Hellmann


> On Mar 21, 2016, at 9:43 PM, Adam Young  wrote:
> 
> I had a good discussion with the Nova folks in IRC today.
> 
> My goal was to understand what could talk to what, and the short answer 
> according to dansmith
> 
> " any node in nova land has to be able to talk to the queue for any other one 
> for the most part: compute->compute, compute->conductor, conductor->compute, 
> api->everything. There might be a few exceptions, but not worth it, IMHO, in 
> the current architecture."
> 
> Longer conversation is here:
> http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-03-21.log.html#t2016-03-21T17:54:27
> 
> Right now, the message queue is a nightmare.  All sorts of sensitive 
> information flows over the message queue: Tokens (including admin) are the 
> most obvious.  Every piece of audit data. All notifications and all control 
> messages.
> 
> Before we continue down the path of "anything can talk to anything" can we 
> please map out what needs to talk to what, and why?  Many of the use cases 
> seem to be based on something that should be kicked off by the conductor, 
> such as "migrate, resize, live-migrate" and it sounds like there are plans to 
> make that happen.
> 
> So, let's assume we can get to the point where, if node 1 needs to talk to 
> node 2, it will do so only via the conductor.  With that in place, we can put 
> an access control rule in place:

Shouldn't we be trying to remove central bottlenecks by decentralizing 
communications where we can?

Doug


> 
> 1.  Compute nodes can only read from the queue 
> compute.-novacompute-.localdomain
> 2.  Compute nodes can only write to response queues in the RPC vhost
> 3.  Compute nodes can only write to notification queues in the notification 
> host.
> 
> I know that with AMQP, we should be able to identify the writer of a message. 
>  This means that each compute node should have its own user.  I have 
> identified how to do that for Rabbit and QPid.  I assume for 0mq it would 
> make sense to use ZAP (http://rfc.zeromq.org/spec:27) but I'd rather the 0mq 
> maintainers chime in here.
> 
> I think it is safe (and sane) to have the same user on the compute node 
> communicate with  Neutron, Nova, and Ceilometer.  This will avoid a false 
> sense of security: if one is compromised, they are all going to be 
> compromised.  Plan accordingly.
> 
> Beyond that, we should have message broker users for each of the components 
> that is a client of the broker.
> 
> Applications that run on top of the cloud, and that do not get presence on 
> the compute nodes, should have their own VHost.  I see Sahara on my Tripleo 
> deploy, but I assume there are others.  Either they completely get their own 
> vhost, or the apps should share one separate from the RPC/Notification vhosts 
> we currently have.  Even Heat might fall into this category.
> 
> Note that those application users can be allowed to read from the 
> notification queues if necessary.  They just should not be using the same 
> vhost for their own traffic.
> 
> Please tell me if/where I am blindingly wrong in my analysis.
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-22 Thread Dmitry Guryanov
On Tue, 2016-03-22 at 13:07 +0300, Dmitry Guryanov wrote:
> Hello,
> 
> ..
> 
> [0] https://github.com/openstack/fuel-astute/blob/master/mcagents/era
> se_node.rb#L162-L174
> [1] https://github.com/openstack/fuel-
> agent/blob/master/fuel_agent/manager.py#L194-L221


Sorry, here is a correct link:
https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.
py#L228-L252


> 
> 
> -- 
> Dmitry Guryanov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][horizon][i18n] can we release django-openstack-auth of stable/mitaka for translations

2016-03-22 Thread Akihiro Motoki
Hi release management team,

Can we have a new release of django-openstack-auth from stable/mitaka
branch for translations?

What is happening?
django-openstack-auth is a library project consumed by Horizon.
The (soft) string freeze happens when the milestone-3 is cut.
The milestone-3 is also the dependency freeze.
This is a dilemma between dependency freeze and translation start,
and there is no chance to import translations of django-openstack-auth
for Mitaka.
There are several updates of translations after 2.2.0 (mitaka) release [1].
As the i18n team, we would like to have a released version of
django-openstack-auth
with up-to-date translations.

Which version?
The current version of django-openstack-auth for Mitaka is 2.2.0.
What version number is recommended, 2.2.1 or 2.3.0?

When?
Hopefully a new version is released soon, around the time Mitaka is shipped.
The current translation deadline is set to Mar 28 (the beginning of
the release week).
In my understanding, we avoid releasing a new version of a library before
the Mitaka release.
Distributors can choose which version is included in their distribution.

Any suggestions would be appreciated.

Thanks,
Akihiro

[1] 
https://review.openstack.org/#/q/topic:zanata/translations+project:openstack/django_openstack_auth

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable][ironic] ironic-python-agent 1.2.0 release (mitaka)

2016-03-22 Thread no-reply
We are delighted to announce the release of:

ironic-python-agent 1.2.0: Ironic Python Agent Ramdisk

This release is part of the mitaka stable release series.

For more details, please see below.

1.2.0
^


New Features


* Add new 'system_vendor' information to data - Add hardware vendor
  information(product name, serial number, manufacturer) to data. This
  will be able to give Ironic Inspector hints to detect driver.

* Add support for partition images in IPA. This commit adds the
  ironic-lib as the requirement for the IPA package.

* Debug logging can now be enabled by setting "ipa-debug" kernel
  parameter.

* Root device hints extended to support the device name.

* Add a new sync() command to the standby extension. When invoked,
  the new command is responsible for flushing the file system buffers
  to the disk.


Bug Fixes
*

* dmidecode output produces less parsing errors and logs common and
  normal output like "No Module Installed" or "Not Installed" in debug
  instead of error.

* IPA will now advertise IP address via which the Ironic API is
  routed instead of using the first available. See
  https://bugs.launchpad.net /ironic-python-agent/+bug/1558956.

* This enables virtual media deploy even if virtual floppy device
  name is capitalized to "/dev/disk/by-label/IR-VFD-DEV". see
  https://bugs.launchpad.net/ironic/+bug/1541167 for details.

* Stop using SYSRQ when performing the in-band reboot or power off
  because it has a similar effect to a hardware reset button/power
  switch and can be problematic on some hardware types. Instead,
  reboot/power off the node via the "poweroff" and "reboot" commands
  (soft power action).

* Ensure block devices are detected by the host OS before listing
  them.

Changes in ironic-python-agent 1.1.0..1.2.0
---

936b2e4 Fixes the agent message for uefi netboot for partition image
b936829 Add psmisc and dosfstools to IPA packages list
6829d34 Bind to interface routable to the ironic host, not a random one
7a24ba8 Fix tinyipa build uname, picking up hosts kernel
d3f6cfb Fix build tinyipa to work on Fedora
cbd90c1 Replace SYSRQ commands
4b802c4 Add sync() command to the standby module
944595a Add support for partition images in agent driver.
bb60578 Fixes programmatic error in _install_grub()
f09dce7 Fix programmatic error in heartbeat()
58f86d0 Stop trying to log stdout when fetching logs during inspection
944dc4e CoreOS: Disable unused services
d25d94b Change to use WARNING level for heartbeat conflict errors
055998c Wait for udev to settle before listing the block devices
b5f9d31 Updated from global requirements
f961169 Update DIB description for IPA docs
1437e15 Allow enabling debug level via kernel cmdline
8bf5651 Add DIB ironic-agent element to readme for IPA
7fe40bb Replace all the 'self.log' calls with global LOG
d66fa52 Reduced restriction of parsing for dmidecode output
c716293 Catch OSError as well to return a better error message
b8e8927 Updated from global requirements
c9674da Document hardware inventory sent to lookup and inspection
52fc4f8 Update unit tests to use six.moves.builtins.open
f9344a7 TinyIPA: Prevent install of pre-release dependencies
589145b TinyIPA: Explicitly use /bin/bash instead of /bin/sh
3823a53 Add 'system_vendor' information to data
855e301 Updated from global requirements
8f5ed3e Clear GPT and MBR data structures on disk before imaging
73f81f2 Fix vfd mount for capitalized device name
1ffaaf6 Add support for proxy servers during image build
df701c9 Replace backoff looping call with oslo_service provided version
7c85ed8 Leave git installed in docker builder
6752ce8 Extend root device hints to support device name
632c7e6 Add tinyipa to IPA imagebuild directory
33b482a Updated from global requirements
0090512 Disable xattrs in IPA extraction
fac700c Change assertTrue(isinstance()) by optimal assert
61b4387 Allow hardware managers to override clean step priority
da90010 Update typos
2b07976 Fix params order in assertEqual
b563196  make enforce_type=True in CONF.set_override
a70d994 Switch to post-versioning
8522a55 Remove unused logging

Diffstat (except docs and test files)
-

.gitignore |   1 +
Dockerfile |  36 ++-
README.rst |   3 +-
imagebuild/README.rst  |   3 +
imagebuild/coreos/docker_build.bash|  17 +
imagebuild/coreos/docker_clean.bash|   2 +-
imagebuild/coreos/oem/cloud-config.yml |  20 +-
imagebuild/tinyipa/.gitignore  |  12 +
imagebuild/tinyipa/Makefile|  31 ++
imagebuild/tinyipa/README.rst  |  82 +
imagebuild/tinyipa/build-iso.sh|  16 +
imagebuild/tinyipa/build-tinyipa.sh|  91 ++

[openstack-dev] [Solar] Weekly update

2016-03-22 Thread Jedrzej Nowak
Hello,

This is weekly update from Solar project.

F2S:
- we're working on Class2 LCM demo using Fuel tasks

Solar itself:
- the staging procedure now supports both implicit and explicit stages
- centos 7 patches merged (can be used instead of ubuntu)
- better pg pool implementation
- various bug fixes

Also, last Saturday I had a short presentation about Solar in Katowice.

--
Warm regards
Jedrzej Nowak

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-22 Thread Dmitry Guryanov
Hello,

Here is the start of the discussion:
http://lists.openstack.org/pipermail/openstack-dev/2015-December/083021.html
I subscribed to this mailing list later, so I am replying here instead.

Currently we clear a node's disks in two places: the first is before the
reboot into the bootstrap image [0], and the second is just before
provisioning in fuel-agent [1].

There are two problems that erasing the first megabyte of disk data is meant
to solve: the node should not boot from the HDD after a reboot, and the new
partitioning scheme should overwrite the previous one.

The first problem could be solved by zeroing the first 512 bytes of each disk
(not partition), or even just the first 446 bytes to be precise, because the
last 66 bytes hold the partition table; see
https://wiki.archlinux.org/index.php/Master_Boot_Record .

The second problem should be solved only after the reboot into bootstrap,
because if we bring a new node to the cluster from some other place and
boot it with the bootstrap image, it may well have disks with some
partitions, md devices and LVM volumes. All these entities should be
correctly cleared before provisioning, not before the reboot, and fuel-agent
does that in [1].

I propose to remove the erasing of the first 1M of each partition, because it
can lead to errors in FS kernel drivers and to kernel panics. The existing
workaround (rebooting in case of kernel panic) is bad because the panic may
occur just after clearing the first partition of the first disk; after the
reboot the BIOS will read the MBR of the second disk and boot from it instead
of from the network. Let's just clear the first 446 bytes of each disk.
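
(For illustration, clearing only the boot code area of a disk would look
something like

    dd if=/dev/zero of=/dev/sda bs=446 count=1 && sync

i.e. one 446-byte block of zeroes at offset 0 of each disk; the device name is
just an example, and this is a sketch rather than the exact astute/fuel-agent
code.)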


[0]
https://github.com/openstack/fuel-astute/blob/master/mcagents/erase_node.rb#L162-L174
[1]
https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.py#L194-L221


-- 
Dmitry Guryanov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] - Austin design summit sessions

2016-03-22 Thread Gal Sagie
Hello all,

We have started this etherpad [1] in order to assess the number of sessions
needed for Kuryr.

We have received 5 work sessions and 1 fishbowl session. [2]
Let's brainstorm and propose more ideas on the etherpad and start splitting
the subjects into sessions with priorities.

Anyone who wishes to volunteer and lead a specific session is more than
welcome to propose himself/herself on the etherpad.

[1] https://etherpad.openstack.org/p/kuryr-design-summit
[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/089467.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][fuel][kolla][osa][tripleo] proposing type:deployment

2016-03-22 Thread Thierry Carrez

Steven Dake (stdake) wrote:

Technical Committee,

Please accept my proposal of a new type of project called a deployment
[1].  If people don’t like the type name, we can change it.  The basic
idea is that there is a class of projects, unrepresented by type:service and
type:library, which are deployment projects, including but not limited to
Fuel, Kolla, OSA, and TripleO.  The main motivations behind this addition
are:

 1. Make it known to all which projects are deployment projects in the
governance repository.
 2. Provide that information via the governance website under release
management tags.
 3. Permit deployment projects to take part in the assert tags relating
to upgrades [2].


Currently fuel is listed as a type:service in the governance repository
which is only partially accurate.  It may provide a ReST API, but during
the Kolla big tent application process, we were told we couldn't use
type:service as it only applied to daemon services and not deployment
projects.


I agree that type:service is not really a good match for Fuel or Kolla, 
and we could definitely use something else -- that would make it a lot 
clearer what is what for the downstream consumers of the software we 
produce.


One issue is that tags are applied to deliverables, not project teams. 
For the Fuel team it's pretty clear (it would apply to their "fuel" 
deliverable). For Kolla team, I suspect it would apply to the "kolla" 
deliverable. But the TripleO team produces a collection of tools, so 
it's unclear which of those would be considered the main "deployment" 
thing.


For OSA, we don't produce the deployment tool, only a set of playbooks. 
I was thinking we might need a type:packaging tag to describe which 
things we produce are just about packaging OpenStack for use by 
outside deployment systems (Ansible, Puppet, Chef, Deb, RPM...). So I'm 
not sure your type:deployment tag would apply to OSA.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FFE for fuel-openstack-tasks and fuel-remove-conflict-openstack

2016-03-22 Thread Matthew Mosesohn
Andrew,

The stubs + deprecation warning is exactly the approach I believe we should
take for renaming/moving tasks.

If it were possible for a plugin to override a task but preserve the fields
from the original task, we could avoid such scenarios. What I mean is: take
the following task:

- id: workloads_collector_add
  type: puppet
  version: 2.0.0
  groups: [primary-controller]
  required_for: [deploy_end]
  requires: [keystone, primary-keystone]
  parameters:
    puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/keystone/workloads_collector_add.pp
    puppet_modules: /etc/puppet/modules
    timeout: 1800

If we could override only the groups field, a plugin developer would not
need to copy and paste the dependencies and other parameters. But until
that works, we should effectively deprecate top-level tasks whenever
possible.
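
For illustration only, here is a minimal sketch of the desired
field-level override behaviour, using the task above as sample data. This
is not an existing Fuel/Nailgun API; the function and variable names are
hypothetical:

# Hypothetical sketch of field-level task overrides: a plugin supplies only
# the fields it wants to change and inherits everything else from the
# original task definition. Not an existing Fuel API.
import copy


def merge_task_override(original, override):
    """Return a new task dict where only the overridden keys are replaced."""
    merged = copy.deepcopy(original)
    merged.update(override)
    return merged


original_task = {
    "id": "workloads_collector_add",
    "type": "puppet",
    "version": "2.0.0",
    "groups": ["primary-controller"],
    "required_for": ["deploy_end"],
    "requires": ["keystone", "primary-keystone"],
    "parameters": {
        "puppet_manifest": "/etc/puppet/modules/osnailyfacter/modular/"
                           "keystone/workloads_collector_add.pp",
        "puppet_modules": "/etc/puppet/modules",
        "timeout": 1800,
    },
}

# The plugin only changes where the task runs; dependencies are inherited.
plugin_override = {"groups": ["primary-controller", "controller"]}

print(merge_task_override(original_task, plugin_override)["groups"])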
On Mar 22, 2016 2:52 AM, "Andrew Woodward"  wrote:

> I've mocked up the change to implementation using the already landed
> changes to ceph as an example
>
> https://review.openstack.org/295571
>
> On Mon, Mar 21, 2016 at 3:44 PM Andrew Woodward  wrote:
>
>> We had originally planned for the FFEs for both fuel-openstack-tasks[1]
>> and fuel-remove-conflict-openstack to [2] to close on 3/20, This would have
>> placed them before changes that conflict with
>> fuel-refactor-osnailyfacter-for-puppet-master-compatibility [3].
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088297.html
>> [2]
>> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088298.html
>> [3]
>> http://lists.openstack.org/pipermail/openstack-dev/2016-March/089028.html
>>
>> However we found this morning that the changes from [2], and more of
>> issue [1] will result in further issues such as [4], where as the task
>> files move, any task that explicitly relied on it, now no longer is in the
>> same path.
>>
>> [4] https://review.openstack.org/#/c/295170/
>>
>> Due to this newly identified issue with backwards compatibility, it
>> appears that [4] shows we have plugins using interfaces that we don't
>> have formal coverage for, so if we introduce this set of changes, we will
>> cause breakage for plugins that use Fuel's current tasks.
>>
>> From a deprecation standpoint we don't have a way to deal with this,
>> unless  fuel-openstack-tasks [1] lands after
>> fuel-refactor-osnailyfacter-for-puppet-master-compatibility [3]. In this
>> case we can take advantage of the class include stubs, leaving a copy in
>> the old location (osnailyfacter/modular/roles/compute.pp) pointing to the
>> new include location (include openstack_tasks::roles::compute) and adding a
>> warning for deprecation. The tasks includes in the new location
>> openstack_tasks/examples/roles/compute.pp would simply include the updated
>> class location w/o the warning.
>>
>> This would take care of [1] and its review [5].
>>
>> [5] https://review.openstack.org/283332
>>
>> This still leaves [2] un-addressed, we still have 3 open CR for it:
>>
>> [6] Compute https://review.openstack.org/285567
>> [7] Cinder https://review.openstack.org/294736
>> [8] Swift https://review.openstack.org/294979
>>
>> Compute [6] is in good shape, while Cinder [7] and Swift [8] are not. For
>> these, do we want to continue to land them? If so, what do we want to do
>> about the now-deprecated openstack:: tasks? We could leave them in place
>> with a deprecation warning, since we would not be using them.
>>
>> --
>>
>> --
>>
>> Andrew Woodward
>>
>> Mirantis
>>
>> Fuel Community Ambassador
>>
>> Ceph Community
>>
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] propose EmilienM for core

2016-03-22 Thread Giulio Fidente

On 03/20/2016 07:22 PM, Dan Prince wrote:

I'd like to propose that we add Emilien Macchi to the TripleO core
review team. Emilien has been getting more involved with TripleO during
this last release. In addition to helping with various Puppet things, he
also has experience in building OpenStack installation tooling and
upgrades, and would bring a valuable perspective to the core team. He has
also added several new features around monitoring into instack-
undercloud.

Emilien is currently acting as the Puppet PTL. Adding him to the
TripleO core review team could help us move faster towards some of the
upcoming features like composable services, etc.

If you agree please +1. If there is no negative feedback I'll add him
next Monday.


+1 !


--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Contributor Awards

2016-03-22 Thread Tom Fifield

Reminder :)

We'll probably stop taking entries at the end of next week.

On 16/02/16 18:43, Tom Fifield wrote:

Hi all,

I'd like to introduce a new round of community awards handed out by the
Foundation, to be presented at the feedback session of the summit.

Nothing flashy or starchy - the idea is that these are to be a little
informal, quirky ... but still recognising the extremely valuable work
that we all do to make OpenStack excel.

There are so many different areas worthy of celebration, but we think that
there are a few main chunks of the community that need a little love:

* Those who might not be aware that they are valued, particularly new
contributors
* Those who are the active glue that binds the community together
* Those who share their hard-earned knowledge with others and mentor
* Those who challenge assumptions, and make us think

Since it's the first time (recently, at least), rather than starting with a
defined set of awards, we'd like to have submissions of names in those
broad categories. Then we'll have a little bit of fun on the back-end
and try to come up with something that isn't just your standard set of
award titles, and iterate to success ;)

The submission form is here, so please submit anyone who you think is
deserving of an award!



https://docs.google.com/forms/d/1HP1jAobT-s4hlqZpmxoGIGTxZmY6lCWolS3zOq8miDk/viewform




In the meantime, let's use this thread to discuss the fun part: goodies.
What do you think we should lavish award winners with? Soft toys?
Perpetual trophies? Baseball caps?


Regards,


Tom, on behalf of the Foundation team



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovn][ovn4nfv]

2016-03-22 Thread Gary Kotton
Hi,
Thanks for posting this. This is very interesting. I think that there are a 
number of different things to take into account here:

  1.  There is a service chaining project in Neutron 
https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining. Is the API 
there sufficient or different?
  2.  I do not think that adding an extension to OVN is the correct way to 
go here. That means that the community cannot share the common API (this is 
related to the point above). The plugin implementation details are specific to 
the networking solution.

Thanks
Gary

From: John McDowall
Reply-To: OpenStack List
Date: Monday, March 21, 2016 at 11:18 PM
To: OpenStack List
Subject: [openstack-dev] [networking-ovn][ovn4nfv]

All,

As a VNF vendor we have been looking at ways to enable customers to simply
scale up (and down) VNFs in complex virtual networks at scale. Our goal
is to help accelerate the deployment of SDN and VNFs and, more
specifically, to enable zero-trust security at scale for applications. This
requires the easy and fast deployment of Next-Generation Firewalls (and
other VNFs) into the traffic path of any application.

Over the last several weeks we have created a prototype that implements
a simple VNF insertion approach. Before we do additional work we have a
couple of questions for the community:

Questions

1. This approach has the advantage of being very simple and works with
existing VNFs; does it make sense to the community?
2. If it is of interest, how could it be improved and/or enhanced to make
it more useful and consumable?

Design Guidelines

At the start of the effort we created a set of design guidelines to
constrain the problem space.


* The goal is a Service Function Insertion (SFI) approach that is simpler
and easier to deploy than Service Function Chaining and is more applicable
to single-function insertion or very short chains.
* The initial design target is DC/enterprise environments where the
requirements are typically for insertion of a limited set of VNFs in
specific network locations to act on specific applications.
* Minimal changes to existing VNFs, ours and others.
* Make the solution open to all VNFs.
* Leverage bump-in-the-wire connectivity, as this does not require L2 or L3
knowledge/configuration in the VNF.
* Firewalls want to inspect/classify all traffic on a link, so
pre-classifying traffic beyond ACLs is not necessary.
* Deploy on standard infrastructure: OpenStack and Open vSwitch with
minimal changes.
* Work with virtualization, containers and physical devices seamlessly.
* Insert and remove security in seconds; one of the drivers of the
requirement for speed is container deployment.
* Simple to deploy and easy to debug is important; atomic insertion and
removal of a VNF is an important aspect of this.


Approach


We have developed a prototype, roughly following the ovn4nfv model proposed
by Vikram Dham and others in OPNFV. The prototype is built on Open vSwitch
2.5 and OpenStack Mitaka (development branch). I would like to stress this
is a prototype and not production code. My objective was to prove to myself
(and others) that the concept would work and then ask the community for
feedback on the level of interest and on how best to design a production
implementation.

I have called this effort service function insertion (SFI) to
differentiate it from service function chaining (SFC). This approach is
simpler than SFC and requires minimal or no changes to existing VNFs that
act as a bump in the wire, but it will probably not handle long, complex
chains or graphs. It can possibly handle chaining one or two VNFs in a
static manner, but I am not sure it could go beyond that. I am open to
suggestions on how to extend/improve it.

The traffic steering is implemented by inserting 2 ingress and 2 egress
rules in the ovn-nb pipeline at ingress stage 3. These rules have a higher
priority than the default rules. The changes to OVN and rules are listed
in the implementation section.

The control plane is implemented in both Open vSwitch and OpenStack. In
OpenStack there is a set of extension interfaces added to the
networking-ovn plugin. Both CLI and REST APIs are provided for OpenStack,
and a CLI for Open vSwitch.

The OVN model enables logical changes to the flow rules, and the OpenStack
Neutron plugin model allows the changes to be kept separate, as extensions
to the networking-ovn plugin. I have, however, violated a few boundaries
for expediency that would need to be fixed before this could be easily
deployed.

We are happy to contribute the code back to the community, but would like
to gauge the level of interest and solicit feedback on the approach. We
are open to any and all suggestions for improvements in both

[openstack-dev] [neutron][dvr]How to get the relationship of veth pair device?

2016-03-22 Thread Zhi Chang
hi, all.
   
In DVR mode, a veth pair is generated when I create a floating IP and
associate it with a VM. One device of the pair is in the fip namespace, the
other one in the qrouter namespace. How can I get the relationship between
the two veth peer devices?


I know the command "ethtool -S [device_name]" can get the peer interface
index. But if this interface is in a namespace, how do I do that?
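
One way to approach it, as a sketch (assuming root access and DVR
namespaces on the node; the namespace and device names below are
placeholders to replace with your own router/FIP IDs), is to run ethtool
inside the namespace with "ip netns exec", then match the reported
peer_ifindex against "ip -o link" output in the other namespace:

# Sketch: find the peer of a veth device across namespaces by matching the
# peer_ifindex reported by "ethtool -S" with the interface index shown by
# "ip -o link" in the other namespace. Names below are placeholders.
import re
import subprocess


def run_in_ns(namespace, cmd):
    """Run a command inside a network namespace and return its stdout."""
    return subprocess.check_output(
        ["ip", "netns", "exec", namespace] + cmd).decode()


def peer_ifindex(namespace, device):
    """Return the peer interface index of a veth device, or None."""
    stats = run_in_ns(namespace, ["ethtool", "-S", device])
    match = re.search(r"peer_ifindex:\s*(\d+)", stats)
    return int(match.group(1)) if match else None


def device_by_ifindex(namespace, ifindex):
    """Return the device name with the given index inside a namespace."""
    for line in run_in_ns(namespace, ["ip", "-o", "link"]).splitlines():
        index, name = line.split(":", 2)[:2]
        if int(index) == ifindex:
            return name.strip().split("@")[0]
    return None


# Placeholders: substitute the actual qrouter/fip namespaces and rfp device.
qrouter_ns = "qrouter-ROUTER_ID"
fip_ns = "fip-EXT_NET_ID"
rfp_device = "rfp-PORT_ID"

index = peer_ifindex(qrouter_ns, rfp_device)
print(device_by_ifindex(fip_ns, index))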


BTW, what's the meaning of "rfp" and "fpr"? 




Thanks
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev